[
{
"msg_contents": "Hi,\n\nAs we have many hooks in postgres that extends the postgres\nfunctionality, I'm wondering if it's a good idea to add a new\nfunction, say, pg_get_all_server_hooks, returning hook name, hook\ndeclaration and its current value (if there's any external module\nloaded implementing the hook). This basically helps to know at any\ngiven point of time what all the hooks are installed and the modules\nimplementing them. Imagine using this new function on a production\nserver with many supported extensions installed which might implement\nsome of the hooks.\n\nAlso, a dedicated page in the documentation listing out all the hooks,\ntheir declarations and a short description. This will help\ndevelopers/users to know the available hooks.\n\nOne problem is that the new function and doc page create an extra\nburden of keeping them up to date with the hooks modifications and new\nhook additions, but I think that can be taken care of in the review\nphases.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 4 May 2022 16:24:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add a new function and a document page to get/show all the server\n hooks"
},
{
"msg_contents": "On 04.05.22 12:54, Bharath Rupireddy wrote:\n> One problem is that the new function and doc page create an extra\n> burden of keeping them up to date with the hooks modifications and new\n> hook additions, but I think that can be taken care of in the review\n> phases.\n> \n> Thoughts?\n\nI think this has been proposed a number of times and rejected.\n\n\n\n",
"msg_date": "Wed, 4 May 2022 15:39:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a new function and a document page to get/show all the server\n hooks"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 04.05.22 12:54, Bharath Rupireddy wrote:\n>> One problem is that the new function and doc page create an extra\n>> burden of keeping them up to date with the hooks modifications and new\n>> hook additions, but I think that can be taken care of in the review\n>> phases.\n\n> I think this has been proposed a number of times and rejected.\n\nThe most recent such discussion was here:\n\nhttps://www.postgresql.org/message-id/flat/20201231032813.GQ13234%40fetter.org\n\nThe basic point was that there's a pretty low bar to creating a hook,\nbut the more infrastructure you want to have around hooks the harder\nit will be to add any ... and the less likely that the infrastructure\nwill be kept up-to-date.\n\nMy takeaway from that thread was that there could be support for\nminimalistic documentation, along the lines of a README file listing\nall the hooks. I'm not in favor of trying to do more than that\n--- in particular, the cost/benefit ratio for the function proposed\nhere seems to approach infinity.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 10:00:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add a new function and a document page to get/show all the server\n hooks"
}
]
[
{
"msg_contents": "Hi Team,\n\nwe experienced pg_upgrade failing when attempted to upgrade from 12.10 to 14.2 \n(on AWS RDS)\n\nWe had this aggregate:\n\nCREATE AGGREGATE public.array_accum(anyelement) ( SFUNC = array_append, STYPE \n= anyarray, INITCOND = '{}');\n\nSyntax in version 14 is changed (we didn't try to run version 13)\n\nWe solved it (in our case) dropping the aggregate before upgrade and re-create \nin using new syntax in V14:\n\nCREATE AGGREGATE public.array_accum(anycompatible) ( SFUNC = array_append, \nSTYPE = anycompatiblearray, INITCOND = '{}');\n\nbut pg_upgrade shouldn't fail on this.\n\nI hope it can help to improve pg_upgrade process.\n\nThanks.\n\n--\nBest\nPetr\n\n\n\n\n",
"msg_date": "Wed, 04 May 2022 15:05:32 +0200",
"msg_from": "Petr Vejsada <pve@paymorrow.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Wed, May 4, 2022 at 7:29 AM Petr Vejsada <pve@paymorrow.com> wrote:\n\n> We solved it (in our case) dropping the aggregate before upgrade and\n> re-create\n> in using new syntax in V14:\n>\n> but pg_upgrade shouldn't fail on this.\n>\n> I hope it can help to improve pg_upgrade process.\n>\n>\nThe release notes say explicitly that one needs to drop and recreate the\naffected functions. Thus, we know about the issue and to date our best\nsolution is to have the user do exactly what you did (i.e., it is not\nsomething pg_upgrade is going to do for you). If you have an alternative\nsolution to suggest that would help.\n\nhttps://www.postgresql.org/docs/current/release-14.html : the first\ncompatibility note\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 07:34:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Wed, May 04, 2022 at 07:34:15AM -0700, David G. Johnston wrote:\n> On Wed, May 4, 2022 at 7:29 AM Petr Vejsada <pve@paymorrow.com> wrote:\n> > We solved it (in our case) dropping the aggregate before upgrade and\n> > re-create in using new syntax in V14:\n> >\n> > but pg_upgrade shouldn't fail on this.\n> >\n> > I hope it can help to improve pg_upgrade process.\n>\n> The release notes say explicitly that one needs to drop and recreate the\n> affected functions. Thus, we know about the issue and to date our best\n> solution is to have the user do exactly what you did (i.e., it is not\n> something pg_upgrade is going to do for you). If you have an alternative\n> solution to suggest that would help.\n> \n> https://www.postgresql.org/docs/current/release-14.html : the first\n> compatibility note\n\nDavid is right that this is documented as a compatibility issue.\n\nBut Petr has a point - pg_upgrade should aspire to catch errors in --check,\nrather than starting and then leaving a mess behind for the user to clean up\n(remove existing dir, rerun initdb, start old cluster, having first moved the\ndir back into place if you moved it out of the way as I do). This can take\nextra minutes, and exacerbates any other problem one encounters.\n\n$ ./tmp_install/usr/local/pgsql/bin/pg_upgrade -d pg95.dat -D pg15.dat -b /usr/lib/postgresql/9.5/bin\n...\nRestoring global objects in the new cluster ok\nRestoring database schemas in the new cluster\n postgres\n*failure*\n\nConsult the last few lines of \"pg15.dat/pg_upgrade_output.d/20220610T104419.303/log/pg_upgrade_dump_12455.log\" for\nthe probable cause of the failure.\n\npg_restore: error: could not execute query: ERROR: function array_append(anyarray, anyelement) does not exist\nCommand was: CREATE AGGREGATE \"public\".\"array_accum\"(\"anyelement\") (\n SFUNC = \"array_append\",\n STYPE = \"anyarray\",\n INITCOND = '{}'\n);\n\nThis patch catches the issue; the query needs to be reviewed.\n\n SELECT pn.nspname, p.proname FROM pg_proc p\n JOIN pg_aggregate a ON a.aggfnoid=p.oid \n JOIN pg_proc q ON q.oid=a.aggtransfn \n JOIN pg_namespace pn ON pn.oid=p.pronamespace \n JOIN pg_namespace qn ON qn.oid=q.pronamespace \n WHERE pn.nspname != 'pg_catalog' AND qn.nspname = 'pg_catalog' \n AND 'anyelement'::regtype = ANY(q.proargtypes) \n AND q.proname IN ('array_append', 'array_prepend', 'array_cat', 'array_position', 'array_positions', 'array_remove', 'array_replace', 'width_bucket');",
"msg_date": "Tue, 14 Jun 2022 18:09:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> But Petr has a point - pg_upgrade should aspire to catch errors in --check,\n> rather than starting and then leaving a mess behind for the user to clean up\n\nAgreed; pg_upgrade has historically tried to find problems similar to\nthis. However, it's not just aggregates that are at risk. People\nmight also have built user-defined plain functions, or operators,\natop these functions. How far do we want to go in looking?\n\nAs for the query, I think it could be simplified quite a bit by\nrelying on regprocedure literals, that is something like\n\nWHERE ... a.aggtransfn IN\n ('array_append(anyarray,anyelement)'::regprocedure,\n 'array_prepend(anyelement,anyarray)'::regprocedure,\n ...)\n\nNot sure if it's necessary to stick explicit \"pg_catalog.\" schema\nqualifications into this --- IIRC pg_upgrade runs with restrictive\nsearch_path, so that this would be safe as-is.\n\nAlso, I think you need to check aggfinalfn too.\n\nAlso, I'd be inclined to reject system-provided objects by checking\nfor OID >= 16384 rather than hard-wiring assumptions about things\nbeing in pg_catalog or not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Jun 2022 15:32:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Wed, Jun 15, 2022 at 03:32:04PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > But Petr has a point - pg_upgrade should aspire to catch errors in --check,\n> > rather than starting and then leaving a mess behind for the user to clean up\n> \n> Agreed; pg_upgrade has historically tried to find problems similar to\n> this. However, it's not just aggregates that are at risk. People\n> might also have built user-defined plain functions, or operators,\n> atop these functions. How far do we want to go in looking?\n\nI wasn't yet able to construct a user-defined function which fails to reload.\n\n> As for the query, I think it could be simplified quite a bit by\n> relying on regprocedure literals, that is something like\n\nYes, thanks.\n\n> Also, I think you need to check aggfinalfn too.\n\nDone but maybe needs more cleanup.\n\n> Also, I'd be inclined to reject system-provided objects by checking\n> for OID >= 16384 rather than hard-wiring assumptions about things\n> being in pg_catalog or not.\n\nTo me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n\nThis patch also resolves an issue with PQfinish()/dangling connections.\n\n-- \nJustin",
"msg_date": "Thu, 16 Jun 2022 21:01:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Also, I'd be inclined to reject system-provided objects by checking\n> > for OID >= 16384 rather than hard-wiring assumptions about things\n> > being in pg_catalog or not.\n>\n> To me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n\nExtensions can be installed into pg_catalog, but they can't get\nlow-numbered OIDs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 Jun 2022 09:07:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Fri, Jun 17, 2022 at 3:07 PM Robert Haas <robertmhaas@gmail.com>\nwrote:\n\n> On Thu, Jun 16, 2022 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> > > Also, I'd be inclined to reject system-provided objects by checking\n> > > for OID >= 16384 rather than hard-wiring assumptions about things\n> > > being in pg_catalog or not.\n> >\n> > To me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n>\n> Extensions can be installed into pg_catalog, but they can't get\n> low-numbered OIDs.\n>\n\nyes\n\nUnfortunately, I did it in Orafce\n\nRegards\n\nPavel\n\n\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n>",
"msg_date": "Fri, 17 Jun 2022 15:30:15 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jun 16, 2022 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> To me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n\n> Extensions can be installed into pg_catalog, but they can't get\n> low-numbered OIDs.\n\nExactly. (To be clear, I had in mind writing something involving\nFirstNormalObjectId, not that you should put literal \"16384\" in the\ncode.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Jun 2022 10:14:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Fri, Jun 17, 2022 at 10:14:13AM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jun 16, 2022 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> To me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n> \n> > Extensions can be installed into pg_catalog, but they can't get\n> > low-numbered OIDs.\n> \n> Exactly. (To be clear, I had in mind writing something involving\n> FirstNormalObjectId, not that you should put literal \"16384\" in the\n> code.)\n\nActually, 16384 is already used in two other places in check.c, so ...\ndone like that for consistency.\nAlso fixes parenthesis, typos, and renames vars.",
"msg_date": "Wed, 22 Jun 2022 18:58:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "\n\n> On 23 Jun 2022, at 04:58, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Fri, Jun 17, 2022 at 10:14:13AM -0400, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Thu, Jun 16, 2022 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>>> To me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n>> \n>>> Extensions can be installed into pg_catalog, but they can't get\n>>> low-numbered OIDs.\n>> \n>> Exactly. (To be clear, I had in mind writing something involving\n>> FirstNormalObjectId, not that you should put literal \"16384\" in the\n>> code.)\n> \n> Actually, 16384 is already used in two other places in check.c, so ...\n\nYes, but it's a third copy of the comment (\"* The query below hardcodes FirstNormalObjectId as 16384 rather than\") across the file.\n\nAlso, we can return slightly more information about found objects. For example, operator will look like \"operator: ||\". At least we can get nspname and oid. And, maybe return type for aggregator and leftarg\\rightarg types for operator?\n\nBTW comment /* Before v11, used proisagg=true, and afterwards uses prokind='a' */ seems interesting, but irrelevant. We join with pg_aggregate anyway.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n\n",
"msg_date": "Fri, 24 Jun 2022 23:43:18 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 11:43:18PM +0500, Andrey Borodin wrote:\n> > On 23 Jun 2022, at 04:58, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > \n> > On Fri, Jun 17, 2022 at 10:14:13AM -0400, Tom Lane wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> On Thu, Jun 16, 2022 at 10:01 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >>>> To me, oid>=16384 seems more hard-wired than namespace!='pg_catalog'.\n> >> \n> >>> Extensions can be installed into pg_catalog, but they can't get\n> >>> low-numbered OIDs.\n> >> \n> >> Exactly. (To be clear, I had in mind writing something involving\n> >> FirstNormalObjectId, not that you should put literal \"16384\" in the\n> >> code.)\n> > \n> > Actually, 16384 is already used in two other places in check.c, so ...\n> \n> Yes, but it's a third copy of the comment (\"* The query below hardcodes FirstNormalObjectId as 16384 rather than\") across the file.\n> \n> Also, we can return slightly more information about found objects. For example, operator will look like \"operator: ||\". At least we can get nspname and oid. And, maybe return type for aggregator and leftarg\\rightarg types for operator?\n\nBut what I wrote already shows what you want.\n\nIn database: postgres\n aggregate: public.array_accum(anyelement)\n operator: public.!@#(anyarray,anyelement)\n\nIn my testing, this works great - it shows what you need to put in your DROP\ncommand. If you try it and still wanted the OID, I'll add it for consistency\nwith check_for_user_defined_{encoding_conversions,postfix_ops}\n\n> BTW comment /* Before v11, used proisagg=true, and afterwards uses prokind='a' */ seems interesting, but irrelevant. We join with pg_aggregate anyway.\n\nYes, that's why the query doesn't need to include that.\n\nSomething is broken in my old clusters and I can't test all the upgrades right\nnow, but this is my latest.",
"msg_date": "Fri, 24 Jun 2022 15:28:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "\n\n> On 25 Jun 2022, at 01:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> But what I wrote already shows what you want.\nJust tested that, you are right. My version was printing name, I didn't know regproc prints so nice definition.\n\n> this is my latest.\n> <0001-WIP-pg_upgrade-check-detect-old-polymorphics-from-pr.patch>\n\nLet's rename \"databases_with_old_polymorphics.txt\" to something like \"old_polymorphics.txt\" or maybe even \"incompatible_polymorphics_usage.txt\"?\nI think you will come up with a better name, my point is here everything is in \"databases\", and \"old\" doesn't describe the essence of the problem.\n\nAlso, let's check that oid of used functions belong to system catalog (<16384)? We don't care about user-defined functions with the same name.\n\nAnd, probably, we can do this unconditionally:\nif (old_cluster.major_version >= 9500)\n appendPQExpBufferStr(&old_polymorphics,\nNothing bad will happen if we blacklist usage of nonexistent functions. I see there's a lot of code to have a dynamic list, if you think this exclusion for pre-9.5 is justified - OK, from my POV we can keep this code.\n\nThis comment is unneeded too:\n// \"AND aggtranstype='anyarray'::regtype\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n\n\n\n",
"msg_date": "Sat, 25 Jun 2022 15:34:49 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Sat, Jun 25, 2022 at 03:34:49PM +0500, Andrey Borodin wrote:\n> > On 25 Jun 2022, at 01:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > this is my latest.\n> > <0001-WIP-pg_upgrade-check-detect-old-polymorphics-from-pr.patch>\n> \n> Let's rename \"databases_with_old_polymorphics.txt\" to somthing like \"old_polymorphics.txt\" or maybe even \"incompatible_polymorphics_usage.txt\"?\n> I think you will come up with a better name, my point is here everythin is in \"databases\", and \"old\" doesn't describe essence of the problem.\n\n> Also, let's check that oid of used functions belong to system catalog (<16384)? We don't care about user-defined functions with the same name.\n\nRight now, we test\n =ANY(ARRAY['array_remove(anyarray,anyelement)',...]::regprocedure)\n\n..which will find the system's array_remove, and not some other one, due to\nALWAYS_SECURE_SEARCH_PATH_SQL (which is also why ::regprocedure prints a\nnamespace for the non-system functions we're interested in displaying).\n\nI had \"transnsp.nspname='pg_catalog'\", which was redundant, so I removed it.\n\nI tested that this allows upgrades with aggregates on top of non-system\nfunctions of the same name/args:\n\npostgres=# CREATE FUNCTION array_append(anyarray, anyelement) RETURNS ANYARRAY LANGUAGE SQL AS $$ $$;\npostgres=# CREATE AGGREGATE foo(anyelement) (sfunc=public.array_append, stype=anyarray, initcond='{}');\n\n> And, probably, we can do this unconditionally:\n> if (old_cluster.major_version >= 9500)\n> appendPQExpBufferStr(&old_polymorphics,\n> Nothing bad will happen if we blacklist usage of nonexistent functions.\n\nNope, it's as I said: this would break pg_upgrade from older versions.\n\n> I realized that my latest patch would break upgrades from old servers, which do\n> not have array_position/s nor width_bucket, so ::regprocedure would fail. Maybe\n> Andrey's way is better (checking proname rather than its OID).\n\nThis fixes several errors with the version test.\n\n-- \nJustin",
"msg_date": "Mon, 27 Jun 2022 18:30:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "\n\n> On 28 Jun 2022, at 04:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> Nope, it's as I said: this would break pg_upgrade from older versions.\n\nAs far as I understand 9.5 is not supported. Probably, it makes sense to keep pg_upgrade running against 9.5 clusters, but I'm not sure if we do this routinely.\n\nBesides this the patch seems to be RfC.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 29 Jun 2022 22:58:44 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 10:58:44PM +0500, Andrey Borodin wrote:\n> > On 28 Jun 2022, at 04:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > \n> > Nope, it's as I said: this would break pg_upgrade from older versions.\n> \n> As far as I understand 9.5 is not supported. Probably, it makes sense to keep pg_upgrade running against 9.5 clusters, but I'm not sure if we do this routinely.\n\nAs of last year, there's a reasonably clear policy for support of old versions:\n\nhttps://www.postgresql.org/docs/devel/pgupgrade.html\n|pg_upgrade supports upgrades from 9.2.X and later to the current major release of PostgreSQL, including snapshot and beta releases.\n\nSee: e469f0aaf3c586c8390bd65923f97d4b1683cd9f\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 29 Jun 2022 13:07:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "\n\n> On 29 Jun 2022, at 23:07, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> On Wed, Jun 29, 2022 at 10:58:44PM +0500, Andrey Borodin wrote:\n>>> On 28 Jun 2022, at 04:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>> \n>>> Nope, it's as I said: this would break pg_upgrade from older versions.\n>> \n>> As far as I understand 9.5 is not supported. Probably, it makes sense to keep pg_upgrade running against 9.5 clusters, but I'm not sure if we do this routinely.\n> \n> As of last year, there's a reasonably clear policy for support of old versions:\n> \n> https://www.postgresql.org/docs/devel/pgupgrade.html\n> |pg_upgrade supports upgrades from 9.2.X and later to the current major release of PostgreSQL, including snapshot and beta releases.\nThis makes sense, thank you for clarification.\n\nThe patch is marked WiP, what is in progress as of now?\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 29 Jun 2022 23:43:00 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> The patch is marked WiP, what is in progress as of now?\n\nIt looks about ready to me. Pushed with some minor cosmetic\nadjustments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 13:07:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 01:07:48PM -0400, Tom Lane wrote:\n> It looks about ready to me. Pushed with some minor cosmetic\n> adjustments.\n\ncrake and drongo look unhappy after that, as of the upgrade from 9.6:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=crake&dt=2022-07-05%2020%3A48%3A21\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2022-07-05%2019%3A06%3A04\n\nChecking for incompatible polymorphic functions fatal\nThe dumps used by the buildfarm may need some adjustments, it seems.\n--\nMichael",
"msg_date": "Wed, 6 Jul 2022 10:50:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> crake and drongo look unhappy after that, as of the upgrade from 9.6:\n\nYeah. I think that 08385ed26 fixes this, but we've had no new\nreports yet :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 23:29:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 11:29:03PM -0400, Tom Lane wrote:\n> Yeah. I think that 08385ed26 fixes this, but we've had no new\n> reports yet :-(\n\nIndeed. Things are right now. Thanks!\n--\nMichael",
"msg_date": "Thu, 7 Jul 2022 14:04:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade (12->14) fails on aggregate"
}
]
[
{
"msg_contents": "Hey,\n\nIs there a thread I'm not finding where the upcoming JSON function\ndocumentation is being made reasonably usable after doubling its size with\nall the new JSON Table features that we've added? If nothing else, the\ntable of contents at the top of the page needs to be greatly expanded to\nmake seeing and navigating to all that is available a possibility.\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 08:32:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Is there a thread I'm not finding where the upcoming JSON function\n> documentation is being made reasonably usable after doubling its size with\n> all the new JSON Table features that we've added? If nothing else, the\n> table of contents at the top of the page needs to be greatly expanded to\n> make seeing and navigating to all that is available a possibility.\n\nThe entire structure of that text needs to be rethought, IMO, as it\nhas been written with precisely no concern for fitting into our\nhard-won structure for func.sgml. Andrew muttered something about\nrewriting it awhile ago, but I don't know what progress he's made.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 11:39:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "On Wed, May 04, 2022 at 08:32:51AM -0700, David G. Johnston wrote:\n> Hey,\n> \n> Is there a thread I'm not finding where the upcoming JSON function\n> documentation is being made reasonably usable after doubling its size with\n> all the new JSON Table features that we've added? If nothing else, the\n> table of contents at the top of the page needs to be greatly expanded to\n> make seeing and navigating to all that is available a possibility.\n\nhttps://www.postgresql.org/message-id/20220411160905.GH26620@telsasoft.com\n\n\n",
"msg_date": "Wed, 4 May 2022 10:42:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "On Wed, May 4, 2022 at 8:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > Is there a thread I'm not finding where the upcoming JSON function\n> > documentation is being made reasonably usable after doubling its size\n> with\n> > all the new JSON Table features that we've added? If nothing else, the\n> > table of contents at the top of the page needs to be greatly expanded to\n> > make seeing and navigating to all that is available a possibility.\n>\n> The entire structure of that text needs to be rethought, IMO, as it\n> has been written with precisely no concern for fitting into our\n> hard-won structure for func.sgml. Andrew muttered something about\n> rewriting it awhile ago, but I don't know what progress he's made.\n>\n>\nI suppose regardless of the answer, or which thread is used for the patch,\nthe question at hand is whether this is problematic enough to warrant an\nopen item. I would lean toward yes, we can decide how much reworking is\nconsidered sufficient to clear the open item separately.\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 08:44:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "\nOn 2022-05-04 We 11:39, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> Is there a thread I'm not finding where the upcoming JSON function\n>> documentation is being made reasonably usable after doubling its size with\n>> all the new JSON Table features that we've added? If nothing else, the\n>> table of contents at the top of the page needs to be greatly expanded to\n>> make seeing and navigating to all that is available a possibility.\n> The entire structure of that text needs to be rethought, IMO, as it\n> has been written with precisely no concern for fitting into our\n> hard-won structure for func.sgml. Andrew muttered something about\n> rewriting it awhile ago, but I don't know what progress he's made.\n>\n\nYes, I've been clearing the decks a bit, but I'm working on it now,\nshould have something within the next week.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 4 May 2022 15:14:58 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "\nOn 2022-05-04 We 15:14, Andrew Dunstan wrote:\n> On 2022-05-04 We 11:39, Tom Lane wrote:\n>> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>>> Is there a thread I'm not finding where the upcoming JSON function\n>>> documentation is being made reasonably usable after doubling its size with\n>>> all the new JSON Table features that we've added? If nothing else, the\n>>> table of contents at the top of the page needs to be greatly expanded to\n>>> make seeing and navigating to all that is available a possibility.\n>> The entire structure of that text needs to be rethought, IMO, as it\n>> has been written with precisely no concern for fitting into our\n>> hard-won structure for func.sgml. Andrew muttered something about\n>> rewriting it awhile ago, but I don't know what progress he's made.\n>>\n> Yes, I've been clearing the decks a bit, but I'm working on it now,\n> should have something within the next week.\n\n\n\nRunning slightly long on this. Will definitely post a patch by COB Friday.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 May 2022 17:45:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "On 2022-05-10 Tu 17:45, Andrew Dunstan wrote:\n> On 2022-05-04 We 15:14, Andrew Dunstan wrote:\n>> On 2022-05-04 We 11:39, Tom Lane wrote:\n>>> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>>>> Is there a thread I'm not finding where the upcoming JSON function\n>>>> documentation is being made reasonably usable after doubling its size with\n>>>> all the new JSON Table features that we've added? If nothing else, the\n>>>> table of contents at the top of the page needs to be greatly expanded to\n>>>> make seeing and navigating to all that is available a possibility.\n>>> The entire structure of that text needs to be rethought, IMO, as it\n>>> has been written with precisely no concern for fitting into our\n>>> hard-won structure for func.sgml. Andrew muttered something about\n>>> rewriting it awhile ago, but I don't know what progress he's made.\n>>>\n>> Yes, I've been clearing the decks a bit, but I'm working on it now,\n>> should have something within the next week.\n>\n>\n> Running slightly long on this. Will definitely post a patch by COB Friday.\n>\n>\n\nNot done yet but here's where I'm at. If I'm on the wrong track or\nmissing things that should be done please let me know.\n\nI got rid of all the sub-sub-sub-sections, and put most of the functions\ninto tables like most other function sections. I added indexterm entries\nliberally, and removed a deal of repetitive text. I put json_table in\nits own subsection, because it's big enough and important enough, I\nthink. I reworked some of its docco, particularly around joining of\nsibling rows and PLAN DEFAULT, but there's a deal of work still to do to\nwhip it into shape, which I will continue to do over the weekend.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 13 May 2022 21:37:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": ">\n> Not done yet but here's where I'm at. If I'm on the wrong track or\n> missing things that should be done please let me know.\n\n> [sqljson-dox-rework.patch] \n\n\nHere are a few errors/typos/improvements.\n\nI've added (=copied from the old docs) the CREATE TABLE for the my_films \ntable so that the more complicated json_table examples can be run easily.\n\n\nErik Rijkers\n\n\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com",
"msg_date": "Sat, 14 May 2022 08:17:05 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "On 2022-05-14 Sa 02:17, Erik Rijkers wrote:\n>>\n>> Not done yet but here's where I'm at. If I'm on the wrong track or\n>> missing things that should be done please let me know.\n>\n>> [sqljson-dox-rework.patch] \n>\n>\n> Here are a few errors/typos/improvements.\n>\n> I've added (=copied from the old docs) the CREATE TABLE for the\n> my_films table so that the more complicated json_table examples can be\n> run easily.\n>\n>\n>\n\nThanks. I have incorporated all of these, added a result for the last\njson_table example, and done some more wordsmithing around PLAN DEFAULT.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 16 May 2022 10:49:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "Op 16-05-2022 om 16:49 schreef Andrew Dunstan:\n\n> [sqljson-dox-rework-2.patch]\n\nTwo issues, derived from func.sgml:\n\n-----\n1.\n\nI noticed that some json functions, for instance json_object(), in their \noutput insert unexpected spaces before the separator-colon:\n\ntestdb=# select json_object('{a, 1, b, \"def\", c, 3.5}');\n\n json_object\n---------------------------------------\n {\"a\" : \"1\", \"b\" : \"def\", \"c\" : \"3.5\"}\n(1 row)\n\ninstead of the expected\n {\"a\": \"1\", \"b\": \"def\", \"c\": \"3.5\"}\n\nOf course not outright wrong but wouldn't it make more sense to \nnormalize such output? There is here no reason in the input to space \nthe colon on both sides.\n\nFunctions that yield this peculiarly spaced output are:\n json_object\n json_objectagg\n json_build_object\n\n-----\n2.\n\nThis example in func.sgml says it gives 't' but on my instance it \nreturns 'f'. Is the example correct?\n\njsonb_path_exists_tz('[\"2015-08-01 12:00:00 -05\"]', '$[*] ? \n(@.datetime() < \"2015-08-02\".datetime())') → t\n\n\nThanks,\n\nErik\n\n\n> andrew\n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 May 2022 19:52:43 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "\nOn 2022-05-16 Mo 13:52, Erik Rijkers wrote:\n> Op 16-05-2022 om 16:49 schreef Andrew Dunstan:\n>\n>> [sqljson-dox-rework-2.patch]\n>\n> Two issues, derived from func.sgml:\n>\n> -----\n> 1.\n>\n> I noticed that some json functions, for instance json_object(), in\n> their output insert unexpected spaces before the separator-colon:\n>\n> testdb=# select json_object('{a, 1, b, \"def\", c, 3.5}');\n>\n> json_object\n> ---------------------------------------\n> {\"a\" : \"1\", \"b\" : \"def\", \"c\" : \"3.5\"}\n> (1 row)\n>\n> instead of the expected\n> {\"a\": \"1\", \"b\": \"def\", \"c\": \"3.5\"}\n>\n> Of course not outright wrong but wouldn't it make more sense to\n> normalize such output? There is here no reason in the input to space\n> the colon on both sides.\n>\n> Functions that yield this peculiarly spaced output are:\n> json_object\n> json_objectagg\n> json_build_object\n>\nWell, yes, possibly, but don't think we're going to change the behavior\nnow, it might break things.\n\n\n> -----\n> 2.\n>\n> This example in func.sgml says it gives 't' but on my instance it\n> returns 'f'. Is the example correct?\n>\n> jsonb_path_exists_tz('[\"2015-08-01 12:00:00 -05\"]', '$[*] ?\n> (@.datetime() < \"2015-08-02\".datetime())') → t\n\n\n\nYeah, it doesn't like the format of the timestamp literal. It works with\n\"2015-08-01T12:00:0 -05\". I'll fix the example in the next version.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 16 May 2022 14:53:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "\nOn 2022-05-16 Mo 14:53, Andrew Dunstan wrote:\n> On 2022-05-16 Mo 13:52, Erik Rijkers wrote:\n>> Op 16-05-2022 om 16:49 schreef Andrew Dunstan:\n>>\n>>> [sqljson-dox-rework-2.patch]\n>> Two issues, derived from func.sgml:\n>>\n>> -----\n>> 1.\n>>\n>> I noticed that some json functions, for instance json_object(), in\n>> their output insert unexpected spaces before the separator-colon:\n>>\n>> testdb=# select json_object('{a, 1, b, \"def\", c, 3.5}');\n>>\n>> json_object\n>> ---------------------------------------\n>> {\"a\" : \"1\", \"b\" : \"def\", \"c\" : \"3.5\"}\n>> (1 row)\n>>\n>> instead of the expected\n>> {\"a\": \"1\", \"b\": \"def\", \"c\": \"3.5\"}\n>>\n>> Of course not outright wrong but wouldn't it make more sense to\n>> normalize such output? There is here no reason in the input to space\n>> the colon on both sides.\n>>\n>> Functions that yield this peculiarly spaced output are:\n>> json_object\n>> json_objectagg\n>> json_build_object\n>>\n> Well, yes, possibly, but don't think we're going to change the behavior\n> now, it might break things.\n>\n>\n>> -----\n>> 2.\n>>\n>> This example in func.sgml says it gives 't' but on my instance it\n>> returns 'f'. Is the example correct?\n>>\n>> jsonb_path_exists_tz('[\"2015-08-01 12:00:00 -05\"]', '$[*] ?\n>> (@.datetime() < \"2015-08-02\".datetime())') → t\n>\n>\n> Yeah, it doesn't like the format of the timestamp literal. It works with\n> \"2015-08-01T12:00:0 -05\". I'll fix the example in the next version.\n\n\n\nOr rather \"2015-08-01T12:00:00-05\"\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 16 May 2022 15:02:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "Op 16-05-2022 om 20:53 schreef Andrew Dunstan:\n> \n> On 2022-05-16 Mo 13:52, Erik Rijkers wrote:\n>> -----\n>> 2.\n>>\n>> This example in func.sgml says it gives 't' but on my instance it\n>> returns 'f'. Is the example correct?\n>>\n>> jsonb_path_exists_tz('[\"2015-08-01 12:00:00 -05\"]', '$[*] ?\n>> (@.datetime() < \"2015-08-02\".datetime())') → t\n> \n> Yeah, it doesn't like the format of the timestamp literal. It works with\n> \"2015-08-01T12:00:0 -05\". I'll fix the example in the next version.\n\nThat doesn't work either, in my hands. It seems the offending \nchracteristic is the presence of the second space, before -05\n\n\n> cheers\n> \n> \n> andrew\n> \n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n\n\n",
"msg_date": "Mon, 16 May 2022 21:06:59 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "On 2022-May-16, Andrew Dunstan wrote:\n\n> Thanks. I have incorporated all of these, added a result for the last\n> json_table example, and done some more wordsmithing around PLAN DEFAULT.\n\nFor sure this is a big improvement, thanks. No longer do we have to\nrefer to section 9.16.3.2.2.3 -- that's in table 9.53 now.\n\nI noticed that after applying it, the (some?) synopses end up partly\ntypeset with regular typeface rather than fixed-width, which looks a bit\nodd. I think you need some <literal> tags around keywords and\n<parameter> around the parameters to those; that's how\n<function>overlay</function> does it for example for its\nPLACING/FROM/FOR weird SQL bits.\n\nThanks!\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 May 2022 21:33:28 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "On 2022-05-16 Mo 15:33, Alvaro Herrera wrote:\n> On 2022-May-16, Andrew Dunstan wrote:\n>\n>> Thanks. I have incorporated all of these, added a result for the last\n>> json_table example, and done some more wordsmithing around PLAN DEFAULT.\n> For sure this is a big improvement, thanks. No longer do we have to\n> refer to section 9.16.3.2.2.3 -- that's in table 9.53 now.\n>\n> I noticed that after applying it, the (some?) synopses end up partly\n> typeset with regular typeface rather than fixed-width, which looks a bit\n> odd. I think you need some <literal> tags around keywords and\n> <parameter> around the parameters to those; that's how\n> <function>overlay</function> does it for example for its\n> PLACING/FROM/FOR weird SQL bits.\n>\n> Thanks!\n\n\n\nThanks for reviewing. Here's an updated version that I hope addresses\nyour comments. I'm going on vacation for 10 days tomorrow, but I'm\nhoping to commit this before I leave.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 18 May 2022 21:30:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "> + same level are considerd to be <firstterm>siblings</firstterm>,\n\nconsidered\n\n> + <productname>PostgreSQL</productname> specific functions detailed in\n\npostgresql hyphen specific (as in the original)\n\nThese would all be easier to read with commas:\n\n+ <parameter>expression</parameter> is NULL an\n+ If <literal>WITH UNIQUE</literal> is specified the\n+ If the input is NULL an SQL NULL is returned. If the input is a number\n+ If <literal>WITH UNIQUE</literal> is specified there must not\n+ is <literal>strict</literal> an error is generated if it yields no items.\n\nThere's a few instances of \"space space\" that could be condensed:\n\n+ <function>json_scalar</function> functions, this needs to be either <type>json</type> or\n+ which must be a SELECT query returning a single column. If\n\n\n",
"msg_date": "Thu, 19 May 2022 07:19:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
},
{
"msg_contents": "\nOn 2022-05-19 Th 08:19, Justin Pryzby wrote:\n>> + same level are considerd to be <firstterm>siblings</firstterm>,\n> considered\n>\n>> + <productname>PostgreSQL</productname> specific functions detailed in\n> postgresql hyphen specific (as in the original)\n>\n> These would all be easier to read with commas:\n>\n> + <parameter>expression</parameter> is NULL an\n> + If <literal>WITH UNIQUE</literal> is specified the\n> + If the input is NULL an SQL NULL is returned. If the input is a number\n> + If <literal>WITH UNIQUE</literal> is specified there must not\n> + is <literal>strict</literal> an error is generated if it yields no items.\n>\n> There's a few instances of \"space space\" that could be condensed:\n>\n> + <function>json_scalar</function> functions, this needs to be either <type>json</type> or\n> + which must be a SELECT query returning a single column. If\n\n\n\nThanks, pushed now.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 19 May 2022 10:50:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: JSON Functions and Operators Docs for v15"
}
] |
[
{
"msg_contents": "I discovered while poking at an LDAP problem that our TAP tests are\n100% reproducibly capable of ignoring server crashes and reporting\nsuccess anyway. This problem occurs if the postmaster doesn't get\nthe child failure report until after shutdown has been initiated,\nin which case you find something like this in the postmaster log:\n\n2022-05-04 12:01:33.946 EDT [57945] [unknown] LOG: connection received: host=[local]\n2022-05-04 12:01:33.995 EDT [57945] [unknown] LOG: connection authenticated: identity=\"uid=test1,dc=example,dc=net\" method=ldap (/Users/tgl/pgsql/src/test/ldap/tmp_check/t_001_auth_node_data/pgdata/pg_hba.conf:1)\n2022-05-04 12:01:33.995 EDT [57945] [unknown] LOG: connection authorized: user=test1 database=postgres application_name=001_auth.pl\n2022-05-04 12:01:33.998 EDT [57945] 001_auth.pl LOG: statement: SELECT $$connected with user=test1$$\n2022-05-04 12:01:34.003 EDT [57937] LOG: received fast shutdown request\n2022-05-04 12:01:34.003 EDT [57937] LOG: aborting any active transactions\n2022-05-04 12:01:34.003 EDT [57937] LOG: background worker \"logical replication launcher\" (PID 57943) exited with exit code 1\n2022-05-04 12:01:35.750 EDT [57937] LOG: server process (PID 57945) was terminated by signal 11: Segmentation fault: 11\n2022-05-04 12:01:35.751 EDT [57937] LOG: terminating any other active server processes\n2022-05-04 12:01:35.751 EDT [57937] LOG: abnormal database system shutdown\n2022-05-04 12:01:35.751 EDT [57937] LOG: database system is shut down\n\nOur TAP scripts don't notice the \"abnormal database system shutdown\",\nso it looks like things have passed. It's even worse when a script\ndemands an immediate shutdown, because then the postmaster won't wait\naround for the child status.\n\nIf you have core dumps enabled, that adds some time before the child\nexit status is delivered, making this scenario extremely probable.\nI'm finding that src/test/ldap reports success 100% reproducibly\nafter doing \"ulimit -c unlimited\", even though four backend core dumps\nare produced during the run.\n\nI think that (a) at least by default, node->stop() ought to check for\nnormal shutdown status, and (b) immediate shutdown should only be used\nin cases where we've already decided that the test failed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 12:25:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "TAP test fail: we don't always detect backend crashes"
}
] |
[
{
"msg_contents": "Would it make sense to make GetFreeIndexPage() atomic?\n\nInternally, GetFreeIndexPage() calls GetPageWithFreeSpace() and then\nRecordUsedIndexPage() in two separate operations. It's possible for two\ndifferent processes to get the same free page at the same time.\n\nTo guard against this, there are several index access methods that do\nsomething like this:\n\n/* First, try to get a page from FSM */\nfor (;;)\n{\nBlockNumber blkno = GetFreeIndexPage(index);\n\nif (blkno == InvalidBlockNumber)\nbreak;\n\nbuffer = ReadBuffer(index, blkno);\n\n/*\n* We have to guard against the possibility that someone else already\n* recycled this page; the buffer may be locked if so.\n*/\nif (ConditionalLockBuffer(buffer))\n{\nif (GinPageIsRecyclable(BufferGetPage(buffer)))\nreturn buffer; /* OK to use */\n\nLockBuffer(buffer, GIN_UNLOCK);\n}\n\n/* Can't use it, so release buffer and try again */\nReleaseBuffer(buffer);\n}\n\nSimilar code is repeated in a bunch of places. Each access method has to\nexplicitly write something into a freed page that identifies it as ok to\nuse. Otherwise, you could have two processes get the same page, then one\nlocks it, writes it, and unlocks it, then the second one does the same\nclobbering the first.\n\nAll this logic goes away if GetFreeIndexPage() is atomic, such that finding\nand marking a page as not-free always happens in one go. No longer would\nthe index pages themselves need to be modified to identify them as\navailable.\n\nI'm thinking of surrounding the code that calls GetFreeIndexPage() with a\nlightweight lock, although I wonder if that would harm performance. Perhaps\nthere is a more performant way to do this deep down in the FSM code.\n\nThoughts?\n\n-- \nChris Cleveland\n312-339-2677 mobile",
"msg_date": "Wed, 4 May 2022 14:16:02 -0500",
"msg_from": "Chris Cleveland <ccleveland@dieselpoint.com>",
"msg_from_op": true,
"msg_subject": "Atomic GetFreeIndexPage()?"
},
{
"msg_contents": "Chris Cleveland <ccleveland@dieselpoint.com> writes:\n> Would it make sense to make GetFreeIndexPage() atomic?\n\nI strongly doubt it. The loss of concurrency would outweigh any\ncode-simplicity benefits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 15:27:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Atomic GetFreeIndexPage()?"
},
{
"msg_contents": "On Wed, May 4, 2022 at 12:16 PM Chris Cleveland\n<ccleveland@dieselpoint.com> wrote:\n> Similar code is repeated in a bunch of places. Each access method has to explicitly write something into a freed page that identifies it as ok to use.\n\nI wouldn't say that that's what this code is doing, though I do see\nwhat you mean. The reason why the ultimate consumer of the free page\nwrites to it is because....well, it wouldn't have asked for a page if\nit didn't have something to write.\n\nCode like GinPageIsRecyclable() is generally concerned with something\ncalled recycle safety, typically using something called the drain\ntechnique. That's what this code is primarily concerned about. The\ndefinition of recycle safety is documented in the nbtree README.\n\n> Otherwise, you could have two processes get the same page, then one locks it, writes it, and unlocks it, then the second one does the same clobbering the first.\n\nBut you could have that anyway, since the FSM isn't crash safe. It's\ninherently not completely trustworthy for that reason.\n\nActually, the FSM shouldn't really contain a page that is !\nGinPageIsRecyclable() to begin with, but that is still possible for a\nvariety of reasons. All of which boil down to \"the current FSM design\ncannot be totally trusted, so we verify redundantly\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 4 May 2022 16:42:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Atomic GetFreeIndexPage()?"
}
] |
[
{
"msg_contents": "Hey,\n\nFor the following sequence of commands, on a newly initdb v15devel and\nmostly clean v13 I get a failure and a created table respectively.\n\nShowing v15devel:\n\npostgres=# create database testdb;\nCREATE DATABASE\npostgres=# create role testrole;\nCREATE ROLE\npostgres=# \\c testdb\nYou are now connected to database \"testdb\" as user \"vagrant\".\ntestdb=# set session authorization testrole;\nSET\ntestdb=> create table public.testtable(id int);\nERROR: permission denied for schema public\nLINE 1: create table public.testtable(id int);\ntestdb=> select version();\n version\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 15devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n(1 row)\n\n\n=======================================================================================\nv13.6 (I also have a report this is the behavior of v14)\n\npostgres=# create database testdb;\ncrCREATE DATABASE\npostgres=# create role testrole;\nCREATE ROLE\npostgres=# \\c testdb\nYou are now connected to database \"testdb\" as user \"postgres\".\ntestdb=# select version();\n version\n----------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 13.6 (Ubuntu 13.6-1.pgdg20.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n(1 row)\n\ntestdb=# set session authorization testrole;\nSET\ntestdb=> create table public.testtable (id int);\nCREATE TABLE\n\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 12:42:43 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Did we intend to change whether PUBLIC can create tables in the\n public schema by default?"
},
{
"msg_contents": "On Wed, May 4, 2022 at 12:42 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> Hey,\n>\n> For the following sequence of commands, on a newly initdb v15devel and\n> mostly clean v13 I get a failure and a created table respectively.\n>\n>\nApparently I didn't search commit history well enough the first time...\n\nhttps://github.com/postgres/postgres/commit/b073c3ccd06e4cb845e121387a43faa8c68a7b62\n\nSorry for the noise.\n\nDavid J.",
"msg_date": "Wed, 4 May 2022 12:49:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Did we intend to change whether PUBLIC can create tables in the\n public schema by default?"
}
] |
[
{
"msg_contents": "In [1] I complained about how SubqueryScans that get deleted from\na plan tree by setrefs.c nonetheless contribute cost increments\nthat might cause the planner to make odd choices. That turned\nout not to be the proximate cause of that particular issue, but\nit still seems like it might be a good idea to do something about\nit. Here's a little patch to improve matters.\n\nIt turns out to be hard to predict perfectly whether setrefs.c will\nremove a SubqueryScan, because createplan.c plays some games with\nmoving tlist evaluations around and/or inserting \"physical\"\n(i.e. trivial) tlists, which can falsify any earlier estimate of\nwhether a SubqueryScan is trivial. I'm not especially interested in\nredesigning those mechanisms, so the best we can hope for is an\napproximate determination. (Those same behaviors also make a lot of\nother path cost estimates a bit incorrect, so it doesn't seem like\nwe need to feel too awful about not getting SubqueryScan perfect.)\n\nGiven that ground rule, however, it's not very hard to determine\nwhether a SubqueryScanPath looks like it will be trivial and change\nits costing accordingly. The attached draft patch does that.\n\nI instrumented the code in setrefs.c, and found that during the\ncore regression tests this patch estimates correctly in 2103\nplaces while guessing wrongly in 54, so that seems like a pretty\ngood step forward.\n\nPerhaps I overcomplicated matters by making the callers responsible\nfor determining triviality of the paths' targets. We could just\nmake cost_subqueryscan check that for itself (using logic similar\nto what I wrote in set_subquery_pathlist), but that'd result in\nduplicative calculations anytime we make more than one Path for a\nsubquery. On the other hand, said calculations wouldn't be that\nexpensive, so perhaps a more localized patch would be better.\n\nI also notice that setrefs.c can elide Append and MergeAppend nodes\ntoo in some cases, but AFAICS costing of those node types doesn't\ntake that into account.\n\nAnyway, I'll stick this in the queue for v16.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/328872.1651247595%40sss.pgh.pa.us",
"msg_date": "Wed, 04 May 2022 18:32:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "I wrote:\n> I instrumented the code in setrefs.c, and found that during the\n> core regression tests this patch estimates correctly in 2103\n> places while guessing wrongly in 54, so that seems like a pretty\n> good step forward.\n\nOn second thought, that's not a terribly helpful summary. Breaking\nthings down to the next level, there were\n\n 1088 places where we correctly guessed a subquery isn't trivial\n\t(so no change from current behavior, which is correct)\n\n 1015 places where we correctly guessed a subquery is trivial\n\t(hence, improving the cost estimate from before)\n\n 40 places where we incorrectly guessed a subquery isn't trivial\n (so no change from current behavior, although that's wrong)\n\n 14 places where we incorrectly guessed a subquery is trivial\n\t(hence, incorrectly charging zero for the SubqueryScan)\n\n1015 improvements to 14 disimprovements isn't a bad score. I'm\na bit surprised there are that many removable SubqueryScans TBH;\nmaybe that's an artifact of all the \"SELECT *\" queries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 May 2022 19:02:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "On Thu, May 5, 2022 at 7:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > I instrumented the code in setrefs.c, and found that during the\n> > core regression tests this patch estimates correctly in 2103\n> > places while guessing wrongly in 54, so that seems like a pretty\n> > good step forward.\n>\n> On second thought, that's not a terribly helpful summary. Breaking\n> things down to the next level, there were\n>\n> 1088 places where we correctly guessed a subquery isn't trivial\n> (so no change from current behavior, which is correct)\n>\n> 1015 places where we correctly guessed a subquery is trivial\n> (hence, improving the cost estimate from before)\n>\n> 40 places where we incorrectly guessed a subquery isn't trivial\n> (so no change from current behavior, although that's wrong)\n>\n> 14 places where we incorrectly guessed a subquery is trivial\n> (hence, incorrectly charging zero for the SubqueryScan)\n>\n> 1015 improvements to 14 disimprovements isn't a bad score. I'm\n> a bit surprised there are that many removable SubqueryScans TBH;\n> maybe that's an artifact of all the \"SELECT *\" queries.\n>\n\nThe patch looks sane to me. 1015 vs 14 is a good win.\n\nThanks\nRichard",
"msg_date": "Thu, 5 May 2022 15:30:33 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "On Thu, May 5, 2022 at 4:30 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Thu, May 5, 2022 at 7:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 1015 improvements to 14 disimprovements isn't a bad score. I'm\n>> a bit surprised there are that many removable SubqueryScans TBH;\n>> maybe that's an artifact of all the \"SELECT *\" queries.\n\n> The patch looks sane to me. 1015 vs 14 is a good win.\n\n+1\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 2 Jun 2022 20:40:35 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "I wrote:\n> I also notice that setrefs.c can elide Append and MergeAppend nodes\n> too in some cases, but AFAICS costing of those node types doesn't\n> take that into account.\n\nI took a closer look at this point and realized that in fact,\ncreate_append_path and create_merge_append_path do attempt to account\nfor this. But they get it wrong! Somebody changed the rules in\nsetrefs.c to account for parallelism, and did not update the costing\nside of things.\n\nThe attached v2 is the same as v1 plus adding a fix for this point.\nNo regression test results change from that AFAICT.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 17 Jul 2022 15:16:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 3:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > I also notice that setrefs.c can elide Append and MergeAppend nodes\n> > too in some cases, but AFAICS costing of those node types doesn't\n> > take that into account.\n>\n> I took a closer look at this point and realized that in fact,\n> create_append_path and create_merge_append_path do attempt to account\n> for this. But they get it wrong! Somebody changed the rules in\n> setrefs.c to account for parallelism, and did not update the costing\n> side of things.\n>\n> The attached v2 is the same as v1 plus adding a fix for this point.\n> No regression test results change from that AFAICT.\n\n\nThe new fix looks good to me. Seems setrefs.c added a new logic to check\nparallel_aware when removing single-child Appends/MergeAppends in\nf9a74c14, but it neglected to update the related costing logic. And I\ncan see this patch fixes the costing for that.\n\nBTW, not related to this patch, the new lines for parallel_aware check\nin setrefs.c are very wide. How about wrap them to keep consistent with\narounding codes?\n\nThanks\nRichard\n\nOn Mon, Jul 18, 2022 at 3:16 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:I wrote:\n> I also notice that setrefs.c can elide Append and MergeAppend nodes\n> too in some cases, but AFAICS costing of those node types doesn't\n> take that into account.\n\nI took a closer look at this point and realized that in fact,\ncreate_append_path and create_merge_append_path do attempt to account\nfor this. But they get it wrong! Somebody changed the rules in\nsetrefs.c to account for parallelism, and did not update the costing\nside of things.\n\nThe attached v2 is the same as v1 plus adding a fix for this point.\nNo regression test results change from that AFAICT.The new fix looks good to me. 
Seems setrefs.c added a new logic to checkparallel_aware when removing single-child Appends/MergeAppends inf9a74c14, but it neglected to update the related costing logic. And Ican see this patch fixes the costing for that.BTW, not related to this patch, the new lines for parallel_aware checkin setrefs.c are very wide. How about wrap them to keep consistent witharounding codes?ThanksRichard",
"msg_date": "Mon, 18 Jul 2022 13:48:51 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "On 2022-Jul-18, Richard Guo wrote:\n\n> BTW, not related to this patch, the new lines for parallel_aware check\n> in setrefs.c are very wide. How about wrap them to keep consistent with\n> arounding codes?\n\nNot untrue! Something like this, you mean? Fixed the nearby typo while\nat it.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 18 Jul 2022 18:36:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-18, Richard Guo wrote:\n>> BTW, not related to this patch, the new lines for parallel_aware check\n>> in setrefs.c are very wide. How about wrap them to keep consistent with\n>> arounding codes?\n\n> Not untrue! Something like this, you mean? Fixed the nearby typo while\n> at it.\n\nWFM. (I'd fixed the comment typo in my patch, but I don't mind if\nyou get there first.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Jul 2022 13:30:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Jul-18, Richard Guo wrote:\n> >> BTW, not related to this patch, the new lines for parallel_aware check\n> >> in setrefs.c are very wide. How about wrap them to keep consistent with\n> >> arounding codes?\n>\n> > Not untrue! Something like this, you mean? Fixed the nearby typo while\n> > at it.\n>\n> WFM. (I'd fixed the comment typo in my patch, but I don't mind if\n> you get there first.)\n\n\n+1 The fix looks good to me.\n\nThanks\nRichard\n\nOn Tue, Jul 19, 2022 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-18, Richard Guo wrote:\n>> BTW, not related to this patch, the new lines for parallel_aware check\n>> in setrefs.c are very wide. How about wrap them to keep consistent with\n>> arounding codes?\n\n> Not untrue! Something like this, you mean? Fixed the nearby typo while\n> at it.\n\nWFM. (I'd fixed the comment typo in my patch, but I don't mind if\nyou get there first.)+1 The fix looks good to me.ThanksRichard",
"msg_date": "Tue, 19 Jul 2022 09:03:58 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "On 2022-Jul-19, Richard Guo wrote:\n\n> On Tue, Jul 19, 2022 at 1:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > WFM. (I'd fixed the comment typo in my patch, but I don't mind if\n> > you get there first.)\n\nAh, I see now you had other grammatical fixes and even more content\nthere.\n\n> +1 The fix looks good to me.\n\nThanks, pushed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 19 Jul 2022 10:17:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Thanks, pushed.\n\nPushed the original patch now too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 19 Jul 2022 11:19:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Costing elided SubqueryScans more nearly correctly"
}
] |
[
{
"msg_contents": "Hi!\n\nDuring work on 64-bit XID patch [1] we found handy to have initdb options\nto set initial xid/mxid/mxoff values to arbitrary non default values. It\nhelps test different scenarios: related to wraparound, pg_upgrade from\n32-bit XID to 64-bit XID, etc.\n\nWe realize, this patch can be singled out as an independent patch from the\nwhole patchset in [1] and be useful irrespective of 64-bit XID in cases of\ntesting of wraparound and so on.\n\nIn particular, we employed this patch to test recent changes in logical\nreplication of subxacts [2] and found no problems in it near the point of\npublisher wraparound.\n\nPlease share your opinions and reviews are always welcome.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACG%3DezZe1NQSCnfHOr78AtAZxJZeCvxrts0ygrxYwe%3DpyyjVWA%40mail.gmail.com\n[2]\nhttps://postgr.es/m/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 5 May 2022 18:47:04 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add initial xid/mxid/mxoff to initdb"
},
{
"msg_contents": "Hi!\n\nCF bot says patch does not apply. Rebased.\nYour reviews are very much welcome!\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 22 Nov 2022 14:04:27 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add initial xid/mxid/mxoff to initdb"
},
{
"msg_contents": "On 05.05.22 17:47, Maxim Orlov wrote:\n> During work on 64-bit XID patch [1] we found handy to have initdb \n> options to set initial xid/mxid/mxoff values to arbitrary non default \n> values. It helps test different scenarios: related to wraparound, \n> pg_upgrade from 32-bit XID to 64-bit XID, etc.\n> \n> We realize, this patch can be singled out as an independent patch from \n> the whole patchset in [1] and be useful irrespective of 64-bit XID in \n> cases of testing of wraparound and so on.\n> \n> In particular, we employed this patch to test recent changes in logical \n> replication of subxacts [2] and found no problems in it near the point \n> of publisher wraparound.\n\nJust for completeness, over in the other thread the feedback was that \nthis functionality is better put into pg_resetwal.\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 13:19:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add initial xid/mxid/mxoff to initdb"
},
{
"msg_contents": "On Fri, 25 Nov 2022 at 07:22, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Just for completeness, over in the other thread the feedback was that\n> this functionality is better put into pg_resetwal.\n\nSo is that other thread tracked in a different commitfest entry and\nthis one completely redundant? I'll mark it Rejected then?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:30:41 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add initial xid/mxid/mxoff to initdb"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 22:31, Gregory Stark (as CFM) <stark.cfm@gmail.com>\nwrote:\n\n>\n> So is that other thread tracked in a different commitfest entry and\n> this one completely redundant? I'll mark it Rejected then?\n>\n\nYep, it appears so.\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Mon, 20 Mar 2023 at 22:31, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n\nSo is that other thread tracked in a different commitfest entry and\nthis one completely redundant? I'll mark it Rejected then?\nYep, it appears so.-- Best regards,Maxim Orlov.",
"msg_date": "Tue, 21 Mar 2023 12:44:27 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add initial xid/mxid/mxoff to initdb"
}
] |
[
{
"msg_contents": "I've completed the first draft of the 14.3 release notes, see\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=66ca1427a4963012fd565b922d0a67a8a8930d1f\n\nAs usual, please send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 May 2022 18:29:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "First-draft release notes for next week's minor releases"
},
{
"msg_contents": "On 5/5/22 6:29 PM, Tom Lane wrote:\r\n> I've completed the first draft of the 14.3 release notes, see\r\n\r\nThanks!\r\n\r\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=66ca1427a4963012fd565b922d0a67a8a8930d1f\r\n> \r\n> As usual, please send comments/corrections by Sunday.\r\n\r\nI found one small thing:\r\n\r\n- because it didn't actually do anything with the bogus values;\r\n+ because it didn't actually do anything with the bogus values,\r\n\r\nShould be a \",\" or remove the \"but\" on the following line.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 7 May 2022 11:16:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First-draft release notes for next week's minor releases"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> I found one small thing:\n\n> - because it didn't actually do anything with the bogus values;\n> + because it didn't actually do anything with the bogus values,\n\n> Should be a \",\" or remove the \"but\" on the following line.\n\nDone that way (with a \",\"), thanks for reviewing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 May 2022 12:00:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: First-draft release notes for next week's minor releases"
}
] |
[
{
"msg_contents": "PSA trivial patch to fix some very old comment typo.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 6 May 2022 19:25:34 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in code comment - origin.c"
},
{
"msg_contents": "On Fri, May 06, 2022 at 07:25:34PM +1000, Peter Smith wrote:\n> PSA trivial patch to fix some very old comment typo.\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Fri, 6 May 2022 20:02:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in code comment - origin.c"
}
] |
[
{
"msg_contents": "Maybe, the first letter of comments in postinit.c should be capitalized.\nAttaching a tiny patch to fix it.",
"msg_date": "Fri, 6 May 2022 20:40:33 +0800",
"msg_from": "Zaorang Yang <zaorangy@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in comment"
},
{
"msg_contents": "On 2022-May-06, Zaorang Yang wrote:\n\n> Maybe, the first letter of comments in postinit.c should be capitalized.\n\nHmm, typically these one-line comments are not \"full sentences\", so they\ndon't have capitals and no ending periods either. I wouldn't like the\nendless stream of patches that would result if we let this go in.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 6 May 2022 14:50:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in comment"
},
{
"msg_contents": "> On 6 May 2022, at 14:50, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2022-May-06, Zaorang Yang wrote:\n> \n>> Maybe, the first letter of comments in postinit.c should be capitalized.\n> \n> Hmm, typically these one-line comments are not \"full sentences\", so they\n> don't have capitals and no ending periods either. I wouldn't like the\n> endless stream of patches that would result if we let this go in.\n\nAgreed. A quick grep turns up a fair number of such comments:\n\n $ git grep \"^\\s*\\/\\* [a-z].\\+\\*\\/$\" src/ | wc -l\n 16588\n\nIf anything should be done to this it would perhaps be better to add to\npgindent or a similar automated process. If anything.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 6 May 2022 14:58:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in comment"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-May-06, Zaorang Yang wrote:\n>> Maybe, the first letter of comments in postinit.c should be capitalized.\n\n> Hmm, typically these one-line comments are not \"full sentences\", so they\n> don't have capitals and no ending periods either. I wouldn't like the\n> endless stream of patches that would result if we let this go in.\n\nThere is no project style guideline suggesting one over the other, so\nI think we should just leave it alone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 May 2022 09:34:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in comment"
}
] |
[
{
"msg_contents": "I'm studying how the gist index works trying to improve the test coverage of gistbuild.c.\n\nReading the source code I noticed that the gistInitBuffering function is not covered, so I decided to start with it.\nReading the documentation and the source I understood that for this function to be executed it is necessary to force\nbufferring=on when creating the index or the index to be created is big enough to not fit in the cache, am I correct?\n\nConsidering the above, I added two new index creation statements in the gist regression test (patch attached) to create\nan index using buffering=on and another to try to simulate an index that does not fit in the cache.\n\nWith these new tests the coverage went from 45.3% to 85.5%, but I have some doubts:\n- Does this test make sense?\n- Would there be a way to validate that the buffering was done correctly?\n- Is this test necessary?\n\nI've been studying Postgresql implementations and I'm just trying to start contributing the source code.\n\n\n--\nMatheus Alcantara",
"msg_date": "Fri, 06 May 2022 13:19:43 +0000",
"msg_from": "Matheus Alcantara <mths.dev@pm.me>",
"msg_from_op": true,
"msg_subject": "Trying to add more tests to gistbuild.c"
},
{
"msg_contents": "The attached patch is failing on make check due to a typo, resubmitting the correct one.\n\n--\nMatheus Alcantara",
"msg_date": "Fri, 06 May 2022 13:43:39 +0000",
"msg_from": "Matheus Alcantara <mths.dev@pm.me>",
"msg_from_op": true,
"msg_subject": "Re: Trying to add more tests to gistbuild.c"
},
{
"msg_contents": "Hi Matheus,\n\nMany thanks for hacking on increasing the code coverage! I noticed\nthat this patch was stuck in \"Needs Review\" state for some time and\ndecided to take a look.\n\n> With these new tests the coverage went from 45.3% to 85.5%, but I have some doubts:\n> - Does this test make sense?\n> - Would there be a way to validate that the buffering was done correctly?\n> - Is this test necessary?\n\nI can confirm that the coverage improved as stated.\n\nPersonally I believe this change makes perfect sense. Although this is\narguably not an ideal test for gistInitBuffering(), writing proper\ntests for `static` procedures is generally not an easy task. Executing\nthe code at least once in order to make sure that it doesn't crash,\ndoesn't throw errors and doesn't violate any Asserts() is better than\ndoing nothing.\n\nHere is a slightly modified patch with added commit message. Please\nnote that patches created with `git format-patch` are generally\npreferable than patches created with `git diff`.\n\nI'm going to change the status of this patch to \"Ready for Committer\"\nin a short time unless anyone has a second opinion.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 27 Jul 2022 16:07:46 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Trying to add more tests to gistbuild.c"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Personally I believe this change makes perfect sense. Although this is\n> arguably not an ideal test for gistInitBuffering(), writing proper\n> tests for `static` procedures is generally not an easy task. Executing\n> the code at least once in order to make sure that it doesn't crash,\n> doesn't throw errors and doesn't violate any Asserts() is better than\n> doing nothing.\n\nYeah, our poor test coverage for gist buffering builds has been\ncomplained of before [1]. It'd be good to do something about that;\nthe trick is to not bloat the runtime of the core regression tests\ntoo much.\n\nI checked this patch and what I see is:\n* gistbuild.c coverage improves to 81.8% line coverage, more or less\n as stated (probably depends on if you use --enable-cassert)\n* gistbuildbuffers.c coverage improves from 0 to 14.0%\n* gist.sql runtime goes from ~215ms to ~280ms\n\nThe results for gistbuildbuffers.c are kind of disappointing, especially\ngiven the nontrivial runtime cost. (YMMV on the runtime of course, but\nI see pretty stable numbers under non-parallel \"make installcheck\".)\n\nIn the previous thread, Pavel Borisov offered a test patch [2] that\nstill applies, and it brings the line count coverage to 95%+ in\nboth files. Unfortunately it more than doubles the test runtime,\nto somewhere around 580ms, so I rejected it at the time hoping\nfor a better compromise.\n\nThe idea I see you using that Pavel missed is to reduce\neffective_cache_size to persuade the buffering build logic to kick in\nwith less data. But it looks like multilevel buffering still doesn't\nget activated, which is how come gistbuildbuffers.c coverage still\nremains poor. (I tried reducing effective_cache_size further,\nbut it didn't help.)\n\nI wonder if we can combine ideas from the two patches to get a\nbetter tradeoff of code coverage vs. 
runtime.\n\nAnother thing we might consider is to move the testing responsibility\nsomewhere else. The reason I'm allergic to adding a lot of runtime\nhere is that the core regression tests are invoked at least four times\nin a standard buildfarm run, often more. But that concern could be\nalleviated if we put the test somewhere else. Maybe contrib/btree_gist\nwould be suitable?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16329-7a6aa9b6fa1118a1%40postgresql.org\n[2] https://www.postgresql.org/message-id/CALT9ZEECCV5m7wvxg46PC-7x-EybUmnpupBGhSFMoAAay%2Br6HQ%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 18:53:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trying to add more tests to gistbuild.c"
},
{
"msg_contents": "------- Original Message -------\nOn Friday, July 29th, 2022 at 19:53, Tom Lane tgl@sss.pgh.pa.us wrote:\n\n> I wonder if we can combine ideas from the two patches to get a\n> better tradeoff of code coverage vs. runtime.\n\nI was checking the Pavel patch and notice that he was using the fillfactor\nparameter when creating the gist index. I changed my previous patch to include\nthis parameter and the code coverage of gistbuild.c and gistbuildbuffers.c was\nimproved to 97.7% and 92.8% respectively.\n\nI'm attaching this new patch, could you please check if this change make sense\nand also don't impact the test runtime?\n\n> Another thing we might consider is to move the testing responsibility\n> somewhere else. The reason I'm allergic to adding a lot of runtime\n> here is that the core regression tests are invoked at least four times\n> in a standard buildfarm run, often more. But that concern could be\n> alleviated if we put the test somewhere else. Maybe contrib/btree_gist\n> would be suitable?\n\nI can't say much about it. If there's anything I can do here, please let\nme know.\n\n--\nMatheus Alcantara",
"msg_date": "Sat, 30 Jul 2022 18:55:51 +0000",
"msg_from": "Matheus Alcantara <mths.dev@pm.me>",
"msg_from_op": true,
"msg_subject": "Re: Trying to add more tests to gistbuild.c"
},
{
"msg_contents": "Matheus Alcantara <mths.dev@pm.me> writes:\n> On Friday, July 29th, 2022 at 19:53, Tom Lane tgl@sss.pgh.pa.us wrote:\n>> I wonder if we can combine ideas from the two patches to get a\n>> better tradeoff of code coverage vs. runtime.\n\n> I was checking the Pavel patch and notice that he was using the fillfactor\n> parameter when creating the gist index. I changed my previous patch to include\n> this parameter and the code coverage of gistbuild.c and gistbuildbuffers.c was\n> improved to 97.7% and 92.8% respectively.\n\nNice!\n\nI poked at this some more, wondering if we could combine the two new\nindex builds into one test, and eventually realized something I should\nprobably have figured out before: if you make effective_cache_size\ntoo small, it refuses to use buffering build at all, and you get here:\n\n if (levelStep <= 0)\n {\n elog(DEBUG1, \"failed to switch to buffered GiST build\");\n buildstate->buildMode = GIST_BUFFERING_DISABLED;\n return;\n }\n\nIn fact, at least on my machine, the first test case hits this and\nthus effectively adds no coverage at all :-(. If I remove that and\njust add the second index build, the above-quoted bit is the *only*\nthing in gistbuild.c or gistbuildbuffers.c that is missed compared\nto using both test cases. Moreover, the runtime of the test comes\ndown to ~240 ms, which is an increment of ~25ms instead of ~65ms.\n(Which shows that the non-buffering build is slower, not surprising\nI guess.)\n\nI judge that covering those three lines is not worth the extra 40ms,\nso pushed just the second test case.\n\nThanks for poking at this! I'm much happier now about the state of\ncode coverage in that area.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 30 Jul 2022 16:33:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trying to add more tests to gistbuild.c"
},
{
"msg_contents": "On Sun, 31 Jul 2022 at 00:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Matheus Alcantara <mths.dev@pm.me> writes:\n> > On Friday, July 29th, 2022 at 19:53, Tom Lane tgl@sss.pgh.pa.us wrote:\n> >> I wonder if we can combine ideas from the two patches to get a\n> >> better tradeoff of code coverage vs. runtime.\n>\n> > I was checking the Pavel patch and notice that he was using the\n> fillfactor\n> > parameter when creating the gist index. I changed my previous patch to\n> include\n> > this parameter and the code coverage of gistbuild.c and\n> gistbuildbuffers.c was\n> > improved to 97.7% and 92.8% respectively.\n>\n> Nice!\n>\n> I poked at this some more, wondering if we could combine the two new\n> index builds into one test, and eventually realized something I should\n> probably have figured out before: if you make effective_cache_size\n> too small, it refuses to use buffering build at all, and you get here:\n>\n> if (levelStep <= 0)\n> {\n> elog(DEBUG1, \"failed to switch to buffered GiST build\");\n> buildstate->buildMode = GIST_BUFFERING_DISABLED;\n> return;\n> }\n>\n> In fact, at least on my machine, the first test case hits this and\n> thus effectively adds no coverage at all :-(. If I remove that and\n> just add the second index build, the above-quoted bit is the *only*\n> thing in gistbuild.c or gistbuildbuffers.c that is missed compared\n> to using both test cases. Moreover, the runtime of the test comes\n> down to ~240 ms, which is an increment of ~25ms instead of ~65ms.\n> (Which shows that the non-buffering build is slower, not surprising\n> I guess.)\n>\n> I judge that covering those three lines is not worth the extra 40ms,\n> so pushed just the second test case.\n>\n> Thanks for poking at this! I'm much happier now about the state of\n> code coverage in that area.\n>\n\nI'm happy, that the improvement of the tests I've forgotten about so long\nago is finally committed. 
Thank you!\n\n-- \nBest regards,\nPavel Borisov\n\nOn Sun, 31 Jul 2022 at 00:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:Matheus Alcantara <mths.dev@pm.me> writes:\n> On Friday, July 29th, 2022 at 19:53, Tom Lane tgl@sss.pgh.pa.us wrote:\n>> I wonder if we can combine ideas from the two patches to get a\n>> better tradeoff of code coverage vs. runtime.\n\n> I was checking the Pavel patch and notice that he was using the fillfactor\n> parameter when creating the gist index. I changed my previous patch to include\n> this parameter and the code coverage of gistbuild.c and gistbuildbuffers.c was\n> improved to 97.7% and 92.8% respectively.\n\nNice!\n\nI poked at this some more, wondering if we could combine the two new\nindex builds into one test, and eventually realized something I should\nprobably have figured out before: if you make effective_cache_size\ntoo small, it refuses to use buffering build at all, and you get here:\n\n if (levelStep <= 0)\n {\n elog(DEBUG1, \"failed to switch to buffered GiST build\");\n buildstate->buildMode = GIST_BUFFERING_DISABLED;\n return;\n }\n\nIn fact, at least on my machine, the first test case hits this and\nthus effectively adds no coverage at all :-(. If I remove that and\njust add the second index build, the above-quoted bit is the *only*\nthing in gistbuild.c or gistbuildbuffers.c that is missed compared\nto using both test cases. Moreover, the runtime of the test comes\ndown to ~240 ms, which is an increment of ~25ms instead of ~65ms.\n(Which shows that the non-buffering build is slower, not surprising\nI guess.)\n\nI judge that covering those three lines is not worth the extra 40ms,\nso pushed just the second test case.\n\nThanks for poking at this! I'm much happier now about the state of\ncode coverage in that area.I'm happy, that the improvement of the tests I've forgotten about so long ago is finally committed. Thank you!-- Best regards,Pavel Borisov",
"msg_date": "Mon, 1 Aug 2022 14:30:00 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Trying to add more tests to gistbuild.c"
}
] |
[
{
"msg_contents": "Hi,\n\nNot sure if these compiler-mutterings are worth reporting but I guess \nwe're trying to get a silent compile.\n\nSystem: Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux\nCompiling with gcc 12.1.0 causes the below 'warning' and 'note'.\nCompiling with --enable-cassert --enable-debug is silent, no warnings)\n\nIn function ‘guc_var_compare’,\n inlined from ‘bsearch’ at \n/usr/include/x86_64-linux-gnu/bits/stdlib-bsearch.h:33:23,\n inlined from ‘find_option’ at guc.c:5649:35:\nguc.c:5736:38: warning: array subscript ‘const struct config_generic[0]’ \nis partly outside array bounds of ‘const char[8]’ [-Warray-bounds]\n 5736 | return guc_name_compare(confa->name, confb->name);\n | ~~~~~^~~~~~\n\nguc.c: In function ‘find_option’:\nguc.c:5636:25: note: object ‘name’ of size 8\n 5636 | find_option(const char *name, bool create_placeholders, bool \nskip_errors,\n | ~~~~~~~~~~~~^~~~\n\n(Compiling with gcc 6.3.0 does not complain.)\n\nBelow are the two configure lines, FWIW.\n\n\nErik Rijkers\n\n\n# cassert-build: no warning/note\n./configure \\\n--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD \\\n--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin \\\n--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/lib \\\n--with-pgport=6515 --quiet --enable-depend \\\n--enable-cassert --enable-debug \\\n--with-openssl --with-perl --with-libxml --with-libxslt --with-zlib \\\n--enable-tap-tests --with-extra-version=_0506_HEAD_701d --with-lz4\n\n\n# normal build: causes warning/note:\n./configure \\\n--prefix=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD \\\n--bindir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/bin.fast \\\n--libdir=/home/aardvark/pg_stuff/pg_installations/pgsql.HEAD/lib.fast \\\n--with-pgport=6515 --quiet --enable-depend \\\n--with-openssl --with-perl --with-libxml --with-libxslt --with-zlib \\\n--enable-tap-tests --with-extra-version=_0506_HEAD_701d --with-lz4\n\n\n\n",
"msg_date": "Fri, 6 May 2022 16:34:41 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "gcc 12.1.0 warning"
},
{
"msg_contents": "Erik Rijkers <er@xs4all.nl> writes:\n> Not sure if these compiler-mutterings are worth reporting but I guess \n> we're trying to get a silent compile.\n\n> System: Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux\n> Compiling with gcc 12.1.0 causes the below 'warning' and 'note'.\n> Compiling with --enable-cassert --enable-debug is silent, no warnings)\n\n> In function ‘guc_var_compare’,\n> inlined from ‘bsearch’ at \n> /usr/include/x86_64-linux-gnu/bits/stdlib-bsearch.h:33:23,\n> inlined from ‘find_option’ at guc.c:5649:35:\n> guc.c:5736:38: warning: array subscript ‘const struct config_generic[0]’ \n> is partly outside array bounds of ‘const char[8]’ [-Warray-bounds]\n> 5736 | return guc_name_compare(confa->name, confb->name);\n> | ~~~~~^~~~~~\n\nI'd call that a compiler bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 May 2022 10:46:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "Hi,\n\nOn Fri, 27 Oct 2023 at 12:34, Erik Rijkers <er@xs4all.nl> wrote:\n>\n> Hi,\n>\n> Not sure if these compiler-mutterings are worth reporting but I guess\n> we're trying to get a silent compile.\n>\n> System: Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux\n> Compiling with gcc 12.1.0 causes the below 'warning' and 'note'.\n> Compiling with --enable-cassert --enable-debug is silent, no warnings)\n>\n> In function ‘guc_var_compare’,\n> inlined from ‘bsearch’ at\n> /usr/include/x86_64-linux-gnu/bits/stdlib-bsearch.h:33:23,\n> inlined from ‘find_option’ at guc.c:5649:35:\n> guc.c:5736:38: warning: array subscript ‘const struct config_generic[0]’\n> is partly outside array bounds of ‘const char[8]’ [-Warray-bounds]\n> 5736 | return guc_name_compare(confa->name, confb->name);\n> | ~~~~~^~~~~~\n>\n> guc.c: In function ‘find_option’:\n> guc.c:5636:25: note: object ‘name’ of size 8\n> 5636 | find_option(const char *name, bool create_placeholders, bool\n> skip_errors,\n> | ~~~~~~~~~~~~^~~~\n>\n> (Compiling with gcc 6.3.0 does not complain.)\n\nI was testing 'upgrading CI Debian images to bookworm'. I tested the\nbookworm on REL_15, REL_16 and upstream. REL_16 and upstream finished\nsuccessfully but the CompilerWarnings task failed on REL_15 with the\nsame error.\n\ngcc version: 12.2.0\n\nCI Run: https://cirrus-ci.com/task/6151742664998912\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 27 Oct 2023 13:09:01 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-27 13:09:01 +0300, Nazir Bilal Yavuz wrote:\n> I was testing 'upgrading CI Debian images to bookworm'. I tested the\n> bookworm on REL_15, REL_16 and upstream. REL_16 and upstream finished\n> successfully but the CompilerWarnings task failed on REL_15 with the\n> same error.\n\nIs that still the case? I briefly tried to repro this outside of CI and\ncouldn't reproduce the warning.\n\nI'd really like to upgrade the CI images, it doesn't seem great to continue\nusing oldstable.\n\n\n> gcc version: 12.2.0\n> \n> CI Run: https://cirrus-ci.com/task/6151742664998912\n\nUnfortunately the logs aren't accessible anymore, so I can't check the precise\npatch level of the compiler and/or the precise invocation used.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 12 Apr 2024 19:25:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "Hi,\n\nOn Sat, 13 Apr 2024 at 05:25, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-10-27 13:09:01 +0300, Nazir Bilal Yavuz wrote:\n> > I was testing 'upgrading CI Debian images to bookworm'. I tested the\n> > bookworm on REL_15, REL_16 and upstream. REL_16 and upstream finished\n> > successfully but the CompilerWarnings task failed on REL_15 with the\n> > same error.\n>\n> Is that still the case? I briefly tried to repro this outside of CI and\n> couldn't reproduce the warning.\n>\n> I'd really like to upgrade the CI images, it doesn't seem great to continue\n> using oldstable.\n>\n>\n> > gcc version: 12.2.0\n> >\n> > CI Run: https://cirrus-ci.com/task/6151742664998912\n>\n> Unfortunately the logs aren't accessible anymore, so I can't check the precise\n> patch level of the compiler and/or the precise invocation used.\n\nI am able to reproduce this. I regenerated the debian bookworm image\nand ran CI on REL_15_STABLE with this image.\n\nCI Run: https://cirrus-ci.com/task/4978799442395136\n\n--\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Mon, 15 Apr 2024 11:25:05 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-15 11:25:05 +0300, Nazir Bilal Yavuz wrote:\n> I am able to reproduce this. I regenerated the debian bookworm image\n> and ran CI on REL_15_STABLE with this image.\n> \n> CI Run: https://cirrus-ci.com/task/4978799442395136\n\nHm, not sure why I wasn't able to repro - now I can.\n\nIt actually seems like a legitimate warning: The caller allocates the key as\n\nstatic struct config_generic *\nfind_option(const char *name, bool create_placeholders, bool skip_errors,\n\t\t\tint elevel)\n{\n\tconst char **key = &name;\n\nand then does\n\tres = (struct config_generic **) bsearch((void *) &key,\n\t\t\t\t\t\t\t\t\t\t\t (void *) guc_variables,\n\t\t\t\t\t\t\t\t\t\t\t num_guc_variables,\n\t\t\t\t\t\t\t\t\t\t\t sizeof(struct config_generic *),\n\t\t\t\t\t\t\t\t\t\t\t guc_var_compare);\n\nwhile guc_var_compare() assume it's being passed a full config_generic:\n\nstatic int\nguc_var_compare(const void *a, const void *b)\n{\n\tconst struct config_generic *confa = *(struct config_generic *const *) a;\n\tconst struct config_generic *confb = *(struct config_generic *const *) b;\n\treturn guc_name_compare(confa->name, confb->name);\n}\n\n\nwhich several versions of gcc then complain about:\n\nIn function ‘guc_var_compare’,\n inlined from ‘bsearch’ at /usr/include/x86_64-linux-gnu/bits/stdlib-bsearch.h:33:23,\n inlined from ‘find_option’ at /home/andres/src/postgresql-15/src/backend/utils/misc/guc.c:5640:35:\n/home/andres/src/postgresql-15/src/backend/utils/misc/guc.c:5727:38: warning: array subscript ‘const struct config_generic[0]’ is partly outside array bounds of ‘const char[8]’ [-Warray-bounds=]\n 5727 | return guc_name_compare(confa->name, confb->name);\n | ~~~~~^~~~~~\n/home/andres/src/postgresql-15/src/backend/utils/misc/guc.c: In function ‘find_option’:\n/home/andres/src/postgresql-15/src/backend/utils/misc/guc.c:5627:25: note: object ‘name’ of size 8\n 5627 | find_option(const char *name, bool create_placeholders, bool 
skip_errors,\n\n\nWhich seems entirely legitimate. ISTM that guc_var_compare() ought to only\ncast the pointers to the key type, i.e. char *. And incidentally that does\nprevent the warning.\n\nThe reason it doesn't happen in newer versions of postgres is that we aren't\nusing guc_var_compare() in the relevant places anymore...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Apr 2024 09:59:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "Hi,\n\nOn Tue, 23 Apr 2024 at 19:59, Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> Which seems entirely legitimate. ISTM that guc_var_compare() ought to only\n> cast the pointers to the key type, i.e. char *. And incidentally that does\n> prevent the warning.\n>\n> The reason it doesn't happen in newer versions of postgres is that we aren't\n> using guc_var_compare() in the relevant places anymore...\n\nThe fix is attached. It cleanly applies from REL_15_STABLE to\nREL_12_STABLE, fixes the warnings and the tests pass.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft",
"msg_date": "Fri, 10 May 2024 12:13:21 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "Hi,\n\nOn 2024-05-10 12:13:21 +0300, Nazir Bilal Yavuz wrote:\n> On Tue, 23 Apr 2024 at 19:59, Andres Freund <andres@anarazel.de> wrote:\n> >\n> >\n> > Which seems entirely legitimate. ISTM that guc_var_compare() ought to only\n> > cast the pointers to the key type, i.e. char *. And incidentally that does\n> > prevent the warning.\n> >\n> > The reason it doesn't happen in newer versions of postgres is that we aren't\n> > using guc_var_compare() in the relevant places anymore...\n> \n> The fix is attached. It cleanly applies from REL_15_STABLE to\n> REL_12_STABLE, fixes the warnings and the tests pass.\n\nThanks! I've applied it to all branches - while it's not required to avoid a\nwarning in newer versions, it's still not correct as it was...\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Mon, 15 Jul 2024 09:41:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 09:41:55AM -0700, Andres Freund wrote:\n> On 2024-05-10 12:13:21 +0300, Nazir Bilal Yavuz wrote:\n>> The fix is attached. It cleanly applies from REL_15_STABLE to\n>> REL_12_STABLE, fixes the warnings and the tests pass.\n> \n> Thanks! I've applied it to all branches - while it's not required to avoid a\n> warning in newer versions, it's still not correct as it was...\n\nnitpick: pgindent thinks one of the spaces is unnecessary.\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex a043d529ef..b0947a4cf1 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -1289,8 +1289,8 @@ find_option(const char *name, bool create_placeholders, bool skip_errors,\n static int\n guc_var_compare(const void *a, const void *b)\n {\n- const char *namea = **(const char ** const *) a;\n- const char *nameb = **(const char ** const *) b;\n+ const char *namea = **(const char **const *) a;\n+ const char *nameb = **(const char **const *) b;\n\n return guc_name_compare(namea, nameb);\n }\n\n-- \nnathan\n\n\n",
"msg_date": "Mon, 15 Jul 2024 12:14:47 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
},
{
"msg_contents": "On 2024-07-15 12:14:47 -0500, Nathan Bossart wrote:\n> On Mon, Jul 15, 2024 at 09:41:55AM -0700, Andres Freund wrote:\n> > On 2024-05-10 12:13:21 +0300, Nazir Bilal Yavuz wrote:\n> >> The fix is attached. It cleanly applies from REL_15_STABLE to\n> >> REL_12_STABLE, fixes the warnings and the tests pass.\n> > \n> > Thanks! I've applied it to all branches - while it's not required to avoid a\n> > warning in newer versions, it's still not correct as it was...\n> \n> nitpick: pgindent thinks one of the spaces is unnecessary.\n\nUgh. Sorry for that, will take a look at fixing that :(\n\n\n",
"msg_date": "Mon, 15 Jul 2024 13:36:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: gcc 12.1.0 warning"
}
] |
[
{
"msg_contents": "Hey,\n\nLooking again at the SELECT Reference page while helping a novice user I\nwas once again annoyed but how the most common query syntax form for the\nFROM clause is buried within a bunch of \"how to generate a table\" detail.\n\nIn this specific case I also was trying to describe why when you have three\ntables to join that you can build a join tree and only actually use a\nsingle from_item (IOW, no commas are required/allowed). From there it\nbecame clear that from_item is serving two roles here and introducing a\nstructural element (join_expression) to represent the commonly used join\ntree query form made sense. I then modelled the tree-like nature (using\nthe term recursive for now) explicitly as the left side of the join can be\neither a new join_expression, extending the tree, or a from_item, bringing\nthe tree to its end.\n\nIn the description I had to move LATERAL up above all the join stuff since\nit pertains to from_item and I wanted to call out the final set of\nparameters as all being related to the join tree.\n\nI'll do a pass at making sure we are using consistent terminology if this\nhas a shot of getting committed.\n\nI've attached the patch and the resulting html page.\n\nDavid J.",
"msg_date": "Fri, 6 May 2022 09:19:23 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Select Reference Page - Make Join Syntax More Prominent"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Looking again at the SELECT Reference page while helping a novice user I\n> was once again annoyed but how the most common query syntax form for the\n> FROM clause is buried within a bunch of \"how to generate a table\" detail.\n\nHmm ... I'm good with the concept of making JOIN syntax more prominent,\nbut I don't much like this patch. I think it's fundamentally wrong to\ndescribe from_item as disjoint from join_expression, and you're\ngoing to make people more confused not less so by doing that.\n\nIMO there's nothing really wrong with the synopsis. The problem is\nin the \"FROM Clause\" section, which dives headfirst into the weedy\ndetails without any context. What do you think of adding an\nintroductory para or two in that section, saying that the FROM\nclause is built from base tables possibly joined into join expressions?\nYou sort of have that here, but it's pretty terse still. Maybe it\nwould be helpful also to separate the subsequent list of syntax\ndetails into base-table options and join syntax.\n\nNot sure what I think about moving LATERAL up. That's a sufficiently\nweird/obtuse thing that I think we're best off dealing with it last,\neven if that means we need a forward reference or two. I'd almost\nput it into its own sub-section of \"FROM Clause\".\n\nThere's also an argument that the reference page *should* be terse\nand the place to cater to novices is 7.2's \"Table Expressions\"\ndiscussion (which we could, but don't, link from the ref page).\nI'm not sure if there's any good way to rework that material to make\nit clearer, but there are definitely bits of it that I don't find\nvery well-written. There might be an argument for jettisoning\nsome details (like the obsolete \"table*\" notation) from 7.2\naltogether and covering those only in the ref page.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Jul 2022 17:33:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Select Reference Page - Make Join Syntax More Prominent"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 2:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > Looking again at the SELECT Reference page while helping a novice user I\n> > was once again annoyed but how the most common query syntax form for the\n> > FROM clause is buried within a bunch of \"how to generate a table\" detail.\n>\n> Hmm ... I'm good with the concept of making JOIN syntax more prominent,\n> but I don't much like this patch. I think it's fundamentally wrong to\n> describe from_item as disjoint from join_expression, and you're\n> going to make people more confused not less so by doing that.\n>\n\nI'm not so sure about this - but in any case, as you note below, some of\nthis probably would be better placed in Chapter 7. My impression is that\nthis aspect is dense and confusing enough as-is that people are learning\nthe syntax elsewhere and not worrying about how we express it here. My\ndesire is to bring some design theory aspects to the docs and have the\nsyntax expression align with the design. I'll do another pass to try and\nhopefully find some middle ground here; or be more convincing.\n\n\n> IMO there's nothing really wrong with the synopsis. The problem is\n> in the \"FROM Clause\" section, which dives headfirst into the weedy\n> details without any context. What do you think of adding an\n> introductory para or two in that section, saying that the FROM\n> clause is built from base tables possibly joined into join expressions?\n> You sort of have that here, but it's pretty terse still. Maybe it\n> would be helpful also to separate the subsequent list of syntax\n> details into base-table options and join syntax.\n>\n\nI really don't think focusing on base tables is the best choice here.\nFrankly, those end up being fairly trivial. 
When you start adding lateral\n(especially if introduced by comma instead of a join), and subqueries more\ngenerally, that understanding how the pieces compose becomes more useful.\nAnd by disenjoining the from_item and join_item it becomes easier to give\neach a purpose and describe how they relate to each other, rather than\ntrying to paper over their differences. If anything I think that having\nfrom_item be used within the join_item syntax and directly under FROM maybe\nbe detrimental. Instead, have a term just for the items comma-separated in\nthe FROM clause, the ones that are CROSS JOINed and use the WHERE clause\nfor restrictions (or maybe not...).\n\n\n>\n> Not sure what I think about moving LATERAL up. That's a sufficiently\n> weird/obtuse thing that I think we're best off dealing with it last,\n>\n\nStrongly disagree given how useful set returning functions can be. Part of\nthe reason I'm doing this now is a recent spate of questions where the\nadvice that I gave was to write an SRF in a joined lateral, what I consider\nat least to be the idiomatic query structure for that use case.\n\n\n\n> There's also an argument that the reference page *should* be terse\n> and the place to cater to novices is 7.2's \"Table Expressions\"\n> discussion (which we could, but don't, link from the ref page).\n> I'm not sure if there's any good way to rework that material to make\n> it clearer, but there are definitely bits of it that I don't find\n> very well-written. There might be an argument for jettisoning\n> some details (like the obsolete \"table*\" notation) from 7.2\n> altogether and covering those only in the ref page.\n>\n>\nAgreed, I will see about allocating material between the two sections\nbetter on the next pass.\n\nI probably need to understand ROWS FROM syntax better as well to make sure\nit fits into the mental model I am trying to create.\n\nDavid J.",
"msg_date": "Fri, 15 Jul 2022 17:38:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Select Reference Page - Make Join Syntax More Prominent"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a scenario where max_wal_size = 10GB and wal_recycle = on, the\npostgres starts to recycle and keep WAL files for future use,\neventually around 600~ WAL files have been kept in the pg_wal\ndirectory. The checkpoints were happening at regular intervals. But\nthe disk was about to get full (of course scaling up disk is an\noption) but to avoid \"no space left on device\" crashes, changed\nmax_wal_size = 5GB, and issued a checkpoint, thinking that postgres\nwill free up the 5GB of disk space. It seems like that's not the case\nbecause postgres will not remove future WAL files even after\nmax_wal_size is reduced, but if it can delete the future WAL file(s)\nimmediately, the server would have had 5GB free disk space to keep the\nserver up avoiding crash and meanwhile disk scaling can be performed.\n\nCan postgres delete the recycled future WAL files once max_wal_size is\nreduced and/or wal_recycle is set to off?\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 6 May 2022 21:50:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Can postgres ever delete the recycled future WAL files to free-up\n disk space if max_wal_size is reduced or wal_recycle is set to off?"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Can postgres delete the recycled future WAL files once max_wal_size is\n> reduced and/or wal_recycle is set to off?\n\nA checkpoint should do that, see RemoveOldXlogFiles.\n\nMaybe you have a broken WAL archiving setup, or something else preventing\nremoval of old WAL files?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 May 2022 12:50:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can postgres ever delete the recycled future WAL files to free-up\n disk space if max_wal_size is reduced or wal_recycle is set to off?"
},
{
"msg_contents": "On Fri, May 6, 2022 at 10:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > Can postgres delete the recycled future WAL files once max_wal_size is\n> > reduced and/or wal_recycle is set to off?\n>\n> A checkpoint should do that, see RemoveOldXlogFiles.\n>\n> Maybe you have a broken WAL archiving setup, or something else preventing\n> removal of old WAL files?\n\nThanks Tom. My test case is simple [1], no archiving, no replication\nslots - just plain initdb-ed cluster. My expectation is that whenever\nmax_wal_size/wal_recycle is changed from the last checkpoint value,\npostgres must be able to delete \"optionally\" \"all or some of\" the\nfuture WAL files to free-up some disk space (which is about to get\nfull) so that I can avoid server crashes and I will have some time to\ngo scale the disk.\n\n[1]\nshow min_wal_size;\nshow max_wal_size;\nshow wal_recycle;\n\ndrop table foo;\ncreate table foo(col int);\n\n-- run below pg_switch_wal and insert statements 9 times.\nselect pg_switch_wal();\ninsert into foo select * from generate_series(1, 1000);\n\nselect redo_wal_file from pg_control_checkpoint();\n\ncheckpoint;\n--there will be around 10 recycled WAL future WAL files.\n\nalter system set max_wal_size to '240MB';\nselect pg_reload_conf();\nshow max_wal_size;\n\ncheckpoint;\n--future WAL files will not be deleted.\n\nalter system set min_wal_size to '24MB';\nselect pg_reload_conf();\nshow min_wal_size;\n\ncheckpoint;\n--future WAL files will not be deleted.\n\nalter system set wal_recycle to off;\nselect pg_reload_conf();\nshow wal_recycle;\n\ncheckpoint;\n--future WAL files will not be deleted.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 9 May 2022 18:47:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can postgres ever delete the recycled future WAL files to free-up\n disk space if max_wal_size is reduced or wal_recycle is set to off?"
},
{
"msg_contents": "On Mon, May 9, 2022 at 6:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, May 6, 2022 at 10:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > Can postgres delete the recycled future WAL files once max_wal_size is\n> > > reduced and/or wal_recycle is set to off?\n> >\n> > A checkpoint should do that, see RemoveOldXlogFiles.\n> >\n> > Maybe you have a broken WAL archiving setup, or something else preventing\n> > removal of old WAL files?\n>\n> Thanks Tom. My test case is simple [1], no archiving, no replication\n> slots - just plain initdb-ed cluster. My expectation is that whenever\n> max_wal_size/wal_recycle is changed from the last checkpoint value,\n> postgres must be able to delete \"optionally\" \"all or some of\" the\n> future WAL files to free-up some disk space (which is about to get\n> full) so that I can avoid server crashes and I will have some time to\n> go scale the disk.\n>\n> [1]\n> show min_wal_size;\n> show max_wal_size;\n> show wal_recycle;\n>\n> drop table foo;\n> create table foo(col int);\n>\n> -- run below pg_switch_wal and insert statements 9 times.\n> select pg_switch_wal();\n> insert into foo select * from generate_series(1, 1000);\n>\n> select redo_wal_file from pg_control_checkpoint();\n>\n> checkpoint;\n> --there will be around 10 recycled WAL future WAL files.\n>\n> alter system set max_wal_size to '240MB';\n> select pg_reload_conf();\n> show max_wal_size;\n>\n> checkpoint;\n> --future WAL files will not be deleted.\n>\n> alter system set min_wal_size to '24MB';\n> select pg_reload_conf();\n> show min_wal_size;\n>\n> checkpoint;\n> --future WAL files will not be deleted.\n>\n> alter system set wal_recycle to off;\n> select pg_reload_conf();\n> show wal_recycle;\n>\n> checkpoint;\n> --future WAL files will not be deleted.\n\nHi, I'm thinking out loud - can we add all the recycled WAL files to a\nsorted list 
(oldest recycled WAL file to new recycled WAL file) and\nthen during checkpoint, if the max_wal_size is reduced or wal_recycle\nis set to off, then start deleting the future WAL files from the end\nof the sorted list. Upon restart of the server, if required, the\nsorted list of future WAL files can be rebuilt.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 13 May 2022 18:05:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Can postgres ever delete the recycled future WAL files to free-up\n disk space if max_wal_size is reduced or wal_recycle is set to off?"
},
{
"msg_contents": "On Fri, May 06, 2022 at 09:50:26PM +0530, Bharath Rupireddy wrote:\n> It seems like that's not the case because postgres will not remove future WAL\n> files even after max_wal_size is reduced,\n\nIn the past, I've had to generate synthetic write traffic and checkpoints to\nget WAL to shrink. +1 to make it respect max_wal_size on its own.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 13 May 2022 07:41:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Can postgres ever delete the recycled future WAL files to\n free-up disk space if max_wal_size is reduced or wal_recycle is set to off?"
},
{
"msg_contents": "On 5/13/22 05:35, Bharath Rupireddy wrote:\n> Hi, I'm thinking out loud - can we add all the recycled WAL files to a\n> sorted list (oldest recycled WAL file to new recycled WAL file) and\n> then during checkpoint, if the max_wal_size is reduced or wal_recycle\n> is set to off, then start deleting the future WAL files from the end\n> of the sorted list. Upon restart of the server, if required, the\n> sorted list of future WAL files can be rebuilt.\n\n(This is registered in CF, but there doesn't seem to be a patch as part\nof this thread, so I don't think there's anything to review. I've closed\nit out to avoid further triage, but if I've somehow missed a patch feel\nfree to resurrect it -- and maybe point the patch out in an annotation\nor something.)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 15:03:19 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Can postgres ever delete the recycled future WAL files to free-up\n disk space if max_wal_size is reduced or wal_recycle is set to off?"
}
] |
[
{
"msg_contents": "Folks,\n\nPlease find attached a patch to change the sub-second granularity of\nlog timestamps from milliseconds to microseconds.\n\nI started out working on a longer patch that will give people\nmore choices than whole seconds and microseconds, but there were a lot\nof odd corner cases, including what I believe might have been a\nrequirement for C11, should we wish to get sub-microsecond\ngranularity.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 8 May 2022 20:44:51 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Finer grain log timestamps"
},
{
"msg_contents": "On Sun, May 08, 2022 at 08:44:51PM +0000, David Fetter wrote:\n> CREATE TABLE postgres_log\n> (\n> - log_time timestamp(3) with time zone,\n> + log_time timestamp(6) with time zone,\n\nPlease also update the corresponding thing in doc/src/sgml/file-fdw.sgml\n\nIt looks like the patch I suggested to include a reminder about this was never\napplied.\nhttps://www.postgresql.org/message-id/10995044.nUPlyArG6x@aivenronan\n\nSee also:\ne568ed0eb07239b7e53d948565ebaeb6f379630f\n0830d21f5b01064837dc8bd910ab31a5b7a1101a\n\n\n",
"msg_date": "Sun, 8 May 2022 16:12:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "On Sun, May 08, 2022 at 04:12:27PM -0500, Justin Pryzby wrote:\n> On Sun, May 08, 2022 at 08:44:51PM +0000, David Fetter wrote:\n> > CREATE TABLE postgres_log\n> > (\n> > - log_time timestamp(3) with time zone,\n> > + log_time timestamp(6) with time zone,\n> \n> Please also update the corresponding thing in doc/src/sgml/file-fdw.sgml\n\nThanks for looking this over, and please find attached the next\nversion.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sun, 8 May 2022 22:02:22 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n\n> diff --git src/backend/utils/error/elog.c src/backend/utils/error/elog.c\n> index 55ee5423af..4698e32ab7 100644\n> --- src/backend/utils/error/elog.c\n> +++ src/backend/utils/error/elog.c\n> @@ -2295,7 +2295,7 @@ char *\n> get_formatted_log_time(void)\n> {\n> \tpg_time_t\tstamp_time;\n> -\tchar\t\tmsbuf[13];\n> +\tchar\t\tmsbuf[16];\n\nNow that it holds microseconds (µs), not milliseconds (ms), should it\nnot be renamed to `usbuf`?\n\n- ilmari\n\n\n",
"msg_date": "Mon, 09 May 2022 11:21:26 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "On Mon, May 09, 2022 at 11:21:26AM +0100, Dagfinn Ilmari Mannsåker wrote:\n> David Fetter <david@fetter.org> writes:\n> \n> > diff --git src/backend/utils/error/elog.c src/backend/utils/error/elog.c\n> > index 55ee5423af..4698e32ab7 100644\n> > --- src/backend/utils/error/elog.c\n> > +++ src/backend/utils/error/elog.c\n> > @@ -2295,7 +2295,7 @@ char *\n> > get_formatted_log_time(void)\n> > {\n> > \tpg_time_t\tstamp_time;\n> > -\tchar\t\tmsbuf[13];\n> > +\tchar\t\tmsbuf[16];\n> \n> Now that it holds microseconds (µs), not milliseconds (ms), should it\n> not be renamed to `usbuf`?\n\nGood point.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Mon, 13 Jun 2022 07:03:27 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "On Sun, May 8, 2022 at 4:45 PM David Fetter <david@fetter.org> wrote:\n> Please find attached a patch to change the sub-second granularity of\n> log timestamps from milliseconds to microseconds.\n\nWhy is this a good idea?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jun 2022 15:55:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sun, May 8, 2022 at 4:45 PM David Fetter <david@fetter.org> wrote:\n>> Please find attached a patch to change the sub-second granularity of\n>> log timestamps from milliseconds to microseconds.\n\n> Why is this a good idea?\n\nI can imagine that some people would have a use for microsecond\nresolution in log files, and I can also imagine that as machines\nget faster more people will want that. As against that, this\nwill bloat log files by a non-microscopic amount, and it's pretty\nlikely to break some log-scanning tools too. It's unclear to me\nthat that's a tradeoff we should force on everyone.\n\nI think a proposal less likely to have push-back would be to invent\na different log_line_prefix %-escape to produce microseconds.\nSadly, \"%u\" is already taken, but perhaps we could use \"%U\"?\n\nA different line of thought is to extend %t to provide a precision\nfield a la sprintf, so that for example \"%.3t\" is equivalent to\n\"%m\" and \"%.6t\" does what David wants, and we won't have to\nsearch for a new escape letter when the day arrives that\nsomebody wants nanosecond resolution. The same could be done\nwith %n, avoiding the need to find a different escape letter\nfor that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Jun 2022 16:22:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 04:22:42PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Sun, May 8, 2022 at 4:45 PM David Fetter <david@fetter.org> wrote:\n> >> Please find attached a patch to change the sub-second granularity of\n> >> log timestamps from milliseconds to microseconds.\n> \n> > Why is this a good idea?\n> \n> I can imagine that some people would have a use for microsecond\n> resolution in log files, and I can also imagine that as machines\n> get faster more people will want that.\n\nYour imagination matches situations I've seen in production where\nthere was some ambiguity as to what happened when inside a millisecond\nboundary, and I'm sure I'm not alone in this. I've gotten this\nrequest from at least three people who to my knowledge knew nothing\nabout each other, and as I recall, the first time someone brought it\nup to me was over five years back.\n\n> As against that, this will bloat log files by a non-microscopic\n> amount, and it's pretty likely to break some log-scanning tools too.\n\nThree bytes per line, and log-scanning parsers that finicky are\nalready breaking all the time, respectively.\n\n> It's unclear to me that that's a tradeoff we should force on\n> everyone.\n\nThe tradeoff we're forcing on people at the moment is a loss of\nprecision they didn't ask for, implemented by some extra instructions\nthey didn't ask us to execute in a part of the code that's a hot path\nat exactly the times when the machine is busiest.\n\n> I think a proposal less likely to have push-back would be to invent\n> a different log_line_prefix %-escape to produce microseconds.\n> Sadly, \"%u\" is already taken, but perhaps we could use \"%U\"?\n> \n> A different line of thought is to extend %t to provide a precision\n> field a la sprintf, so that for example \"%.3t\" is equivalent to\n> \"%m\" and \"%.6t\" does what David wants, and we won't have to\n> search for a new escape letter when the day arrives that\n> somebody wants nanosecond resolution. The same could be done\n> with %n, avoiding the need to find a different escape letter\n> for that.\n\nI'll build this more sprintf-like thing if not doing so prevents the\nchange from happening, but frankly, I don't really see a point in it\nbecause the next \"log timestamps at some random negative power of 10\nsecond granularity\" requirement I see will be the first.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Tue, 14 Jun 2022 00:05:45 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "On 2022-Jun-14, David Fetter wrote:\n\n> On Mon, Jun 13, 2022 at 04:22:42PM -0400, Tom Lane wrote:\n\n> > A different line of thought is to extend %t to provide a precision\n> > field a la sprintf, so that for example \"%.3t\" is equivalent to\n> > \"%m\" and \"%.6t\" does what David wants, and we won't have to\n> > search for a new escape letter when the day arrives that\n> > somebody wants nanosecond resolution. The same could be done\n> > with %n, avoiding the need to find a different escape letter\n> > for that.\n> \n> I'll build this more sprintf-like thing if not doing so prevents the\n> change from happening, but frankly, I don't really see a point in it\n> because the next \"log timestamps at some random negative power of 10\n> second granularity\" requirement I see will be the first.\n\nDo we *have* to provide support for arbitrary numbers of digits, though?\nWe could provide support for only %.3t and %.6t specifically, and not\nworry about other cases (error: width not supported). When somebody\nwants %.9t in ten years, we won't have to fight for which letter to\npick. And I agree that widening %m for everybody without recourse is\nnot great.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 20 Jun 2022 13:05:14 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Do we *have* to provide support for arbitrary numbers of digits, though?\n> We could provide support for only %.3t and %.6t specifically, and not\n> worry about other cases (error: width not supported).\n\nIf I were coding it, I would allow only exactly 1 digit (%.Nt) to simplify\nthe parsing side of things and bound the required buffer size. Without\nhaving written it, it's not clear to me whether further restricting the\nset of supported values would save much code. I will point out, though,\nthat throwing an error during log_line_prefix processing will lead\nstraight to infinite recursion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Jun 2022 11:01:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "On Mon, 20 Jun 2022 at 11:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>\n\n\n> If I were coding it, I would allow only exactly 1 digit (%.Nt) to simplify\n> the parsing side of things and bound the required buffer size. Without\n> having written it, it's not clear to me whether further restricting the\n> set of supported values would save much code. I will point out, though,\n> that throwing an error during log_line_prefix processing will lead\n> straight to infinite recursion.\n>\n\nI would parse the log_line_prefix when it is set. Then if there is a\nproblem you just log it using whatever format is in effect and don't change\nthe setting. Then the worst that happens is that logs show up in a format\nlog processing isn't prepared to accept.\n\nThat being said, I think I fall in the “just start putting more digits in\nthe log” camp, although it is conceivable the counter arguments might be\nconvincing.\n",
"msg_date": "Mon, 20 Jun 2022 13:19:05 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
},
{
"msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3683/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 2 Aug 2022 11:27:33 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Finer grain log timestamps"
}
]
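The one-digit precision-field idea discussed in the thread above (`%.3t` equivalent to `%m`, `%.6t` giving microseconds) can be sketched as a pair of standalone helpers. This is illustrative only, not PostgreSQL code; the helper names are invented:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of recognizing a "%.Nt" log_line_prefix escape with exactly one
 * precision digit, as suggested upthread.  'p' points just past the '%'.
 * Returns the precision (0-9), or -1 if this is not such an escape.
 */
static int
parse_timestamp_precision(const char *p)
{
    if (p[0] == '.' && p[1] >= '0' && p[1] <= '9' && p[2] == 't')
        return p[1] - '0';
    return -1;
}

/*
 * Render a microsecond count (0..999999) as a truncated fractional-second
 * suffix: 123456 usec at precision 3 gives ".123" (today's %m behavior),
 * and precision 6 gives ".123456".
 */
static void
format_fraction(long usec, int precision, char *buf, size_t buflen)
{
    long        divisor = 1;
    int         i;

    for (i = precision; i < 6; i++)
        divisor *= 10;
    snprintf(buf, buflen, ".%0*ld", precision, usec / divisor);
}
```

Restricting the field to a single digit keeps the parser a three-character lookahead and bounds the buffer the caller must supply, which sidesteps the infinite-recursion hazard of raising errors during prefix processing.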
[
{
"msg_contents": "Hi,\r\n\r\nPlease see attached draft for the 2022-05-12 release announcement.\r\n\r\nOne change: while normally we start the EOL notices for \r\n$NEXT_VERSION_EOL ~6 months from $EOL_DATE, I moved the EOL section up \r\nto be closer to the top of the announcement. This comes from feedback \r\naround ensuring we're giving users enough notice (reminders) in advance \r\nabout the EOL.\r\n\r\nPlease provide feedback on accuracy and if there are any notable omissions.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sun, 8 May 2022 16:51:05 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "2022-05-12 release announcement draft"
},
{
"msg_contents": "On Sun, May 8, 2022, at 5:51 PM, Jonathan S. Katz wrote:\n> Hi,\n> \n> Please see attached draft for the 2022-05-12 release announcement.\nLGTM.\n\n> Please provide feedback on accuracy and if there are any notable omissions.\n* Several fixes for `contrib/pageinspect` to improve overall stability.\n* Disable batch insertion in `contrib/postgres_fdw` when\n`BEFORE INSERT ... FOR EACH ROW` triggers exist on the foreign table.\n\nShould you omit 'contrib'? You mentioned ltree but don't say 'contrib/ltree'. It\nalso isn't used in a previous release announcement (see postgres_fdw).\n\n\n[1] https://www.postgresql.org/about/news/postgresql-142-136-1210-1115-and-1020-released-2402/\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n",
"msg_date": "Mon, 09 May 2022 10:43:34 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: 2022-05-12 release announcement draft"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Please provide feedback on accuracy and if there are any notable omissions.\n\nPlease remove this bit:\n\n> * Infinite endpoints are now disallowed in the timestamp variants of\n> `generate_series()`.\n\nPer last-minute discussion [1], I'm going to revert that change for\nnow. Maybe we'll still end up doing it, but there is more there\nthan meets the eye, and there's no time left for leisurely discussion.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3603504.1652068977%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 09 May 2022 10:06:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 2022-05-12 release announcement draft"
},
{
"msg_contents": "On 5/9/22 9:43 AM, Euler Taveira wrote:\r\n> On Sun, May 8, 2022, at 5:51 PM, Jonathan S. Katz wrote:\r\n>> Hi,\r\n>>\r\n>> Please see attached draft for the 2022-05-12 release announcement.\r\n> LGTM.\r\n> \r\n>> Please provide feedback on accuracy and if there are any notable \r\n>> omissions.\r\n> * Several fixes for `contrib/pageinspect` to improve overall stability.\r\n> * Disable batch insertion in `contrib/postgres_fdw` when\r\n> `BEFORE INSERT ... FOR EACH ROW` triggers exist on the foreign table.\r\n> \r\n> Should you omit 'contrib'? You mentioned ltree but don't say \r\n> 'contrib/ltree'. It\r\n> also isn't used in a previous release announcement (see postgres_fdw).\r\n> \r\n> \r\n> [1] \r\n> https://www.postgresql.org/about/news/postgresql-142-136-1210-1115-and-1020-released-2402/ \r\n> <https://www.postgresql.org/about/news/postgresql-142-136-1210-1115-and-1020-released-2402/>\r\n\r\nHm, it looks like this is inconsistent with other announcements, as \r\nthose have the \"contrib\" prefix[1][2].\r\n\r\nI think in this specific case, `ltree` is the data type found in the \r\n`contrib/ltree` extension.\r\n\r\nYou raise a good point, which is in the release announcement, should we \r\nprefix contrib modules with \"contrib\"? Perhaps instead of shorthand we \r\nspell it out, e.g. \"Several fixes for the `pageinspect` module...\"\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/about/news/postgresql-134-128-1113-1018-9623-and-14-beta-3-released-2277/\r\n[2] \r\nhttps://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/",
"msg_date": "Mon, 9 May 2022 10:54:27 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2022-05-12 release announcement draft"
},
{
"msg_contents": "On 5/9/22 10:06 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Please provide feedback on accuracy and if there are any notable omissions.\r\n> \r\n> Please remove this bit:\r\n> \r\n>> * Infinite endpoints are now disallowed in the timestamp variants of\r\n>> `generate_series()`.\r\n> \r\n> Per last-minute discussion [1], I'm going to revert that change for\r\n> now. Maybe we'll still end up doing it, but there is more there\r\n> than meets the eye, and there's no time left for leisurely discussion.\r\n\r\nGot it. Removed it from the canonical copy. I'll post the draft later \r\npending outcome of the \"how to refer to contrib modules\" discussion.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 9 May 2022 10:54:56 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2022-05-12 release announcement draft"
},
{
"msg_contents": "On Mon, May 9, 2022, at 11:54 AM, Jonathan S. Katz wrote:\n> You raise a good point, which is in the release announcement, should we \n> prefix contrib modules with \"contrib\"? Perhaps instead of shorthand we \n> spell it out, e.g. \"Several fixes for the `pageinspect` module...\"\nNowadays 'contrib' refers to the physical structure (directory). I wouldn't\nmention it because it is a development/packaging detail. We use the terminology\nadditional supplied module/program to refer to this piece of software which is\nkept in the Postgres repository but can be optionally installed. pageinspect\n(possibly with the URL) is clear enough. However, if you don't like the\nshorthand, 'pageinspect extension' or 'pageinspect module' are good options.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n",
"msg_date": "Mon, 09 May 2022 12:15:52 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: 2022-05-12 release announcement draft"
},
{
"msg_contents": "On Mon, May 9, 2022 at 11:16 AM Euler Taveira <euler@eulerto.com> wrote:\n> On Mon, May 9, 2022, at 11:54 AM, Jonathan S. Katz wrote:\n> You raise a good point, which is in the release announcement, should we\n> prefix contrib modules with \"contrib\"? Perhaps instead of shorthand we\n> spell it out, e.g. \"Several fixes for the `pageinspect` module...\"\n>\n> Nowadays 'contrib' refers to the physical structure (directory). I wouldn't\n> mention it because it is a development/packaging detail. We use the terminology\n> additional supplied module/program to refer to this piece of software which is\n> kept in the Postgres repository but can be optionally installed. pageinspect\n> (possibly with the URL) is clear enough. However, if you don't like the\n> shorthand, 'pageinspect extension' or 'pageinspect module' are good options.\n>\n\n+1 on this line of thinking from my pov.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Mon, 9 May 2022 11:19:43 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: 2022-05-12 release announcement draft"
},
{
"msg_contents": "On 5/9/22 11:19 AM, Robert Treat wrote:\r\n> On Mon, May 9, 2022 at 11:16 AM Euler Taveira <euler@eulerto.com> wrote:\r\n>> On Mon, May 9, 2022, at 11:54 AM, Jonathan S. Katz wrote:\r\n>> You raise a good point, which is in the release announcement, should we\r\n>> prefix contrib modules with \"contrib\"? Perhaps instead of shorthand we\r\n>> spell it out, e.g. \"Several fixes for the `pageinspect` module...\"\r\n>>\r\n>> Nowadays 'contrib' refers to the physical structure (directory). I wouldn't\r\n>> mention it because it is a development/packaging detail. We use the terminology\r\n>> additional supplied module/program to refer to this piece of software which is\r\n>> kept in the Postgres repository but can be optionally installed. pageinspect\r\n>> (possibly with the URL) is clear enough. However, if you don't like the\r\n>> shorthand, 'pageinspect extension' or 'pageinspect module' are good options.\r\n>>\r\n> \r\n> +1 on this line of thinking from my pov.\r\n\r\nPer this line of thinking, here is the next revision that drops \r\n\"contrib\" and instead links directly to the docs.\r\n\r\nJonathan",
"msg_date": "Mon, 9 May 2022 22:06:08 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: 2022-05-12 release announcement draft"
}
]
[
{
"msg_contents": "Restarting a large instance took twice as long as I expected due to not\nchecking interrupts in (at least) statext_ndistinct_build. Long enough that I\nattached (and was able to attach) a debugger to verify, which I think is too\nlong. I think it could cause issues for a high-availability cluster or other\nscript if it takes too long to shut down.\n\nThe tables being auto-analyzed have 9 extended stats objects, each with stats\ntarget=10. 7 of those are (ndistinct) stats on 4 simple columns plus 1\nexpression (5 total). And the other 2 stats objects are expressional stats\n(necessarily on a single expression).\n\n\n",
"msg_date": "Sun, 8 May 2022 19:01:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Sun, May 08, 2022 at 07:01:08PM -0500, Justin Pryzby wrote:\n> Restarting a large instance took twice as long as I expected due to not\n> checking interrupts in (at least) statext_ndistinct_build. Long enough that I\n> attached (and was able to attach) a debugger to verify, which I think is too\n> long. I think it could cause issues for an high-availability cluster or other\n> script if it takes too long to shut down.\n\nHmm. That's annoying.\n\n> The tables being auto-analyzed have 9 exteneded stats objects, each with stats\n> target=10. 7 of those are (ndistinct) stats on 4 simple columns plus 1\n> expression (5 total). And the other 2 stats objects are expressional stats\n> (necessarily on a single expression).\n\nHow long can the backend remain unresponsive? I don't think that\nanybody would object to the addition of some CHECK_FOR_INTERRUPTS() in\nareas where it would be efficient to make the shutdown quicker, but\nwe need to think carefully about the places where we'd want to add\nthese.\n--\nMichael",
"msg_date": "Mon, 9 May 2022 12:31:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> How long can the backend remain unresponsive? I don't think that\n> anybody would object to the addition of some CHECK_FOR_INTERRUPTS() in\n> areas where it would be efficient to make the shutdown quicker, but\n> we need to think carefully about the places where we'd want to add\n> these.\n\nCHECK_FOR_INTERRUPTS is really quite cheap, just a test-and-branch.\nI wouldn't put it in a *very* tight loop, but one test per row\nprocessed while gathering stats is unlikely to be a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 May 2022 23:36:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Sun, May 8, 2022 at 11:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > How long can the backend remain unresponsive? I don't think that\n> > anybody would object to the addition of some CHECK_FOR_INTERRUPTS() in\n> > areas where it would be efficient to make the shutdown quicker, but\n> > we need to think carefully about the places where we'd want to add\n> > these.\n>\n> CHECK_FOR_INTERRUPTS is really quite cheap, just a test-and-branch.\n> I wouldn't put it in a *very* tight loop, but one test per row\n> processed while gathering stats is unlikely to be a problem.\n\n+1. If we're finding things stalling that would be fixed by adding\nCHECK_FOR_INTERRUPTS(), we should generally just add it. In the\nunlikely event that this causes a performance problem, we can try to\nfigure out some other solution, but not responding to interrupts isn't\nthe right way to economize.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 May 2022 09:11:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Mon, May 09, 2022 at 09:11:37AM -0400, Robert Haas wrote:\n> On Sun, May 8, 2022 at 11:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> > > How long can the backend remain unresponsive? I don't think that\n> > > anybody would object to the addition of some CHECK_FOR_INTERRUPTS() in\n> > > areas where it would be efficient to make the shutdown quicker, but\n> > > we need to think carefully about the places where we'd want to add\n> > > these.\n> >\n> > CHECK_FOR_INTERRUPTS is really quite cheap, just a test-and-branch.\n> > I wouldn't put it in a *very* tight loop, but one test per row\n> > processed while gathering stats is unlikely to be a problem.\n> \n> +1. If we're finding things stalling that would be fixed by adding\n> CHECK_FOR_INTERRUPTS(), we should generally just add it. In the\n> unlikely event that this causes a performance problem, we can try to\n> figure out some other solution, but not responding to interrupts isn't\n> the right way to economize.\n\nReproduce the problem for ndistinct and dependencies like:\n\nDROP TABLE t; CREATE TABLE t AS SELECT i A,1+i B,2+i C,3+i D,4+i E,5+i F, now() AS ts FROM generate_series(1.0, 99999.0)i; VACUUM t;\nDROP STATISTICS stxx; CREATE STATISTICS stxx (ndistinct) ON mod(a,14),mod(b,15),mod(c,16),mod(d,17),mod(e,18),mod(f,19) FROM t;\nANALYZE VERBOSE t;\n\nMaybe this should actually call vacuum_delay_point(), like\ncompute_scalar_stats(). For MCV, there seems to be no issue, since those\nfunctions are being called (but only for expressional stats). But maybe I've\njust failed to make a large enough, non-expressional MCV list for the problem\nto be apparent.\n\nThe patch is WIP, but whatever we end up with should probably be backpatched at\nleast to v14, where expressional indexes were introduced, since they're likely\nto have more columns, and are slower to compute.\n\ndiff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c\nindex e8f71567b4e..e5538dcd4e1 100644\n--- a/src/backend/statistics/dependencies.c\n+++ b/src/backend/statistics/dependencies.c\n@@ -19,6 +19,7 @@\n #include \"catalog/pg_statistic_ext.h\"\n #include \"catalog/pg_statistic_ext_data.h\"\n #include \"lib/stringinfo.h\"\n+#include \"miscadmin.h\"\n #include \"nodes/nodeFuncs.h\"\n #include \"nodes/nodes.h\"\n #include \"nodes/pathnodes.h\"\n@@ -383,6 +384,8 @@ statext_dependencies_build(StatsBuildData *data)\n \t\t\tMVDependency *d;\n \t\t\tMemoryContext oldcxt;\n \n+\t\t\tCHECK_FOR_INTERRUPTS();\n+\n \t\t\t/* release memory used by dependency degree calculation */\n \t\t\toldcxt = MemoryContextSwitchTo(cxt);\n \ndiff --git a/src/backend/statistics/mcv.c b/src/backend/statistics/mcv.c\nindex dd67b19b6fa..9db1d0325cd 100644\n--- a/src/backend/statistics/mcv.c\n+++ b/src/backend/statistics/mcv.c\n@@ -269,6 +269,8 @@ statext_mcv_build(StatsBuildData *data, double totalrows, int stattarget)\n \t\tSortItem **freqs;\n \t\tint\t\t *nfreqs;\n \n+\t\t// CHECK_FOR_INTERRUPTS();\n+\n \t\t/* used to search values */\n \t\ttmp = (MultiSortSupport) palloc(offsetof(MultiSortSupportData, ssup)\n \t\t\t\t\t\t\t\t\t\t+ sizeof(SortSupportData));\ndiff --git a/src/backend/statistics/mvdistinct.c b/src/backend/statistics/mvdistinct.c\nindex 6ade5eff78c..3b739ab7ca0 100644\n--- a/src/backend/statistics/mvdistinct.c\n+++ b/src/backend/statistics/mvdistinct.c\n@@ -29,6 +29,7 @@\n #include \"catalog/pg_statistic_ext.h\"\n #include \"catalog/pg_statistic_ext_data.h\"\n #include \"lib/stringinfo.h\"\n+#include \"miscadmin.h\"\n #include \"statistics/extended_stats_internal.h\"\n #include \"statistics/statistics.h\"\n #include \"utils/fmgrprotos.h\"\n@@ -114,6 +115,8 @@ statext_ndistinct_build(double totalrows, StatsBuildData *data)\n \t\t\tMVNDistinctItem *item = &result->items[itemcnt];\n \t\t\tint\t\t\tj;\n \n+\t\t\tCHECK_FOR_INTERRUPTS();\n+\n \t\t\titem->attributes = palloc(sizeof(AttrNumber) * k);\n \t\t\titem->nattributes = k;\n \n\n\n",
"msg_date": "Fri, 3 Jun 2022 10:28:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Fri, Jun 03, 2022 at 10:28:37AM -0500, Justin Pryzby wrote:\n> Maybe this should actually call vacuum_delay_point(), like\n> compute_scalar_stats().\n\nI think vacuum_delay_point() would be wrong for these cases, since they don't\ncall \"fetchfunc()\", like the other places which use vacuum_delay_point.\n\n> For MCV, there seems to be no issue, since those\n> functions are being called (but only for expressional stats). But maybe I've\n> just failed to make a large enough, non-expressional MCV list for the problem\n> to be apparent.\n\nI reproduced the issue with MCV like this:\n\nDROP TABLE IF EXISTS t; CREATE TABLE t AS SELECT a::text,b::text,c::text,d::text,e::text,f::text,g::text FROM generate_series(1000001,1000006)a,generate_series(1000001,1000006)b,generate_series(1000001,1000006)c,generate_series(1000001,1000006)d,generate_series(1000001,1000006)e,generate_series(1000001,1000006)f,generate_series(1000001,1000006)g,generate_series(1000001,1000006)h; VACUUM t; \nDROP STATISTICS IF EXISTS stxx; CREATE STATISTICS stxx (mcv) ON a,b,c,d,e,f FROM t; ALTER STATISTICS stxx SET STATISTICS 9999; ANALYZE VERBOSE t;\n\nThis is slow (25 seconds) inside qsort:\n\n(gdb) bt\n#0 __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:1020\n#1 0x00005653d8686fac in varstrfastcmp_locale (a1p=0x5653dce67d54 \"1000004~\", len1=7, a2p=0x5653e895ffa4 \"1000004~\", len2=7, ssup=ssup@entry=0x5653d98c37b8) at varlena.c:2444\n#2 0x00005653d8687161 in varlenafastcmp_locale (x=94918188367184, y=94918384418720, ssup=0x5653d98c37b8) at varlena.c:2270\n#3 0x00005653d85134d8 in ApplySortComparator (ssup=0x5653d98c37b8, isNull2=<optimized out>, datum2=<optimized out>, isNull1=<optimized out>, datum1=<optimized out>) at ../../../src/include/utils/sortsupport.h:224\n#4 multi_sort_compare (a=0x7fa587b44e58, b=0x7fa5875f0dd0, arg=0x5653d98c37b0) at extended_stats.c:903\n#5 0x00005653d8712eed in qsort_arg (data=data@entry=0x7fa5875f0050, n=<optimized out>, n@entry=1679616, element_size=element_size@entry=24, compare=compare@entry=0x5653d8513483 <multi_sort_compare>, \n arg=arg@entry=0x5653d98c37b0) at ../../src/include/lib/sort_template.h:349\n#6 0x00005653d851415f in build_sorted_items (data=data@entry=0x7fa58f2e1050, nitems=nitems@entry=0x7ffe4f764e5c, mss=mss@entry=0x5653d98c37b0, numattrs=6, attnums=0x7fa58f2e1078) at extended_stats.c:1134\n#7 0x00005653d8515d84 in statext_mcv_build (data=data@entry=0x7fa58f2e1050, totalrows=totalrows@entry=1679616, stattarget=stattarget@entry=9999) at mcv.c:204\n#8 0x00005653d8513819 in BuildRelationExtStatistics (onerel=onerel@entry=0x7fa5b26ef658, inh=inh@entry=false, totalrows=1679616, numrows=numrows@entry=1679616, rows=rows@entry=0x7fa5a4103050, natts=natts@entry=7, \n vacattrstats=vacattrstats@entry=0x5653d98b76b0) at extended_stats.c:213\n\nThe fix seems to be to CHECK_FOR_INTERRUPTS() within multi_sort_compare().\nThat would supersede the other two CHECK_FOR_INTERRUPTS I'd proposed, and\nhandle mcv, depends, and ndistinct all at once.\n\nDoes that sound right ?\n\nFor MCV, there's also ~0.6sec spent in build_column_frequencies(), which (if\nneeded) would be addressed by adding CFI in sort_item_compare.\n\n\n",
"msg_date": "Sat, 4 Jun 2022 20:42:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Sat, Jun 04, 2022 at 08:42:33PM -0500, Justin Pryzby wrote:\n> The fix seems to be to CHECK_FOR_INTERRUPTS() within multi_sort_compare().\n> That would supercede the other two CHECK_FOR_INTERRUPTS I'd proposed, and\n> handle mcv, depends, and ndistinct all at once.\n\nHmm. I have to admit that adding a CFI() in multi_sort_compare()\nstresses me a bit as it is dependent on the number of rows involved,\nand it can be used as a qsort routine.\n--\nMichael",
"msg_date": "Mon, 6 Jun 2022 16:23:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Mon, Jun 06, 2022 at 04:23:34PM +0900, Michael Paquier wrote:\n> On Sat, Jun 04, 2022 at 08:42:33PM -0500, Justin Pryzby wrote:\n> > The fix seems to be to CHECK_FOR_INTERRUPTS() within multi_sort_compare().\n> > That would supercede the other two CHECK_FOR_INTERRUPTS I'd proposed, and\n> > handle mcv, depends, and ndistinct all at once.\n> \n> Hmm. I have to admit that adding a CFI() in multi_sort_compare()\n> stresses me a bit as it is dependent on the number of rows involved,\n> and it can be used as a qsort routine.\n\nThat's exactly the problem for which I showed a backtrace - it took 10s of\nseconds to do qsort, which is (uh) a human timescale and too long to be\nunresponsive, even if I create on a table with many rows a stats object with a\nlot of columns and a high stats target.",
"msg_date": "Fri, 17 Jun 2022 17:25:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Jun 06, 2022 at 04:23:34PM +0900, Michael Paquier wrote:\n>> Hmm. I have to admit that adding a CFI() in multi_sort_compare()\n>> stresses me a bit as it is dependent on the number of rows involved,\n>> and it can be used as a qsort routine.\n\n> That's exactly the problem for which I showed a backtrace - it took 10s of\n> seconds to do qsort, which is (uh) a human timescale and too long to be\n> unresponsive, even if I create on a table with many rows a stats object with a\n> lot of columns and a high stats target.\n\nHmm. On my machine, the example last shown upthread takes about 9\nseconds, which I agree is a mighty long time to be unresponsive\n--- but it appears that fully half of that elapses before we\nreach multi_sort_compare for the first time. The first half of\nthe ANALYZE run does seem to contain some CFI calls, but they\nare not exactly thick on the ground there either. So I'm feeling\nlike this patch isn't ambitious enough.\n\nI tried interrupting at a random point and then stepping, and\nlook what I hit after just a couple of steps:\n\n(gdb) s\nqsort_arg (data=data@entry=0x13161410, n=<optimized out>, n@entry=1679616, \n element_size=element_size@entry=16, \n compare=compare@entry=0x649450 <compare_scalars>, \n arg=arg@entry=0x7ffec539c0f0) at ../../src/include/lib/sort_template.h:353\n353 if (r == 0)\n(gdb) \n358 pc -= ST_POINTER_STEP;\n(gdb) \n359 DO_CHECK_FOR_INTERRUPTS();\n\nThat, um, piqued my interest. After a bit of digging,\nI modestly propose the attached. I'm not sure if it's\nokay to back-patch this, because maybe someone out there\nis relying on qsort() to be incapable of throwing an error\n--- but it'd solve the problem at hand and a bunch of other\nissues of the same ilk.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 01 Jul 2022 19:19:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Fri, Jul 01, 2022 at 07:19:11PM -0400, Tom Lane wrote:\n> I tried interrupting at a random point and then stepping, and\n> look what I hit after just a couple of steps:\n\nI'd come up with the trick of setting\n SET backtrace_functions='ProcessInterrupts';\n\n> That, um, piqued my interest. After a bit of digging,\n> I modestly propose the attached. I'm not sure if it's\n> okay to back-patch this, because maybe someone out there\n> is relying on qsort() to be incapable of throwing an error\n> --- but it'd solve the problem at hand and a bunch of other\n> issues of the same ilk.\n\nConfirmed this fixes the 3 types of extended stats, at least for the example\ncases I made.\n\nIf it's okay to backpatch, v14 seems adequate since the problem is more\nprominent with expressional statistics (I'm sure that's why I hit it) ...\notherwise v15 would do.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 1 Jul 2022 19:07:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 01, 2022 at 07:19:11PM -0400, Tom Lane wrote:\n>> That, um, piqued my interest. After a bit of digging,\n>> I modestly propose the attached. I'm not sure if it's\n>> okay to back-patch this, because maybe someone out there\n>> is relying on qsort() to be incapable of throwing an error\n>> --- but it'd solve the problem at hand and a bunch of other\n>> issues of the same ilk.\n\n> Confirmed this fixes the 3 types of extended stats, at least for the example\n> cases I made.\n\n> If it's okay to backpatch, v14 seems adequate since the problem is more\n> prominent with expressional statistics (I'm sure that's why I hit it) ...\n> otherwise v15 would do.\n\nAfter thinking for awhile, my inclination is to apply my patch in\nHEAD and yours in v15/v14. We have a year to find out if there's\nany problem with the more invasive check if we do it in HEAD;\nbut there's a lot less margin for error in v14, and not that\nmuch in v15 either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 18:48:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-01 19:19:11 -0400, Tom Lane wrote:\n> That, um, piqued my interest. After a bit of digging,\n> I modestly propose the attached. I'm not sure if it's\n> okay to back-patch this, because maybe someone out there\n> is relying on qsort() to be incapable of throwing an error\n> --- but it'd solve the problem at hand and a bunch of other\n> issues of the same ilk.\n\nI'm worried about this. Interrupting random qsorts all over seems like it\ncould end up corrupting state. We do things like qsort in building snapshots\netc. Are we confident that we handle interrupts reliably in all those places?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Jul 2022 16:20:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "I wrote:\n>>> I modestly propose the attached. I'm not sure if it's\n>>> okay to back-patch this, because maybe someone out there\n>>> is relying on qsort() to be incapable of throwing an error\n\nI thought I'd better try to check that, and I soon found several\nplaces that *are* relying on qsort() to not be any smarter than the\nlibc version. Notably, guc.c qsorts its persistent-state GUC array\nduring add_guc_variable --- an interrupt there would leave the GUC\ndata structure in an inconsistent state. There are lesser hazards,\ntypically memory leaks, elsewhere. So I fear this can't fly as-is.\n\nNonetheless, it'd be a good idea to use an interruptible sort in\nas many places as we can. If we don't mind YA copy of the qsort\ncode, I'd be inclined to propose inventing qsort_interruptible\nwhich is like qsort_arg but also does CHECK_FOR_INTERRUPTS.\n(There seems no reason to support a non-arg version, since you\ncan just pass NULL.) Or we could redefine qsort_arg as allowing\ninterrupts, but that might still carry some surprises for existing\ncode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 19:37:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm worried about this. Interrupting random qsorts all over seems like it\n> could end up corrupting state. We do things like qsort in building snapshots\n> etc. Are we confident that we handle interrupts reliably in all those places?\n\nNope ... see my followup just now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Jul 2022 19:38:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
    "msg_contents": "On Tue, Jul 05, 2022 at 07:37:03PM -0400, Tom Lane wrote:\n> Nonetheless, it'd be a good idea to use an interruptible sort in\n> as many places as we can. If we don't mind YA copy of the qsort\n> code, I'd be inclined to propose inventing qsort_interruptible\n> which is like qsort_arg but also does CHECK_FOR_INTERRUPTS.\n> (There seems no reason to support a non-arg version, since you\n> can just pass NULL.) Or we could redefine qsort_arg as allowing\n> interrupts, but that might still carry some surprises for existing\n> code.\n\nAgreed to use a new and different API for this purpose. It seems like\na tool where one could write some new code and use an interruptible\nqsort() thinking that it is fine, still a properly-documented API has\nclear benefits.\n--\nMichael",
"msg_date": "Wed, 6 Jul 2022 08:49:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> qsort_interruptible\n\n+1\n\nFWIW compute_scalar_stats() was already noted as hot and perhaps a\ncandidate for specialisation (with more study needed), but\nqsort_interruptible() seems like a sane use of ~4KB of text to me.\n\n\n",
"msg_date": "Wed, 6 Jul 2022 12:44:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Wed, Jul 6, 2022 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> qsort_interruptible\n\n> +1\n\nSo here's a patch that does it that way. I first meant to put the\nnew file into src/port/, but after remembering that that directory\nhas no backend-only functions, I went with src/backend/utils/sort/\ninstead.\n\nFor the moment I contented myself with changing qsort[_arg] calls\nthat occur during statistics collection. We have a lot more of course,\nbut many of them can be expected to not be dealing with much data,\nand in some cases we might want some closer analysis to be sure there's\nno performance hit. So I'm inclined to not worry too much about the\nremaining qsort calls until somebody complains.\n\nThis could be back-patched to v14 without much worry, I should think.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 07 Jul 2022 15:50:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> On Wed, Jul 6, 2022 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> qsort_interruptible\n\n>> +1\n\n> So here's a patch that does it that way.\n\nHearing no comments, pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 16:31:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 1:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> On Wed, Jul 6, 2022 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> qsort_interruptible\n>\n> >> +1\n>\n> > So here's a patch that does it that way.\n>\n> Hearing no comments, pushed.\n>\n> regards, tom lane\n>\n>\n> Hi,\nLooking at the files under src/backend/utils/sort/, looks like license\nheader is missing from qsort_interruptible.c\n\nPlease consider the patch which adds license header to qsort_interruptible.c\n\nCheers",
"msg_date": "Tue, 12 Jul 2022 14:03:26 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> Looking at the files under src/backend/utils/sort/, looks like license\n> header is missing from qsort_interruptible.c\n\n[ shrug ... ] qsort.c and qsort_arg.c don't have that either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:09:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: should check interrupts in BuildRelationExtStatistics ?"
}
] |
[
{
"msg_contents": "\nI suppose we'll do pgindent soon.\n\nI have semi-manually gone through the various .l and .y files and fixed \nup the formatting of the C code to be more in line with pgindent style. \nMost of that code was old, so I don't expect this to be such a large \neffort going forward. I also think a lot of that code started as \ncopy-and-paste, so having \"correct\" style will also help future code use \nbetter style by default.\n\nThe patch is rather large, so I won't post it here, but you can see it here:\n\nhttps://github.com/petere/postgresql/compare/master...petere:indent-gram-scan.patch\n\nI propose to apply this after the main pgindent run is done.\n\n\n",
"msg_date": "Mon, 9 May 2022 10:02:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Indent C code in flex and bison files"
}
] |
[
{
"msg_contents": "\nprotocol.sgml has some unusual indentation that keeps getting added on \nto with new additions in that file. I have gone through and reindented \nit to be more in line with the style elsewhere, so future editing \nhopefully won't be that painful on the eye. ;-)\n\nThe diff can be seen here: \nhttps://github.com/petere/postgresql/compare/master...petere:doc-protocol-indent.patch\n\nI'm not aware of any pending patches in this area. I propose to apply \nthis around the time we do pgindent.\n\n\n",
"msg_date": "Mon, 9 May 2022 10:56:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Indent protocol.sgml"
}
] |
[
{
"msg_contents": "Now that the user can specify rows and columns to be omitted from the logical\nreplication [1], I suppose hiding rows and columns from the subscriber is an\nimportant use case. However, since the subscription connection user (i.e. the\nuser specified in the CREATE SUBSCRIPTION ... CONNECTION ... command) needs\nSELECT permission on the replicated table (on the publication side), he can\njust use another publication (which has different filters or no filters at\nall) to get the supposedly-hidden data replicated.\n\nDon't we need privileges on publication (e.g GRANT USAGE ON PUBLICATION ...)\nnow?\n\n[1] https://www.postgresql.org/docs/devel/sql-createpublication.html\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 09 May 2022 16:09:57 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Privileges on PUBLICATION"
},
{
"msg_contents": "On Mon, May 9, 2022, at 11:09 AM, Antonin Houska wrote:\n> Now that the user can specify rows and columns to be omitted from the logical\n> replication [1], I suppose hiding rows and columns from the subscriber is an\n> important use case. However, since the subscription connection user (i.e. the\n> user specified in the CREATE SUBSCRIPTION ... CONNECTION ... command) needs\n> SELECT permission on the replicated table (on the publication side), he can\n> just use another publication (which has different filters or no filters at\n> all) to get the supposedly-hidden data replicated.\nThe required privileges were not relaxed on publisher after the row filter and \ncolumn list features. It is not just to \"create another publication\". Create\npublications require CREATE privilege on databases (that is *not* granted to\nPUBLIC).If you have an untrusted user that could bypass your rules about hidden\ndata, it is better to review your user privileges.\n\npostgres=# CREATE ROLE foo REPLICATION LOGIN;\nCREATE ROLE\npostgres=# \\c - foo\nYou are now connected to database \"postgres\" as user \"foo\".\npostgres=> CREATE PUBLICATION pub1;\nERROR: permission denied for database postgres\n\nThe documentation [1] says\n\n\"The role used for the replication connection must have the REPLICATION\nattribute (or be a superuser).\"\n\nYou can use role foo for the replication connection but role foo couldn't be a\nsuperuser. In this case, even if role foo open a connection to database\npostgres, a publication cannot be created due to lack of privileges.\n\n> Don't we need privileges on publication (e.g GRANT USAGE ON PUBLICATION ...)\n> now?\nMaybe. We rely on CREATE privilege on databases right now. If you say that\nGRANT USAGE ON PUBLICATION is just a command that will have the same effect as\nREPLICATION property [1] has right now, I would say it won't. 
Are you aiming a\nfine-grained access control on publisher?\n\n\n[1] https://www.postgresql.org/docs/devel/logical-replication-security.html\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 09 May 2022 15:44:45 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On Tue, May 10, 2022 at 12:16 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, May 9, 2022, at 11:09 AM, Antonin Houska wrote:\n>\n> Now that the user can specify rows and columns to be omitted from the logical\n> replication [1], I suppose hiding rows and columns from the subscriber is an\n> important use case. However, since the subscription connection user (i.e. the\n> user specified in the CREATE SUBSCRIPTION ... CONNECTION ... command) needs\n> SELECT permission on the replicated table (on the publication side), he can\n> just use another publication (which has different filters or no filters at\n> all) to get the supposedly-hidden data replicated.\n>\n> The required privileges were not relaxed on publisher after the row filter and\n> column list features. It is not just to \"create another publication\". Create\n> publications require CREATE privilege on databases (that is *not* granted to\n> PUBLIC).If you have an untrusted user that could bypass your rules about hidden\n> data, it is better to review your user privileges.\n>\n\nAlso, to create a subscription (which combines multiple publications\nto bypass rules), a user must be a superuser. So, isn't that a\nsufficient guarantee that users shouldn't be able to bypass such\nrules?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 May 2022 09:19:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Euler Taveira <euler@eulerto.com> wrote:\n\n> On Mon, May 9, 2022, at 11:09 AM, Antonin Houska wrote:\n> \n> Now that the user can specify rows and columns to be omitted from the logical\n> replication [1], I suppose hiding rows and columns from the subscriber is an\n> important use case. However, since the subscription connection user (i.e. the\n> user specified in the CREATE SUBSCRIPTION ... CONNECTION ... command) needs\n> SELECT permission on the replicated table (on the publication side), he can\n> just use another publication (which has different filters or no filters at\n> all) to get the supposedly-hidden data replicated.\n> \n> The required privileges were not relaxed on publisher after the row filter and \n> column list features. It is not just to \"create another publication\". Create\n> publications require CREATE privilege on databases (that is *not* granted to\n> PUBLIC).If you have an untrusted user that could bypass your rules about hidden\n> data, it is better to review your user privileges.\n> \n> postgres=# CREATE ROLE foo REPLICATION LOGIN;\n> CREATE ROLE\n> postgres=# \\c - foo\n> You are now connected to database \"postgres\" as user \"foo\".\n> postgres=> CREATE PUBLICATION pub1;\n> ERROR: permission denied for database postgres\n> \n> The documentation [1] says\n> \n> \"The role used for the replication connection must have the REPLICATION\n> attribute (or be a superuser).\"\n> \n> You can use role foo for the replication connection but role foo couldn't be a\n> superuser. In this case, even if role foo open a connection to database\n> postgres, a publication cannot be created due to lack of privileges.\n> \n> Don't we need privileges on publication (e.g GRANT USAGE ON PUBLICATION ...)\n> now?\n> \n> Maybe. We rely on CREATE privilege on databases right now. If you say that\n> GRANT USAGE ON PUBLICATION is just a command that will have the same effect as\n> REPLICATION property [1] has right now, I would say it won't. 
Are you aiming a\n> fine-grained access control on publisher?\n\nThe configuration I'm thinking of is multiple replicas reading data from the\nsame master.\n\nFor example, consider \"foo\" and \"bar\" roles, used by \"subscr_foo\" and\n\"subscr_bar\" subscriptions respectively. (Therefore, both roles need the\nREPLICATION option.) The subscriptions \"subscr_foo\" and \"subscr_bar\" are\nlocated in \"db_foo\" and \"db_bar\" databases respectively.\n\nOn the master side, there are two publications: \"pub_foo\" and \"pub_bar\", to be\nused by \"subscr_foo\" and \"subscr_bar\" subscriptions respectively. The\npublications replicate the same table, but each with a different row filter.\n\nThe problem is that the admin of \"db_foo\" can add the \"pub_bar\" publication to\nthe \"subscr_foo\" subscription, and thus get the data that his \"pub_foo\" would\nfilter out. Likewise, the admin of \"db_bar\" can \"steal\" the data from\n\"pub_foo\" by adding that publication to \"subscr_bar\".\n\nIn this case, the existing publications are misused, so the CREATE PUBLICATION\nprivileges do not help. Since the REPLICATION option of a role is\ncluster-wide, but I need specific roles to be restricted to specific\npublications, it can actually be called fine-grained access control as you\nsay.\n\n\n> [1] https://www.postgresql.org/docs/devel/logical-replication-security.html\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 10 May 2022 10:02:58 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, May 10, 2022 at 12:16 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Mon, May 9, 2022, at 11:09 AM, Antonin Houska wrote:\n> >\n> > Now that the user can specify rows and columns to be omitted from the logical\n> > replication [1], I suppose hiding rows and columns from the subscriber is an\n> > important use case. However, since the subscription connection user (i.e. the\n> > user specified in the CREATE SUBSCRIPTION ... CONNECTION ... command) needs\n> > SELECT permission on the replicated table (on the publication side), he can\n> > just use another publication (which has different filters or no filters at\n> > all) to get the supposedly-hidden data replicated.\n> >\n> > The required privileges were not relaxed on publisher after the row filter and\n> > column list features. It is not just to \"create another publication\". Create\n> > publications require CREATE privilege on databases (that is *not* granted to\n> > PUBLIC).If you have an untrusted user that could bypass your rules about hidden\n> > data, it is better to review your user privileges.\n> >\n> \n> Also, to create a subscription (which combines multiple publications\n> to bypass rules), a user must be a superuser. So, isn't that a\n> sufficient guarantee that users shouldn't be able to bypass such\n> rules?\n\nMy understanding is that the rows/columns filtering is a way for the\n*publisher* to control which data is available to particular replica. From\nthis point of view, the publication privileges would just make the control\ncomplete.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Tue, 10 May 2022 10:37:24 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On 10.05.22 10:37, Antonin Houska wrote:\n> My understanding is that the rows/columns filtering is a way for the\n> *publisher* to control which data is available to particular replica. From\n> this point of view, the publication privileges would just make the control\n> complete.\n\nI think privileges on publications would eventually be useful. But are \nyou arguing that we need them for PG15 to make the new features usable \nsafely?\n\n\n\n",
"msg_date": "Thu, 12 May 2022 07:46:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
    "msg_contents": "On Tue, May 10, 2022, at 5:37 AM, Antonin Houska wrote:\n> My understanding is that the rows/columns filtering is a way for the\n> *publisher* to control which data is available to particular replica. From\n> this point of view, the publication privileges would just make the control\n> complete.\nI agree. IMO it is a new feature. We already require high privilege for logical\nreplication. Hence, we expect the replication user to have access to all data.\nUnfortunately, nobody mentioned about this requirement during the row filter /\ncolumn list development; someone could have written a patch for GRANT ... ON\nPUBLICATION.\n\nI understand your concern. Like I said in my last sentence in the previous\nemail: it is a fine-grained access control on the publisher. Keep in mind that\nit will *only* work for non-superusers (REPLICATION attribute). It is not\nexposing something that we didn't expose before. In this particular case, there\nis no mechanism to prevent the subscriber to obtain data provided by the\nvarious row filters if they know the publication names. We could probably add a\nsentence to \"Logical Replication > Security\" section:\n\nThere is no privileges for publications. If you have multiple publications in a\ndatabase, a subscription can use all publications available.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 12 May 2022 09:48:28 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Euler Taveira <euler@eulerto.com> wrote:\n\n> On Tue, May 10, 2022, at 5:37 AM, Antonin Houska wrote:\n> \n> My understanding is that the rows/columns filtering is a way for the\n> *publisher* to control which data is available to particular replica. From\n> this point of view, the publication privileges would just make the control\n> complete.\n> \n> I agree. IMO it is a new feature. We already require high privilege for logical\n> replication. Hence, we expect the replication user to have access to all data.\n> Unfortunately, nobody mentioned about this requirement during the row filter /\n> column list development; someone could have written a patch for GRANT ... ON\n> PUBLICATION.\n\nI can try that for PG 16, unless someone is already working on it.\n\n> I understand your concern. Like I said in my last sentence in the previous\n> email: it is a fine-grained access control on the publisher. Keep in mind that\n> it will *only* work for non-superusers (REPLICATION attribute). It is not\n> exposing something that we didn't expose before. In this particular case, there\n> is no mechanism to prevent the subscriber to obtain data provided by the\n> various row filters if they know the publication names. We could probably add a\n> sentence to \"Logical Replication > Security\" section:\n> \n> There is no privileges for publications. If you have multiple publications in a\n> database, a subscription can use all publications available.\n\nAttached is my proposal. It tries to be more specific and does not mention the\nabsence of the privileges explicitly.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Fri, 13 May 2022 08:36:37 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
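The `GRANT ... ON PUBLICATION` syntax mentioned in the message above was only a proposal at the time of this thread, not committed PostgreSQL grammar. As a rough illustration, a client-side helper could render such statements as follows (the function name and the simplistic identifier quoting are assumptions made for this sketch):

```python
# Illustrative sketch only: renders the GRANT/REVOKE ... ON PUBLICATION
# statements proposed in this thread. This grammar was a proposal at the
# time of writing, not a committed PostgreSQL feature.

def quote_ident(name: str) -> str:
    """Double-quote an SQL identifier, doubling any embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

def publication_usage_sql(publication: str, role: str, revoke: bool = False) -> str:
    """Build the proposed GRANT or REVOKE statement for USAGE on a publication."""
    verb, prep = ("REVOKE", "FROM") if revoke else ("GRANT", "TO")
    return (f"{verb} USAGE ON PUBLICATION {quote_ident(publication)} "
            f"{prep} {quote_ident(role)};")

print(publication_usage_sql("pub1", "repuser"))
# GRANT USAGE ON PUBLICATION "pub1" TO "repuser";
```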
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 10.05.22 10:37, Antonin Houska wrote:\n> > My understanding is that the rows/columns filtering is a way for the\n> > *publisher* to control which data is available to particular replica. From\n> > this point of view, the publication privileges would just make the control\n> > complete.\n> \n> I think privileges on publications would eventually be useful. But are you\n> arguing that we need them for PG15 to make the new features usable safely?\n\nI didn't think that far, but user should be aware of the problem. My proposal\nof documentation is in https://www.postgresql.org/message-id/5859.1652423797%40antos\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 13 May 2022 08:38:55 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On Fri, May 13, 2022, at 3:36 AM, Antonin Houska wrote:\n> Attached is my proposal. It tries to be more specific and does not mention the\n> absence of the privileges explicitly.\nYou explained the current issue but say nothing about the limitation. This\ninformation will trigger a question possibly in one of the MLs. IMO if you say\nsomething like the sentence above at the end, it will make it clear why that\nsetup expose all data (there is no access control to publications) and\nexplicitly say there is a TODO here.\n\nAdditional privileges might be added to control access to table data in a\nfuture version of <productname>PostgreSQL</productname>.\n\nI also wouldn't use the warning tag because it fits in the same category as the\nother restrictions listed in the page.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 13 May 2022 16:28:45 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Euler Taveira <euler@eulerto.com> wrote:\n> \n> > On Tue, May 10, 2022, at 5:37 AM, Antonin Houska wrote:\n> > \n> > My understanding is that the rows/columns filtering is a way for the\n> > *publisher* to control which data is available to particular replica. From\n> > this point of view, the publication privileges would just make the control\n> > complete.\n> > \n> > I agree. IMO it is a new feature. We already require high privilege for logical\n> > replication. Hence, we expect the replication user to have access to all data.\n> > Unfortunately, nobody mentioned about this requirement during the row filter /\n> > column list development; someone could have written a patch for GRANT ... ON\n> > PUBLICATION.\n> \n> I can try that for PG 16, unless someone is already working on it.\n\nThe patch is attached to this message.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Wed, 18 May 2022 11:16:10 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Euler Taveira <euler@eulerto.com> wrote:\n\n> On Fri, May 13, 2022, at 3:36 AM, Antonin Houska wrote:\n> \n> Attached is my proposal. It tries to be more specific and does not mention the\n> absence of the privileges explicitly.\n> \n> You explained the current issue but say nothing about the limitation. This\n> information will trigger a question possibly in one of the MLs. IMO if you say\n> something like the sentence above at the end, it will make it clear why that\n> setup expose all data (there is no access control to publications) and\n> explicitly say there is a TODO here.\n> \n> Additional privileges might be added to control access to table data in a\n> future version of <productname>PostgreSQL</productname>.\n\nI thought it sound too negative if absence of some feature was mentioned\nexplicitly. However it makes sense to be clear from technical point of view.\n\n> I also wouldn't use the warning tag because it fits in the same category as the\n> other restrictions listed in the page.\n\nok, please see the next version.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Wed, 18 May 2022 11:44:57 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On Wed, May 18, 2022, at 6:16 AM, Antonin Houska wrote:\n> The patch is attached to this message.\nGreat. Add it to the next CF. I'll review it when I have some spare time.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 18 May 2022 15:04:04 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On Wed, May 18, 2022, at 6:44 AM, Antonin Houska wrote:\n> ok, please see the next version.\nThe new paragraph looks good to me. I'm not sure if the CREATE PUBLICATION is\nthe right place to provide such information. As I suggested in a previous email\n[1], you could add it to \"Logical Replication > Security\".\n\n[1] https://postgr.es/m/d96103fe-99e2-4119-bd76-952d326b7539@www.fastmail.com\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 18 May 2022 15:19:29 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Euler Taveira <euler@eulerto.com> wrote:\n\n> On Wed, May 18, 2022, at 6:16 AM, Antonin Houska wrote:\n> \n> The patch is attached to this message.\n> \n> Great. Add it to the next CF. I'll review it when I have some spare time.\n\nhttps://commitfest.postgresql.org/38/3641/\n\nThanks!\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 19 May 2022 20:40:55 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Euler Taveira <euler@eulerto.com> wrote:\n\n> On Wed, May 18, 2022, at 6:44 AM, Antonin Houska wrote:\n> > ok, please see the next version.\n> The new paragraph looks good to me. I'm not sure if the CREATE PUBLICATION is\n> the right place to provide such information. As I suggested in a previous email\n> [1], you could add it to \"Logical Replication > Security\".\n\nok, I missed that. The next version moves the text there.\n\n> [1] https://postgr.es/m/d96103fe-99e2-4119-bd76-952d326b7539@www.fastmail.com\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Mon, 20 Jun 2022 16:01:35 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On 20.06.22 16:01, Antonin Houska wrote:\n>> On Wed, May 18, 2022, at 6:44 AM, Antonin Houska wrote:\n>>> ok, please see the next version.\n>> The new paragraph looks good to me. I'm not sure if the CREATE PUBLICATION is\n>> the right place to provide such information. As I suggested in a previous email\n>> [1], you could add it to \"Logical Replication > Security\".\n> \n> ok, I missed that. The next version moves the text there.\n\nI have committed this patch that adds the additional documentation.\n\nThe CF entry is about privileges on publications. Please rebase that \npatch and repost it so that the CF app and the CF bot are up to date.\n\n\n",
"msg_date": "Tue, 1 Nov 2022 14:23:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> The CF entry is about privileges on publications. Please rebase that patch\n> and repost it so that the CF app and the CF bot are up to date.\n\nThe rebased patch (with regression tests added) is attached here.\n\nThere's still one design issue that I haven't mentioned yet: if the USAGE\nprivilege on a publication is revoked after the synchronization phase\ncompleted, the missing privilege on a publication causes ERROR in the output\nplugin. If the privilege is then granted, the error does not disappear because\nthe same (historical) snapshot we use to decode the failed data change again\nis also used to check the privileges in the catalog, so the output plugin does\nnot see that the privilege has already been granted.\n\nThe only solution seems to be to drop the publication from the subscription\nand add it again, or to drop and re-create the whole subscription. I haven't\nadded a note about this problem to the documentation yet, in case someone has\nbetter idea how to approach the problem.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 03 Nov 2022 06:43:12 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
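The failure mode described in the message above — a later GRANT staying invisible because the privilege check runs under the same historical snapshot used to decode the failed change — can be sketched with a toy model. The data structures below are illustrative assumptions, not PostgreSQL internals:

```python
# Toy model of the historical-snapshot problem: the privilege state a
# decoding session sees is the state as of the change's LSN, so a GRANT
# issued at a later LSN cannot make an already-failing change succeed.

def has_usage_at(grant_history, role, publication, snapshot_lsn):
    """grant_history: LSN-ordered list of (lsn, role, publication, granted)
    events. Returns the privilege state visible to a snapshot at snapshot_lsn."""
    granted = False
    for lsn, r, pub, g in grant_history:
        if lsn > snapshot_lsn:
            break  # later catalog changes are invisible to this snapshot
        if r == role and pub == publication:
            granted = g
    return granted

history = [
    (100, "repuser", "pub1", True),   # initial GRANT
    (200, "repuser", "pub1", False),  # REVOKE: decoding starts to fail
    (300, "repuser", "pub1", True),   # re-GRANT, after the failing change
]
# A change decoded at LSN 250 keeps failing even after the re-GRANT at 300:
assert has_usage_at(history, "repuser", "pub1", 250) is False
assert has_usage_at(history, "repuser", "pub1", 300) is True
```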
{
"msg_contents": "On 03.11.22 01:43, Antonin Houska wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> The CF entry is about privileges on publications. Please rebase that patch\n>> and repost it so that the CF app and the CF bot are up to date.\n> \n> The rebased patch (with regression tests added) is attached here.\n\nSome preliminary discussion:\n\nWhat is the upgrade strategy? I suppose the options are either that \npublications have a default acl that makes them publicly accessible, \nthus preserving the existing behavior by default, or pg_dump would need \nto create additional GRANT statements when upgrading from pre-PG16. I \ndon't see anything like either of these mentioned in the patch. What is \nyour plan?\n\nYou might be interested in this patch, which relates to yours: \nhttps://commitfest.postgresql.org/40/3955/\n\nLooking at your patch, I would also like to find a way to refactor away \nthe ExecGrant_Publication() function. I'll think about that.\n\nI think you should add some tests under src/test/regress/ for the new \nGRANT and REVOKE statements, just to have some basic coverage that it \nworks. sql/publication.sql would be appropriate, I think.\n\n\n\n",
"msg_date": "Thu, 3 Nov 2022 11:49:51 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On Thu, Nov 3, 2022 at 11:12 AM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>\n> > The CF entry is about privileges on publications. Please rebase that patch\n> > and repost it so that the CF app and the CF bot are up to date.\n>\n> The rebased patch (with regression tests added) is attached here.\n>\n> There's still one design issue that I haven't mentioned yet: if the USAGE\n> privilege on a publication is revoked after the synchronization phase\n> completed, the missing privilege on a publication causes ERROR in the output\n> plugin. If the privilege is then granted, the error does not disappear because\n> the same (historical) snapshot we use to decode the failed data change again\n> is also used to check the privileges in the catalog, so the output plugin does\n> not see that the privilege has already been granted.\n>\n\nWe have a similar problem even when publication is dropped/created.\nThe replication won't be able to proceed.\n\n> The only solution seems to be to drop the publication from the subscription\n> and add it again, or to drop and re-create the whole subscription. I haven't\n> added a note about this problem to the documentation yet, in case someone has\n> better idea how to approach the problem.\n>\n\nI think one possibility is that the user advances the slot used in\nreplication by using pg_replication_slot_advance() at or after the\nlocation where the privilege is granted. Some other ideas have been\ndiscussed in the thread [1], in particular, see email [2] and\ndiscussion after that but we didn't reach any conclusion.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPvMbCsL8PAz1Qc6LNoL0Ag0y3YJtPVJ8V0xVXJOPb%2B0xw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1JTwOAniPua04o2EcOXfzRa8ANax%3D3bpx4H-8dH7M2p%3DA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Nov 2022 09:59:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
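The workaround suggested above — advancing the slot with pg_replication_slot_advance() to or past the location where the privilege was granted — relies on LSN ordering: changes before the slot position are never decoded again. A rough model of that comparison, using PostgreSQL's textual "hi/lo" LSN format (the helper names and example LSNs are made up for illustration):

```python
# Sketch of why advancing the slot helps: a change is decoded only if it
# lies at or after the slot position, so moving the slot past the failing
# change (to the LSN of the GRANT) skips it for good.

def parse_lsn(text: str) -> int:
    """Convert PostgreSQL's textual LSN ("XXXXXXXX/XXXXXXXX") to an integer."""
    hi, lo = text.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def should_decode(change_lsn: str, slot_lsn: str) -> bool:
    """A change is replayed only if it is at or after the slot position."""
    return parse_lsn(change_lsn) >= parse_lsn(slot_lsn)

# Hypothetical LSNs: the failing change sits at 0/16B3748, the GRANT at 0/16C0000.
assert should_decode("0/16B3748", "0/16C0000") is False  # skipped after advance
assert should_decode("0/16D0000", "0/16C0000") is True   # later changes still flow
```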
{
"msg_contents": "On Thu, Nov 3, 2022 at 9:19 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 03.11.22 01:43, Antonin Houska wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> >> The CF entry is about privileges on publications. Please rebase that patch\n> >> and repost it so that the CF app and the CF bot are up to date.\n> >\n> > The rebased patch (with regression tests added) is attached here.\n>\n> Some preliminary discussion:\n>\n> What is the upgrade strategy? I suppose the options are either that\n> publications have a default acl that makes them publicly accessible,\n> thus preserving the existing behavior by default, or pg_dump would need\n> to create additional GRANT statements when upgrading from pre-PG16.\n>\n\nI think making them publicly accessible is a better option.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Nov 2022 10:01:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 03.11.22 01:43, Antonin Houska wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > \n> >> The CF entry is about privileges on publications. Please rebase that patch\n> >> and repost it so that the CF app and the CF bot are up to date.\n> > The rebased patch (with regression tests added) is attached here.\n> \n> Some preliminary discussion:\n> \n> What is the upgrade strategy? I suppose the options are either that\n> publications have a default acl that makes them publicly accessible, \n> thus preserving the existing behavior by default, or pg_dump would need to\n> create additional GRANT statements when upgrading from pre-PG16. I don't see\n> anything like either of these mentioned in the patch. What is your plan?\n\nSo far I considered the first option.\n\n> You might be interested in this patch, which relates to yours:\n> https://commitfest.postgresql.org/40/3955/\n\nok, I'll check.\n\n> Looking at your patch, I would also like to find a way to refactor away the\n> ExecGrant_Publication() function. I'll think about that.\n> \n> I think you should add some tests under src/test/regress/ for the new GRANT\n> and REVOKE statements, just to have some basic coverage that it works.\n> sql/publication.sql would be appropriate, I think.\n\nI thought about the whole concept a bit more and I doubt if the PUBLICATION\nprivilege is the best approach. In particular, the user specified in CREATE\nSUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\nprivilege on the tables replicated. So if the DBA excludes some columns from\nthe publication's column list and sets the (publication) privileges in such a\nway that the user cannot get the column values via other publications, the\nuser still can connect to the database directly and get values of the excluded\ncolumns.\n\nAs an alternative to the publication privileges, I think that the CREATE\nSUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\nuser on the individual columns contained in the publication column list, and\nDROP SUBSCRIPTION would revoke that privilege.\n\nOf course a question is what to do if the replication user already has that\nprivilege on some columns: either the CREATE SUBSCRIPTION command should raise\nERROR, or we should introduce a new privilege (e.g. ACL_SELECT_PUB) for this\npurpose, which would effectively be ACL_SELECT, but hidden from the users of\nGRANT / REVOKE.\n\nIf this approach was taken, the USAGE privilege on schema would need to be\ngranted / revoked in a similar way.\n\nWhat do you think about that?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 04 Nov 2022 08:28:50 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
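The alternative floated in the message above — CREATE SUBSCRIPTION granting column-level SELECT on exactly the publication's column list, and DROP SUBSCRIPTION revoking it — could generate statements like the following sketch. Column-level GRANT SELECT is real PostgreSQL syntax, but the helper and the unquoted identifiers are simplifying assumptions:

```python
# Illustrative sketch of the discussed (never-implemented) idea: derive
# column-level SELECT grants from a publication's column list. Identifier
# quoting is omitted for brevity.

def column_select_sql(table, columns, role, revoke=False):
    """Build a column-level GRANT/REVOKE SELECT statement for one table."""
    verb, prep = ("REVOKE", "FROM") if revoke else ("GRANT", "TO")
    return f"{verb} SELECT ({', '.join(columns)}) ON {table} {prep} {role};"

# CREATE SUBSCRIPTION would emit the GRANT, DROP SUBSCRIPTION the REVOKE:
print(column_select_sql("public.t1", ["a", "b"], "subuser"))
# GRANT SELECT (a, b) ON public.t1 TO subuser;
```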
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Nov 3, 2022 at 11:12 AM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > > The CF entry is about privileges on publications. Please rebase that patch\n> > > and repost it so that the CF app and the CF bot are up to date.\n> >\n> > The rebased patch (with regression tests added) is attached here.\n> >\n> > There's still one design issue that I haven't mentioned yet: if the USAGE\n> > privilege on a publication is revoked after the synchronization phase\n> > completed, the missing privilege on a publication causes ERROR in the output\n> > plugin. If the privilege is then granted, the error does not disappear because\n> > the same (historical) snapshot we use to decode the failed data change again\n> > is also used to check the privileges in the catalog, so the output plugin does\n> > not see that the privilege has already been granted.\n> >\n> \n> We have a similar problem even when publication is dropped/created.\n> The replication won't be able to proceed.\n> \n> > The only solution seems to be to drop the publication from the subscription\n> > and add it again, or to drop and re-create the whole subscription. I haven't\n> > added a note about this problem to the documentation yet, in case someone has\n> > better idea how to approach the problem.\n> >\n> \n> I think one possibility is that the user advances the slot used in\n> replication by using pg_replication_slot_advance() at or after the\n> location where the privilege is granted. Some other ideas have been\n> discussed in the thread [1], in particular, see email [2] and\n> discussion after that but we didn't reach any conclusion.\n\nThanks for feedback. Regarding the publications, I'm not sure what the user\nshould expect if he revokes the permissions in the middle of processing. In\nsuch a situation, he should be aware that some privileged data could already\nhave been replicated. My preference in such a situation would be to truncate\nthe table(s) on the subscriber side and to restart the replication, rather\nthan fix the replication and keep the subscriber's access to the leaked data.\n\n> [1] - https://www.postgresql.org/message-id/CAHut%2BPvMbCsL8PAz1Qc6LNoL0Ag0y3YJtPVJ8V0xVXJOPb%2B0xw%40mail.gmail.com\n> [2] - https://www.postgresql.org/message-id/CAA4eK1JTwOAniPua04o2EcOXfzRa8ANax%3D3bpx4H-8dH7M2p%3DA%40mail.gmail.com\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 04 Nov 2022 09:37:29 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "\n\n> On Nov 4, 2022, at 12:28 AM, Antonin Houska <ah@cybertec.at> wrote:\n> \n> I thought about the whole concept a bit more and I doubt if the PUBLICATION\n> privilege is the best approach. In particular, the user specified in CREATE\n> SUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\n> privilege on the tables replicated. So if the DBA excludes some columns from\n> the publication's column list and sets the (publication) privileges in such a\n> way that the user cannot get the column values via other publications, the\n> user still can connect to the database directly and get values of the excluded\n> columns.\n> \n> As an alternative to the publication privileges, I think that the CREATE\n> SUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\n> user on the individual columns contained in the publication column list, and\n> DROP SUBSCRIPTION would revoke that privilege.\n> \n> Of course a question is what to do if the replication user already has that\n> privilege on some columns: either the CREATE SUBSCRIPTION command should raise\n> ERROR, or we should introduce a new privilege (e.g. ACL_SELECT_PUB) for this\n> purpose, which would effectivelly be ACL_SELECT, but hidden from the users of\n> GRANT / REVOKE.\n> \n> If this approach was taken, the USAGE privilege on schema would need to be\n> granted / revoked in a similar way.\n\nWhen you talk about a user needing to have privileges, it sounds like you mean privileges on the publishing database. But then you talk about CREATE SUBSCRIPTION granting privileges, which would necessarily be on the subscriber database. Can you clarify what you have in mind?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 4 Nov 2022 10:17:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> > On Nov 4, 2022, at 12:28 AM, Antonin Houska <ah@cybertec.at> wrote:\n> > \n> > I thought about the whole concept a bit more and I doubt if the PUBLICATION\n> > privilege is the best approach. In particular, the user specified in CREATE\n> > SUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\n> > privilege on the tables replicated. So if the DBA excludes some columns from\n> > the publication's column list and sets the (publication) privileges in such a\n> > way that the user cannot get the column values via other publications, the\n> > user still can connect to the database directly and get values of the excluded\n> > columns.\n> > \n> > As an alternative to the publication privileges, I think that the CREATE\n> > SUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\n> > user on the individual columns contained in the publication column list, and\n> > DROP SUBSCRIPTION would revoke that privilege.\n> > \n> > Of course a question is what to do if the replication user already has that\n> > privilege on some columns: either the CREATE SUBSCRIPTION command should raise\n> > ERROR, or we should introduce a new privilege (e.g. ACL_SELECT_PUB) for this\n> > purpose, which would effectivelly be ACL_SELECT, but hidden from the users of\n> > GRANT / REVOKE.\n> > \n> > If this approach was taken, the USAGE privilege on schema would need to be\n> > granted / revoked in a similar way.\n> \n> When you talk about a user needing to have privileges, it sounds like you\n> mean privileges on the publishing database. But then you talk about CREATE\n> SUBSCRIPTION granting privileges, which would necessarily be on the\n> subscriber database. Can you clarify what you have in mind?\n\nRight, the privileges need to be added on the publishing side, but the user\nthat needs those privileges is specified on the subscription side. I didn't\nthink much in detail how it would work. The \"subscription user\" certainly\ncannot connect to the publisher database and grant the privileges to\nitself. Perhaps some of the workers on the publisher side could do it on\nstartup.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Fri, 04 Nov 2022 18:37:31 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On 04.11.22 08:28, Antonin Houska wrote:\n> I thought about the whole concept a bit more and I doubt if the PUBLICATION\n> privilege is the best approach. In particular, the user specified in CREATE\n> SUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\n> privilege on the tables replicated. So if the DBA excludes some columns from\n> the publication's column list and sets the (publication) privileges in such a\n> way that the user cannot get the column values via other publications, the\n> user still can connect to the database directly and get values of the excluded\n> columns.\n\nWhy are the SELECT privileges needed? Maybe that's something to think \nabout and maybe change.\n\n> As an alternative to the publication privileges, I think that the CREATE\n> SUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\n> user on the individual columns contained in the publication column list, and\n> DROP SUBSCRIPTION would revoke that privilege.\n\nI think that approach is weird and unusual. Privileges and object \ncreation should be separate operations.\n\n\n\n",
"msg_date": "Fri, 11 Nov 2022 15:08:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 04.11.22 08:28, Antonin Houska wrote:\n> > I thought about the whole concept a bit more and I doubt if the PUBLICATION\n> > privilege is the best approach. In particular, the user specified in CREATE\n> > SUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\n> > privilege on the tables replicated. So if the DBA excludes some columns from\n> > the publication's column list and sets the (publication) privileges in such a\n> > way that the user cannot get the column values via other publications, the\n> > user still can connect to the database directly and get values of the excluded\n> > columns.\n> \n> Why are the SELECT privileges needed? Maybe that's something to think about\n> and maybe change.\n\nI haven't noticed an explanation in comments nor did I search in the mailing\nlist archives, but the question makes sense: the REPLICATION attribute of a\nrole is sufficient for streaming replication, so why should the logical\nreplication require additional privileges?\n\nTechnically the SELECT privilege is needed because the sync worker does\nactually execute SELECT query on the published tables. However, I realize now\nthat it's not checked by the output plugin. Thus if SELECT is revoked from the\n\"subscription user\" after the table has been synchronized, the replication\ncontinues to work. So the necessity for the SELECT privilege might be an\nomission rather than a design choice. (Even the documentation says that the\nSELECT privilege is needed only for the initial synchronization [1], however\nit does not tell why.)\n\n> > As an alternative to the publication privileges, I think that the CREATE\n> > SUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\n> > user on the individual columns contained in the publication column list, and\n> > DROP SUBSCRIPTION would revoke that privilege.\n> \n> I think that approach is weird and unusual. Privileges and object creation\n> should be separate operations.\n\nok. Another approach would be to skip the check for the SELECT privilege (as\nwell as the check for the USAGE privilege on the corresponding schema) if\ngiven column is being accessed via a publication which has it on its column\nlist and if the subscription user has the USAGE privilege on that publication.\n\nSo far I wasn't sure if we can do that because, if pg_upgrade grants the USAGE\nprivilege on all publications to the \"public\" role, the DBAs who relied on the\nSELECT privileges might not notice that any role having the REPLICATION\nattribute can access all the published tables after the upgrade. (pg_upgrade\ncan hardly do anything else because it has no information on the \"subscription\nusers\", so it cannot convert the SELECT privilege on tables to the USAGE\nprivileges on publications.)\n\nBut now that I see that the logical replication doesn't check the SELECT\nprivilege properly anyway, I think we can get rid of it.\n\n\n[1] https://www.postgresql.org/docs/current/logical-replication-security.html\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 14 Nov 2022 12:07:48 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
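The replacement check proposed in the message above — skip the SELECT (and schema USAGE) test when the column is reachable through a publication on which the subscription user has USAGE — can be modeled roughly as follows. Treating an empty column list as "all columns" is an assumption matching how publications without a column list publish every column:

```python
# Toy model of the proposed access rule: a published column is replicable
# iff the subscription user holds USAGE on some publication whose column
# list contains the column (empty list = all columns). No table-level
# SELECT privilege is consulted.

def column_replicable(column, publications, usage_grants):
    """publications: dict of name -> column list (empty list = all columns).
    usage_grants: set of publication names the user has USAGE on."""
    for name, collist in publications.items():
        if name not in usage_grants:
            continue  # no USAGE privilege on this publication
        if not collist or column in collist:
            return True
    return False

pubs = {"pub_all": [], "pub_ab": ["a", "b"]}
assert column_replicable("a", pubs, {"pub_ab"}) is True
assert column_replicable("c", pubs, {"pub_ab"}) is False   # not on the list
assert column_replicable("c", pubs, {"pub_all"}) is True   # empty list = all
```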
{
"msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> > On 04.11.22 08:28, Antonin Houska wrote:\n> > > I thought about the whole concept a bit more and I doubt if the PUBLICATION\n> > > privilege is the best approach. In particular, the user specified in CREATE\n> > > SUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\n> > > privilege on the tables replicated. So if the DBA excludes some columns from\n> > > the publication's column list and sets the (publication) privileges in such a\n> > > way that the user cannot get the column values via other publications, the\n> > > user still can connect to the database directly and get values of the excluded\n> > > columns.\n> > \n> > Why are the SELECT privileges needed? Maybe that's something to think about\n> > and maybe change.\n> \n> I haven't noticed an explanation in comments nor did I search in the mailing\n> list archives, but the question makes sense: the REPLICATION attribute of a\n> role is sufficient for streaming replication, so why should the logical\n> replication require additional privileges?\n> \n> Technically the SELECT privilege is needed because the sync worker does\n> actually execute SELECT query on the published tables. However, I realize now\n> that it's not checked by the output plugin. Thus if SELECT is revoked from the\n> \"subscription user\" after the table has been synchronized, the replication\n> continues to work. So the necessity for the SELECT privilege might be an\n> omission rather than a design choice. 
(Even the documentation says that the\n> SELECT privilege is needed only for the initial synchronization [1], however\n> it does not tell why.)\n> \n> > > As an alternative to the publication privileges, I think that the CREATE\n> > > SUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\n> > > user on the individual columns contained in the publication column list, and\n> > > DROP SUBSCRIPTION would revoke that privilege.\n> > \n> > I think that approach is weird and unusual. Privileges and object creation\n> > should be separate operations.\n> \n> ok. Another approach would be to skip the check for the SELECT privilege (as\n> well as the check for the USAGE privilege on the corresponding schema) if\n> given column is being accessed via a publication which has it on its column\n> list and if the subscription user has the USAGE privilege on that publication.\n> \n> So far I wasn't sure if we can do that because, if pg_upgrade grants the USAGE\n> privilege on all publications to the \"public\" role, the DBAs who relied on the\n> SELECT privileges might not notice that any role having the REPLICATION\n> attribute can access all the published tables after the upgrade. (pg_upgrade\n> can hardly do anything else because it has no information on the \"subscription\n> users\", so it cannot convert the SELECT privilege on tables to the USAGE\n> privileges on publications.)\n> \n> But now that I see that the logical replication doesn't check the SELECT\n> privilege properly anyway, I think we can get rid of it.\n\nThe attached version tries to do that - as you can see in 0001, the SELECT\nprivilege is not required for the walsender process.\n\nI also added PUBLICATION_NAMES option to the COPY TO command so that the\npublisher knows which publications are subject to the ACL check. Only data of\nthose publications are returned to the subscriber. 
(In the previous patch\nversion the ACL checks were performed on the subscriber side, but I think that's not\nideal in terms of security.)\n\nI also added the regression tests for publications, enhanced psql (the \\dRp+\ncommand) so that it displays the publication ACL and added a few missing\npieces of documentation.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Tue, 29 Nov 2022 15:35:12 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Antonin Houska <ah@cybertec.at> wrote:\n> \n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > \n> > > On 04.11.22 08:28, Antonin Houska wrote:\n> > > > I thought about the whole concept a bit more and I doubt if the PUBLICATION\n> > > > privilege is the best approach. In particular, the user specified in CREATE\n> > > > SUBSCRIPTION ... CONNECTION ... (say \"subscription user\") needs to have SELECT\n> > > > privilege on the tables replicated. So if the DBA excludes some columns from\n> > > > the publication's column list and sets the (publication) privileges in such a\n> > > > way that the user cannot get the column values via other publications, the\n> > > > user still can connect to the database directly and get values of the excluded\n> > > > columns.\n> > > \n> > > Why are the SELECT privileges needed? Maybe that's something to think about\n> > > and maybe change.\n> > \n> > I haven't noticed an explanation in comments nor did I search in the mailing\n> > list archives, but the question makes sense: the REPLICATION attribute of a\n> > role is sufficient for streaming replication, so why should the logical\n> > replication require additional privileges?\n> > \n> > Technically the SELECT privilege is needed because the sync worker does\n> > actually execute SELECT query on the published tables. However, I realize now\n> > that it's not checked by the output plugin. Thus if SELECT is revoked from the\n> > \"subscription user\" after the table has been synchronized, the replication\n> > continues to work. So the necessity for the SELECT privilege might be an\n> > omission rather than a design choice. 
(Even the documentation says that the\n> > SELECT privilege is needed only for the initial synchronization [1], however\n> > it does not tell why.)\n> > \n> > > > As an alternative to the publication privileges, I think that the CREATE\n> > > > SUBSCRIPTION command could grant ACL_SELECT automatically to the subscription\n> > > > user on the individual columns contained in the publication column list, and\n> > > > DROP SUBSCRIPTION would revoke that privilege.\n> > > \n> > > I think that approach is weird and unusual. Privileges and object creation\n> > > should be separate operations.\n> > \n> > ok. Another approach would be to skip the check for the SELECT privilege (as\n> > well as the check for the USAGE privilege on the corresponding schema) if\n> > given column is being accessed via a publication which has it on its column\n> > list and if the subscription user has the USAGE privilege on that publication.\n> > \n> > So far I wasn't sure if we can do that because, if pg_upgrade grants the USAGE\n> > privilege on all publications to the \"public\" role, the DBAs who relied on the\n> > SELECT privileges might not notice that any role having the REPLICATION\n> > attribute can access all the published tables after the upgrade. (pg_upgrade\n> > can hardly do anything else because it has no information on the \"subscription\n> > users\", so it cannot convert the SELECT privilege on tables to the USAGE\n> > privileges on publications.)\n> > \n> > But now that I see that the logical replication doesn't check the SELECT\n> > privilege properly anyway, I think we can get rid of it.\n> \n> The attached version tries to do that - as you can see in 0001, the SELECT\n> privilege is not required for the walsender process.\n> \n> I also added PUBLICATION_NAMES option to the COPY TO command so that the\n> publisher knows which publications are subject to the ACL check. Only data of\n> those publications are returned to the subscriber. 
(In the previous patch\n> version the ACL checks were performed on the subscriber side, but I think that's not\n> ideal in terms of security.)\n> \n> I also added the regression tests for publications, enhanced psql (the \\dRp+\n> command) so that it displays the publication ACL and added a few missing\n> pieces of documentation.\n> \n\nThis is v4. The patch had to be rebased due to the commit 369f09e420.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Fri, 16 Dec 2022 17:37:07 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On 16.12.22 17:37, Antonin Houska wrote:\n> This is v4. The patch had to be rebased due to the commit 369f09e420.\n\nI think what this patch set needs first of all is a comprehensive \ndescription of what it is trying to do, exactly what commands and \nbehaviors it adds, what are some of the subtleties and corner cases, \nwhat are open issues and questions. Some of that can be pieced together \nfrom this thread, but it should really be put in one place somewhere, \nideally in the commit message and/or the documentation. (The main 0002 \npatch does not have any documentation.) It looks like you have a lot of \nbases covered, but without a full description, it's difficult to tell.\n\nSome points on the details:\n\n* You can combine all five patches into one. I don't think they are \nmeant to be applied separately. The 0001 looks like it was maybe meant \nto be used separately, but it's not clear. Again, the overall \ndescription would help.\n\n* There is a lot of code that is contingent on am_db_walsender. We \nshould avoid that. In most cases, it doesn't seem necessary. Or at \nleast document the reasons.\n\n* The term \"aware\" (of a publication ACL, of a relation) is used a bunch \nof times. That's not a technical term, and the meaning of those phrases \nis not clear. Make sure the documentation/comments are precise.\n\n* I don't think using SPI is warranted here. You can get the required \ninformation directly from the underlying functions.\n\n* The places the privileges are ultimately checked is too unprincipled. \nThe 0001 patch overrides a very low-level function, but the 0002 on the \nother hand checks the privileges by digging through the query structures \nby hand instead of letting the executor do it. 
We need to find ways to \nhandle that that are more consistent with what the code is currently \ndoing instead of adding more layers to it above and below.\n\n* The misc_sanity.out test output means you need to add a TOAST table to \npg_publication.\n\n\n\n",
"msg_date": "Mon, 9 Jan 2023 10:23:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 16.12.22 17:37, Antonin Houska wrote:\n> > This is v4. The patch had to be rebased due to the commit 369f09e420.\n> \n> I think what this patch set needs first of all is a comprehensive description\n> of what it is trying to do, exactly what commands and behaviors it adds, what\n> are some of the subtleties and corner cases, what are open issues and\n> questions. Some of that can be pieced together from this thread, but it\n> should really be put in one place somewhere, ideally in the commit message\n> and/or the documentation. (The main 0002 patch does not have any\n> documentation.) It looks like you have a lot of bases covered, but without a\n> full description, it's difficult to tell.\n> \n> Some points on the details:\n> \n> * You can combine all five patches into one. I don't think they are meant to\n> be applied separately. The 0001 looks like it was maybe meant to be used\n> separately, but it's not clear. Again, the overall description would help.\n> \n> * There is a lot of code that is contingent on am_db_walsender. We should\n> avoid that. In most cases, it doesn't seem necessary. Or at least document\n> the reasons.\n> \n> * The term \"aware\" (of a publication ACL, of a relation) is used a bunch of\n> times. That's not a technical term, and the meaning of those phrases is not\n> clear. Make sure the documentation/comments are precise.\n> \n> * I don't think using SPI is warranted here. You can get the required\n> information directly from the underlying functions.\n> \n> * The places the privileges are ultimately checked is too unprincipled. The\n> 0001 patch overrides a very low-level function, but the 0002 on the other\n> hand checks the privileges by digging through the query structures by hand\n> instead of letting the executor do it. 
We need to find ways to handle that\n> that is more consistent with what the code is currently doing instead of\n> adding more layers to it above and below.\n> \n> * The misc_sanity.out test output means you need to add a TOAST table to\n> pg_publication.\n> \n\nThanks for your review. Attached is a new version that tries to address your\nfindings.\n\nI reworked the patch a bit, especially the handling of the PUBLICATION_NAMES\nof the COPY TO command, so that compatibility with older subscribers is not\nbroken. The compatibility is actually the hardest part.\n\n0001 only move some code into functions. I think it's better to do\nthis kind of thing separate so that the actual changes are easier to read.\n\nThe TODO comment in 0002 is related to [1].\n\n[1] https://www.postgresql.org/message-id/3472.1675251957%40antos\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n\n From f29b1daf4dfaba99facf321500f83dae414f9fa7 Mon Sep 17 00:00:00 2001\nFrom: Antonin Houska <ah@cybertec.at>\nDate: Wed, 1 Feb 2023 12:49:58 +0100\nSubject: [PATCH 2/2] Implement the USAGE privilege on PUBLICATION.\n\nPublication row filters and column lists can be used to prevent subsets of\ndata from being replicated via the logical replication system. These features\ncan address performance issues, but currently should not be used for security\npurposes, such as hiding sensitive data from the subscribers. The problem is\nthat any subscriber can get the data from any publication, so if the sensitive\ndata is deliberately not published via one publication, it can still be\navailable via another one (supposedly created for another subscriber).\n\nThis patch adds an ACL column to the pg_publication catalog, implements the\ncorresponding checks and enhances the GRANT and REVOKE commands to grant and\nrevoke the USAGE privilege on publication to / from roles. 
The USAGE privilege\nis initially granted to the PUBLIC group (so that existing configurations\ndon't get broken) but the user can revoke it and grant it only to individual\nsubscription users (i.e. users mentioned in the subscription connection\nconfiguration). Thus the publisher instance can refuse to send data that a given\nsubscriber is not supposed to receive.\n\nObviously, the publication privileges are checked on the publisher side,\notherwise the implementation wouldn't be secure. The output plugin\n(pgoutput.c) is the easy part because it already does receive the list of\npublications whose data it should send to the subscriber. The initial table\nsynchronization is a little bit tricky because so far the \"tablesync worker\"\n(running on the subscriber side) was responsible for constructing the SQL\nquery for the COPY TO command, which is executed on the publisher side.\n\nThis patch adds a new option PUBLICATION_NAMES to the COPY TO command. The\nsubscriber uses it to pass a list of publications to the publisher. The\npublisher checks if the subscription user has the USAGE privilege on each\npublication, retrieves the corresponding data (i.e. rows matching the row\nfilters of the publications) and sends it to the subscriber.\n\nSince the publisher and subscriber instances can be on different major\nversions of postgres, and since old subscribers cannot send the publication\nnames during the initial table synchronization, a new configuration variable\n\"publication_security\" was added. The default value is \"off\", meaning that the\npublisher does not require the COPY TO command to contain the\nPUBLICATION_NAMES option. If the option is passed anyway, the publisher does not\ncheck the privileges on the listed publications, but it does perform row\nfiltering according to the publication filters. 
Thus upgrade of the publisher\ninstance does not break anything.\n\nOnce all the subscribers have migrated to the postgres version that supports\nthis feature, this variable should be set to \"on\". At that moment the\npublisher starts to require the presence of the PUBLICATION_NAMES option in\nthe COPY TO command, as long as the COPY TO is executed by a role which has\nthe REPLICATION privilege. (Role w/o the REPLICATION privilege aren't\ncurrently allowed to use the PUBLICATION_NAMES option.)\n---\n doc/src/sgml/catalogs.sgml | 9 +\n doc/src/sgml/config.sgml | 28 ++\n doc/src/sgml/ddl.sgml | 14 +\n doc/src/sgml/logical-replication.sgml | 72 +--\n doc/src/sgml/ref/copy.sgml | 36 ++\n doc/src/sgml/ref/grant.sgml | 9 +-\n src/backend/catalog/aclchk.c | 22 +\n src/backend/catalog/namespace.c | 34 +-\n src/backend/catalog/objectaddress.c | 2 +-\n src/backend/catalog/pg_publication.c | 20 +-\n src/backend/commands/copy.c | 169 ++++++-\n src/backend/commands/copyto.c | 224 ++++++++-\n src/backend/commands/publicationcmds.c | 2 +\n src/backend/executor/execMain.c | 4 +-\n src/backend/parser/gram.y | 8 +\n src/backend/replication/logical/tablesync.c | 91 ++--\n src/backend/replication/pgoutput/pgoutput.c | 3 +\n src/backend/replication/walsender.c | 6 +\n src/backend/utils/adt/acl.c | 51 +++\n src/backend/utils/misc/guc_tables.c | 12 +\n src/backend/utils/misc/postgresql.conf.sample | 6 +\n src/bin/pg_dump/dumputils.c | 2 +\n src/bin/pg_dump/pg_dump.c | 47 +-\n src/bin/pg_dump/pg_dump.h | 1 +\n src/bin/psql/describe.c | 11 +\n src/bin/psql/tab-complete.c | 3 +\n src/include/catalog/pg_proc.dat | 3 +\n src/include/catalog/pg_publication.h | 10 +\n src/include/commands/copy.h | 7 +-\n src/include/nodes/parsenodes.h | 4 +-\n src/include/replication/logicalproto.h | 1 +\n src/include/utils/acl.h | 1 +\n src/include/utils/guc_tables.h | 1 +\n .../test_copy_callbacks/test_copy_callbacks.c | 2 +-\n src/test/regress/expected/copy.out | 52 +++\n 
src/test/regress/expected/publication.out | 424 ++++++++++--------\n src/test/regress/sql/copy.sql | 36 ++\n src/test/regress/sql/publication.sql | 28 ++\n src/test/subscription/t/027_nosuperuser.pl | 58 ++-\n 39 files changed, 1205 insertions(+), 308 deletions(-)\n\ndiff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml\nindex c1e4048054..acc8db53c3 100644\n--- a/doc/src/sgml/catalogs.sgml\n+++ b/doc/src/sgml/catalogs.sgml\n@@ -6343,6 +6343,15 @@ SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n publication instead of its own.\n </para></entry>\n </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>pubacl</structfield> <type>aclitem[]</type>\n+ </para>\n+ <para>\n+ Access privileges; see <xref linkend=\"ddl-priv\"/> for details\n+ </para></entry>\n+ </row>\n </tbody>\n </tgroup>\n </table>\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex 1cf53c74ea..a1797463aa 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -4958,6 +4958,34 @@ ANY <replaceable class=\"parameter\">num_sync</replaceable> ( <replaceable class=\"\n </variablelist>\n </sect2>\n \n+ <sect2 id=\"runtime-config-replication-publisher\">\n+ <title>Publishers</title>\n+\n+ <para>\n+ These settings control the behavior of a logical replication publisher.\n+ Their values on the subscriber are irrelevant.\n+ </para>\n+\n+ <variablelist>\n+\n+ <varlistentry id=\"guc-publication-security\" xreflabel=\"publication_security\">\n+ <term><varname>publication_security</varname> (<type>boolean</type>)\n+ <indexterm>\n+ <primary><varname>publication_security</varname> configuration parameter</primary>\n+ <secondary>in a publisher</secondary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Specifies whether the publisher should check the publication\n+ privileges before it sends data to the subscriber. 
See\n+ <xref linkend=\"logical-replication-security\"/> for more details.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ </variablelist>\n+ </sect2>\n+\n <sect2 id=\"runtime-config-replication-subscriber\">\n <title>Subscribers</title>\n \ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 8dc8d7a0ce..2ebc6d8e32 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -1962,6 +1962,13 @@ REVOKE ALL ON accounts FROM PUBLIC;\n statements that have previously performed this lookup, so this is not\n a completely secure way to prevent object access.\n </para>\n+ <para>\n+ For publications, allows logical replication via particular\n+ publication. The user specified in\n+ the <link linkend=\"sql-createsubscription\"><command>CREATE\n+ SUBSCRIPTION</command></link> command must have this privilege on all\n+ publications listed in that command.\n+ </para>\n <para>\n For sequences, allows use of the\n <function>currval</function> and <function>nextval</function> functions.\n@@ -2155,6 +2162,7 @@ REVOKE ALL ON accounts FROM PUBLIC;\n <literal>FOREIGN DATA WRAPPER</literal>,\n <literal>FOREIGN SERVER</literal>,\n <literal>LANGUAGE</literal>,\n+ <literal>PUBLICATION</literal>,\n <literal>SCHEMA</literal>,\n <literal>SEQUENCE</literal>,\n <literal>TYPE</literal>\n@@ -2251,6 +2259,12 @@ REVOKE ALL ON accounts FROM PUBLIC;\n <entry>none</entry>\n <entry><literal>\\dconfig+</literal></entry>\n </row>\n+ <row>\n+ <entry><literal>PUBLICATION</literal></entry>\n+ <entry><literal>U</literal></entry>\n+ <entry>U</entry>\n+ <entry><literal>\\dRp+</literal></entry>\n+ </row>\n <row>\n <entry><literal>SCHEMA</literal></entry>\n <entry><literal>UC</literal></entry>\ndiff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\nindex 1bd5660c87..64774e68cd 100644\n--- a/doc/src/sgml/logical-replication.sgml\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -898,26 +898,26 @@ CREATE PUBLICATION\n <command>psql</command> can be used to 
show the row filter expressions (if\n defined) for each publication.\n <programlisting>\n-test_pub=# \\dRp+\n+ test_pub=# \\dRp+\n Publication p1\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root\n-----------+------------+---------+---------+---------+-----------+----------\n- postgres | f | t | t | t | t | f\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges\n+----------+------------+---------+---------+---------+-----------+------------------------------\n+ postgres | f | t | t | t | t | f |\n Tables:\n \"public.t1\" WHERE ((a > 5) AND (c = 'NSW'::text))\n \n Publication p2\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root\n-----------+------------+---------+---------+---------+-----------+----------\n- postgres | f | t | t | t | t | f\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges\n+----------+------------+---------+---------+---------+-----------+------------------------------\n+ postgres | f | t | t | t | t | f |\n Tables:\n \"public.t1\"\n \"public.t2\" WHERE (e = 99)\n \n Publication p3\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root\n-----------+------------+---------+---------+---------+-----------+----------\n- postgres | f | t | t | t | t | f\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges\n+----------+------------+---------+---------+---------+-----------+------------------------------\n+ postgres | f | t | t | t | t | f |\n Tables:\n \"public.t2\" WHERE (d = 10)\n \"public.t3\" WHERE (g = 10)\n@@ -1259,10 +1259,11 @@ test_sub=# SELECT * FROM child ORDER BY a;\n \n <para>\n The choice of columns can be based on behavioral or performance reasons.\n- However, do not rely on this feature for security: a malicious subscriber\n- is able to obtain data from columns that are not specifically\n- published. 
If security is a consideration, protections can be applied\n- at the publisher side.\n+ However, if you want to use this feature for security, please consider\n+ using the privileges on publication, as explained in\n+ <xref linkend=\"logical-replication-security\"/>. Otherwise a malicious\n+ subscriber may be able to use other publications to obtain data from\n+ columns that are not specifically published via your publication.\n </para>\n \n <para>\n@@ -1360,9 +1361,9 @@ CREATE PUBLICATION\n <programlisting>\n test_pub=# \\dRp+\n Publication p1\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root\n-----------+------------+---------+---------+---------+-----------+----------\n- postgres | f | t | t | t | t | f\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges\n+----------+------------+---------+---------+---------+-----------+------------------------------\n+ postgres | f | t | t | t | t | f |\n Tables:\n \"public.t1\" (id, a, b, d)\n </programlisting></para>\n@@ -1724,12 +1725,6 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER\n and it must have the <literal>LOGIN</literal> attribute.\n </para>\n \n- <para>\n- In order to be able to copy the initial table data, the role used for the\n- replication connection must have the <literal>SELECT</literal> privilege on\n- a published table (or be a superuser).\n- </para>\n-\n <para>\n To create a publication, the user must have the <literal>CREATE</literal>\n privilege in the database.\n@@ -1743,16 +1738,25 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER\n </para>\n \n <para>\n- There are currently no privileges on publications. Any subscription (that\n- is able to connect) can access any publication. 
Thus, if you intend to\n- hide some information from particular subscribers, such as by using row\n- filters or column lists, or by not adding the whole table to the\n- publication, be aware that other publications in the same database could\n- expose the same information. Publication privileges might be added to\n- <productname>PostgreSQL</productname> in the future to allow for\n- finer-grained access control.\n+ To replicate data, the role used for the replication connection must have\n+ the <literal>USAGE</literal> privilege on the publication. In such a case,\n+ the subscription role needs neither the <literal>SELECT</literal>\n+ privileges on the replicated tables nor the <literal>USAGE</literal>\n+ privilege on the containing schemas.\n </para>\n \n+ <note>\n+ <para>\n+ The <literal>USAGE</literal> privilege on publication is only checked if\n+ the <link linkend=\"guc-publication-security\"><varname>publication_security</varname></link>\n+ configuration parameter is set. The default is <literal>off</literal>. It\n+ should only be set to <literal>on</literal> if all the subscribers are\n+ on <productname>PostgreSQL</productname> server version 16 or later. 
The\n+ older versions do not send the publication names for the initial table\n+ synchronization, so they would fail to receive the data.\n+ </para>\n+ </note>\n+\n <para>\n To create a subscription, the user must be a superuser.\n </para>\n@@ -1812,6 +1816,12 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER\n <link linkend=\"guc-wal-sender-timeout\"><varname>wal_sender_timeout</varname></link>.\n </para>\n \n+ <para>\n+ <link linkend=\"guc-publication-security\"><varname>publication_security</varname></link>\n+ must be set to <literal>on</literal> if the publisher is supposed to check\n+ the publication privileges.\n+ </para>\n+\n </sect2>\n \n <sect2 id=\"logical-replication-config-subscriber\">\ndiff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml\nindex c25b52d0cb..87898b57f7 100644\n--- a/doc/src/sgml/ref/copy.sgml\n+++ b/doc/src/sgml/ref/copy.sgml\n@@ -43,6 +43,7 @@ COPY { <replaceable class=\"parameter\">table_name</replaceable> [ ( <replaceable\n FORCE_NOT_NULL ( <replaceable class=\"parameter\">column_name</replaceable> [, ...] )\n FORCE_NULL ( <replaceable class=\"parameter\">column_name</replaceable> [, ...] )\n ENCODING '<replaceable class=\"parameter\">encoding_name</replaceable>'\n+ PUBLICATION_NAMES ( <replaceable class=\"parameter\">publication_name</replaceable> [, ...] )\n </synopsis>\n </refsynopsisdiv>\n \n@@ -368,6 +369,41 @@ COPY { <replaceable class=\"parameter\">table_name</replaceable> [ ( <replaceable\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term><replaceable class=\"parameter\">publication_name</replaceable></term>\n+ <listitem>\n+ <para>\n+ The name of an\n+ existing <link linkend=\"logical-replication-publication\">publication</link>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n+ <term><literal>PUBLICATION_NAMES</literal></term>\n+ <listitem>\n+ <para>\n+ Specifies a list of publications. 
Only rows that match the\n+ <link linkend=\"logical-replication-row-filter\">row filter</link> of at\n+ least one the publications are copied. If at least one publication in\n+ the list has no row filter, the whole table contents will be copied.\n+ </para>\n+ <para>\n+ If\n+ the <link linkend=\"guc-publication-security\">publication_security</link>\n+ configuration parameter is <literal>on</literal>, the list is required.\n+ and the user needs to have the <literal>USAGE</literal> privilege on all\n+ the publications in the list which are actually used to retrieve the\n+ data from given table.\n+ </para>\n+ <para>\n+ This option is allowed only in <command>COPY TO</command>. Currently,\n+ only the users with the <literal>REPLICATION</literal> privilege can use\n+ this option.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n <varlistentry>\n <term><literal>WHERE</literal></term>\n <listitem>\ndiff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml\nindex 35bf0332c8..329a4f9023 100644\n--- a/doc/src/sgml/ref/grant.sgml\n+++ b/doc/src/sgml/ref/grant.sgml\n@@ -82,6 +82,11 @@ GRANT { { SET | ALTER SYSTEM } [, ... ] | ALL [ PRIVILEGES ] }\n TO <replaceable class=\"parameter\">role_specification</replaceable> [, ...] [ WITH GRANT OPTION ]\n [ GRANTED BY <replaceable class=\"parameter\">role_specification</replaceable> ]\n \n+GRANT { USAGE [, ... ] | ALL [ PRIVILEGES ] }\n+ ON PUBLICATION <replaceable class=\"parameter\">publication_name</replaceable> [, ...]\n+ TO <replaceable class=\"parameter\">role_specification</replaceable> [, ...] [ WITH GRANT OPTION ]\n+ [ GRANTED BY <replaceable class=\"parameter\">role_specification</replaceable> ]\n+\n GRANT { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }\n ON SCHEMA <replaceable>schema_name</replaceable> [, ...]\n TO <replaceable class=\"parameter\">role_specification</replaceable> [, ...] 
[ WITH GRANT OPTION ]\n@@ -513,8 +518,8 @@ GRANT admins TO joe;\n </para>\n \n <para>\n- Privileges on databases, tablespaces, schemas, languages, and\n- configuration parameters are\n+ Privileges on databases, tablespaces, schemas, languages, configuration\n+ parameters and publications are\n <productname>PostgreSQL</productname> extensions.\n </para>\n </refsect1>\ndiff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\nindex c4232344aa..b7dc203859 100644\n--- a/src/backend/catalog/aclchk.c\n+++ b/src/backend/catalog/aclchk.c\n@@ -253,6 +253,9 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,\n \t\tcase OBJECT_FUNCTION:\n \t\t\twhole_mask = ACL_ALL_RIGHTS_FUNCTION;\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\twhole_mask = ACL_ALL_RIGHTS_PUBLICATION;\n+\t\t\tbreak;\n \t\tcase OBJECT_LANGUAGE:\n \t\t\twhole_mask = ACL_ALL_RIGHTS_LANGUAGE;\n \t\t\tbreak;\n@@ -485,6 +488,10 @@ ExecuteGrantStmt(GrantStmt *stmt)\n \t\t\tall_privileges = ACL_ALL_RIGHTS_FUNCTION;\n \t\t\terrormsg = gettext_noop(\"invalid privilege type %s for function\");\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tall_privileges = ACL_ALL_RIGHTS_PUBLICATION;\n+\t\t\terrormsg = gettext_noop(\"invalid privilege type %s for publication\");\n+\t\t\tbreak;\n \t\tcase OBJECT_LANGUAGE:\n \t\t\tall_privileges = ACL_ALL_RIGHTS_LANGUAGE;\n \t\t\terrormsg = gettext_noop(\"invalid privilege type %s for language\");\n@@ -621,6 +628,9 @@ ExecGrantStmt_oids(InternalGrant *istmt)\n \t\tcase OBJECT_LARGEOBJECT:\n \t\t\tExecGrant_Largeobject(istmt);\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tExecGrant_common(istmt, PublicationRelationId, ACL_ALL_RIGHTS_PUBLICATION, NULL);\n+\t\t\tbreak;\n \t\tcase OBJECT_SCHEMA:\n \t\t\tExecGrant_common(istmt, NamespaceRelationId, ACL_ALL_RIGHTS_SCHEMA, NULL);\n \t\t\tbreak;\n@@ -731,6 +741,16 @@ objectNamesToOids(ObjectType objtype, List *objnames, bool is_grant)\n \t\t\t\tobjects = lappend_oid(objects, 
lobjOid);\n \t\t\t}\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tforeach(cell, objnames)\n+\t\t\t{\n+\t\t\t\tchar\t *nspname = strVal(lfirst(cell));\n+\t\t\t\tOid\t\t\toid;\n+\n+\t\t\t\toid = get_publication_oid(nspname, false);\n+\t\t\t\tobjects = lappend_oid(objects, oid);\n+\t\t\t}\n+\t\t\tbreak;\n \t\tcase OBJECT_SCHEMA:\n \t\t\tforeach(cell, objnames)\n \t\t\t{\n@@ -3023,6 +3043,8 @@ pg_aclmask(ObjectType objtype, Oid object_oid, AttrNumber attnum, Oid roleid,\n \t\t\treturn object_aclmask(DatabaseRelationId, object_oid, roleid, mask, how);\n \t\tcase OBJECT_FUNCTION:\n \t\t\treturn object_aclmask(ProcedureRelationId, object_oid, roleid, mask, how);\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\treturn object_aclmask(PublicationRelationId, object_oid, roleid, mask, how);\n \t\tcase OBJECT_LANGUAGE:\n \t\t\treturn object_aclmask(LanguageRelationId, object_oid, roleid, mask, how);\n \t\tcase OBJECT_LARGEOBJECT:\ndiff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c\nindex 14e57adee2..d76b052059 100644\n--- a/src/backend/catalog/namespace.c\n+++ b/src/backend/catalog/namespace.c\n@@ -2936,7 +2936,6 @@ Oid\n LookupExplicitNamespace(const char *nspname, bool missing_ok)\n {\n \tOid\t\t\tnamespaceId;\n-\tAclResult\taclresult;\n \n \t/* check for pg_temp alias */\n \tif (strcmp(nspname, \"pg_temp\") == 0)\n@@ -2955,10 +2954,20 @@ LookupExplicitNamespace(const char *nspname, bool missing_ok)\n \tif (missing_ok && !OidIsValid(namespaceId))\n \t\treturn InvalidOid;\n \n-\taclresult = object_aclcheck(NamespaceRelationId, namespaceId, GetUserId(), ACL_USAGE);\n-\tif (aclresult != ACLCHECK_OK)\n-\t\taclcheck_error(aclresult, OBJECT_SCHEMA,\n-\t\t\t\t\t nspname);\n+\t/*\n+\t * If the publication security is active, bypass the standard security\n+\t * checks.\n+\t */\n+\tif (!publication_security)\n+\t{\n+\t\tAclResult\taclresult;\n+\n+\t\taclresult = object_aclcheck(NamespaceRelationId, namespaceId, 
GetUserId(),\n+\t\t\t\t\t\t\t\t\tACL_USAGE);\n+\t\tif (aclresult != ACLCHECK_OK)\n+\t\t\taclcheck_error(aclresult, OBJECT_SCHEMA,\n+\t\t\t\t\t\t nspname);\n+\t}\n \t/* Schema search hook for this lookup */\n \tInvokeNamespaceSearchHook(namespaceId, true);\n \n@@ -3835,10 +3844,16 @@ recomputeNamespacePath(void)\n \t\t\t\trname = NameStr(((Form_pg_authid) GETSTRUCT(tuple))->rolname);\n \t\t\t\tnamespaceId = get_namespace_oid(rname, true);\n \t\t\t\tReleaseSysCache(tuple);\n+\n+\t\t\t\t/*\n+\t\t\t\t * If the publication security is active, bypass the standard\n+\t\t\t\t * security checks.\n+\t\t\t\t */\n \t\t\t\tif (OidIsValid(namespaceId) &&\n \t\t\t\t\t!list_member_oid(oidlist, namespaceId) &&\n-\t\t\t\t\tobject_aclcheck(NamespaceRelationId, namespaceId, roleid,\n-\t\t\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK &&\n+\t\t\t\t\t(publication_security ||\n+\t\t\t\t\t object_aclcheck(NamespaceRelationId, namespaceId, roleid,\n+\t\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK) &&\n \t\t\t\t\tInvokeNamespaceSearchHook(namespaceId, false))\n \t\t\t\t\toidlist = lappend_oid(oidlist, namespaceId);\n \t\t\t}\n@@ -3865,8 +3880,9 @@ recomputeNamespacePath(void)\n \t\t\tnamespaceId = get_namespace_oid(curname, true);\n \t\t\tif (OidIsValid(namespaceId) &&\n \t\t\t\t!list_member_oid(oidlist, namespaceId) &&\n-\t\t\t\tobject_aclcheck(NamespaceRelationId, namespaceId, roleid,\n-\t\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK &&\n+\t\t\t\t(publication_security ||\n+\t\t\t\t object_aclcheck(NamespaceRelationId, namespaceId, roleid,\n+\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK) &&\n \t\t\t\tInvokeNamespaceSearchHook(namespaceId, false))\n \t\t\t\toidlist = lappend_oid(oidlist, namespaceId);\n \t\t}\ndiff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c\nindex 25c50d66fd..ce482a88a9 100644\n--- a/src/backend/catalog/objectaddress.c\n+++ b/src/backend/catalog/objectaddress.c\n@@ -587,7 +587,7 @@ static const ObjectPropertyType ObjectProperty[] =\n 
\t\tAnum_pg_publication_pubname,\n \t\tInvalidAttrNumber,\n \t\tAnum_pg_publication_pubowner,\n-\t\tInvalidAttrNumber,\n+\t\tAnum_pg_publication_pubacl,\n \t\tOBJECT_PUBLICATION,\n \t\ttrue\n \t},\ndiff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c\nindex 7f6024b7a5..93793b1fa4 100644\n--- a/src/backend/catalog/pg_publication.c\n+++ b/src/backend/catalog/pg_publication.c\n@@ -1069,9 +1069,11 @@ GetPublicationRelationMapping(Oid pubid, Oid relid,\n \t\t*qual_isnull = true;\n \t}\n }\n+\n /*\n * Pick those publications from a list which should actually be used to\n- * publish given relation and return them.\n+ * publish given relation, check their USAGE privilege if needed, and return\n+ * them.\n *\n * If publish_as_relid_p is passed, the relation whose tuple descriptor should\n * be used to publish the data is stored in *publish_as_relid_p.\n@@ -1165,6 +1167,22 @@ GetEffectiveRelationPublications(Oid relid, List *publications,\n \t\t\t\tpublish = true;\n \t\t}\n \n+\t\t/*\n+\t\t * Check privileges before we use any information from the\n+\t\t * publication.\n+\t\t */\n+\t\tif (publication_security && publish)\n+\t\t{\n+\t\t\tOid\t\t\troleid = GetUserId();\n+\t\t\tAclResult\taclresult;\n+\n+\t\t\taclresult = object_aclcheck(PublicationRelationId, pub->oid,\n+\t\t\t\t\t\t\t\t\t\troleid, ACL_USAGE);\n+\t\t\tif (aclresult != ACLCHECK_OK)\n+\t\t\t\taclcheck_error(aclresult, OBJECT_PUBLICATION,\n+\t\t\t\t\t\t\t get_publication_name(pub->oid, false));\n+\t\t}\n+\n \t\t/*\n \t\t * If the relation is to be published, determine actions to publish,\n \t\t * and list of columns, if appropriate.\ndiff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\nindex 153eae0379..f47ecd760b 100644\n--- a/src/backend/commands/copy.c\n+++ b/src/backend/commands/copy.c\n@@ -41,6 +41,8 @@\n #include \"utils/rel.h\"\n #include \"utils/rls.h\"\n \n+static bool isReplicationUser(void);\n+\n /*\n *\t DoCopy executes the SQL COPY statement\n *\n@@ 
-71,6 +73,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \tOid\t\t\trelid;\n \tRawStmt *query = NULL;\n \tNode\t *whereClause = NULL;\n+\tList\t\t*publication_names = NIL;\n \n \t/*\n \t * Disallow COPY to/from file or program except to users with the\n@@ -105,14 +108,23 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\t}\n \t}\n \n+\t/*\n+\t * It seems more useful to tell the user immediately that something is\n+\t * wrong about the use of the PUBLICATION_NAMES option than to complain\n+\t * about missing SELECT privilege below: whoever is authorized to use this\n+\t * option shouldn't need the SELECT privilege at all. Therefore check the\n+\t * PUBLICATION_NAMES option earlier than the other options. XXX Shouldn't\n+\t * we check all the options here anyway?\n+\t */\n+\tpublication_names = ProcessCopyToPublicationOptions(pstate,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tstmt->options,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tstmt->is_from);\n+\n \tif (stmt->relation)\n \t{\n \t\tLOCKMODE\tlockmode = is_from ? RowExclusiveLock : AccessShareLock;\n \t\tParseNamespaceItem *nsitem;\n \t\tRTEPermissionInfo *perminfo;\n-\t\tTupleDesc\ttupDesc;\n-\t\tList\t *attnums;\n-\t\tListCell *cur;\n \n \t\tAssert(!stmt->query);\n \n@@ -127,6 +139,14 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\tperminfo = nsitem->p_perminfo;\n \t\tperminfo->requiredPerms = (is_from ? ACL_INSERT : ACL_SELECT);\n \n+\t\t/*\n+\t\t * The access by a replication user is controlled by the publication\n+\t\t * privileges, ACL_SELECT is not required. 
The actual checks of the\n+\t\t * publication privileges will take place later.\n+\t\t */\n+\t\tif (!is_from && publication_security)\n+\t\t\tperminfo->requiredPerms &= ~ACL_SELECT;\n+\n \t\tif (stmt->whereClause)\n \t\t{\n \t\t\t/* add nsitem to query namespace */\n@@ -147,19 +167,31 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\t\twhereClause = (Node *) make_ands_implicit((Expr *) whereClause);\n \t\t}\n \n-\t\ttupDesc = RelationGetDescr(rel);\n-\t\tattnums = CopyGetAttnums(tupDesc, rel, stmt->attlist);\n-\t\tforeach(cur, attnums)\n+\t\t/*\n+\t\t * If publication row filters need to be applied, the query form of\n+\t\t * COPY TO is used, so the permissions will be checked by the\n+\t\t * executor. Otherwise check the permissions now.\n+\t\t */\n+\t\tif (publication_names == NIL)\n \t\t{\n-\t\t\tint\t\t\tattno;\n-\t\t\tBitmapset **bms;\n+\t\t\tTupleDesc\ttupDesc;\n+\t\t\tList\t *attnums;\n+\t\t\tListCell *cur;\n \n-\t\t\tattno = lfirst_int(cur) - FirstLowInvalidHeapAttributeNumber;\n-\t\t\tbms = is_from ? &perminfo->insertedCols : &perminfo->selectedCols;\n+\t\t\ttupDesc = RelationGetDescr(rel);\n+\t\t\tattnums = CopyGetAttnums(tupDesc, rel, stmt->attlist);\n+\t\t\tforeach(cur, attnums)\n+\t\t\t{\n+\t\t\t\tint\t\t\tattno;\n+\t\t\t\tBitmapset **bms;\n+\n+\t\t\t\tattno = lfirst_int(cur) - FirstLowInvalidHeapAttributeNumber;\n+\t\t\t\tbms = is_from ? 
&perminfo->insertedCols : &perminfo->selectedCols;\n \n-\t\t\t*bms = bms_add_member(*bms, attno);\n+\t\t\t\t*bms = bms_add_member(*bms, attno);\n+\t\t\t}\n+\t\t\tExecCheckPermissions(pstate->p_rtable, list_make1(perminfo), true);\n \t\t}\n-\t\tExecCheckPermissions(pstate->p_rtable, list_make1(perminfo), true);\n \n \t\t/*\n \t\t * Permission check for row security policies.\n@@ -183,7 +215,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\t\t\t\t\t errmsg(\"COPY FROM not supported with row-level security\"),\n \t\t\t\t\t\t errhint(\"Use INSERT statements instead.\")));\n \n-\t\t\tquery = CreateCopyToQuery(stmt, rel, stmt_location, stmt_len);\n+\t\t\tquery = CreateCopyToQuery(stmt, rel, stmt_location, stmt_len, true);\n \n \t\t\t/*\n \t\t\t * Close the relation for now, but keep the lock on it to prevent\n@@ -233,10 +265,24 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \telse\n \t{\n \t\tCopyToState cstate;\n+\t\tRelation\trel_loc = rel;\n+\n+\t\t/*\n+\t\t * If publication row filters need to be applied, use the \"COPY query\n+\t\t * TO ...\" form of the command.\n+\t\t */\n+\t\tif (rel && publication_names)\n+\t\t{\n+\t\t\tquery = CreateCopyToQuery(stmt, rel, stmt_location, stmt_len, false);\n+\n+\t\t\t/* BeginCopyTo() should only receive the query. 
*/\n+\t\t\trel_loc = NULL;\n+\t\t}\n \n-\t\tcstate = BeginCopyTo(pstate, rel, query, relid,\n+\t\tcstate = BeginCopyTo(pstate, rel_loc, query, relid,\n \t\t\t\t\t\t\t stmt->filename, stmt->is_program,\n-\t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options);\n+\t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options,\n+\t\t\t\t\t\t\t publication_names);\n \t\t*processed = DoCopyTo(cstate);\t/* copy from database to file */\n \t\tEndCopyTo(cstate);\n \t}\n@@ -477,6 +523,13 @@ ProcessCopyOptions(ParseState *pstate,\n \t\t\t\t\t\t\t\tdefel->defname),\n \t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n \t\t}\n+\t\telse if (strcmp(defel->defname, \"publication_names\") == 0)\n+\t\t{\n+\t\t\t/*\n+\t\t\t * ProcessCopyToPublicationOptions() should have checked this\n+\t\t\t * already.\n+\t\t\t */\n+\t\t}\n \t\telse\n \t\t\tereport(ERROR,\n \t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n@@ -629,6 +682,78 @@ ProcessCopyOptions(ParseState *pstate,\n \t\t\t\t errmsg(\"CSV quote character must not appear in the NULL specification\")));\n }\n \n+/*\n+ * Check the PUBLICATION_NAMES option of the \"COPY TO\" command.\n+ *\n+ * This option is checked separately from the others.\n+ */\n+List *\n+ProcessCopyToPublicationOptions(ParseState *pstate, List *options,\n+\t\t\t\t\t\t\t\tbool is_from)\n+{\n+\tListCell *option;\n+\tbool\tfound = false;\n+\tList\t*result = NIL;\n+\n+\t/* Extract options from the statement node tree */\n+\tforeach(option, options)\n+\t{\n+\t\tDefElem *defel = lfirst_node(DefElem, option);\n+\n+\t\tif (strcmp(defel->defname, \"publication_names\") == 0)\n+\t\t{\n+\t\t\tif (is_from)\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\terrmsg(\"PUBLICATION_NAMES option only available using COPY TO\"));\n+\n+\t\t\tif (found)\n+\t\t\t\terrorConflictingDefElem(defel, pstate);\n+\t\t\tfound = true;\n+\t\t\tif (defel->arg == NULL || IsA(defel->arg, List))\n+\t\t\t\tresult = castNode(List, 
defel->arg);\n+\t\t\telse\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t\t\t errmsg(\"argument to option \\\"%s\\\" must be a list of publication names\",\n+\t\t\t\t\t\t\t\tdefel->defname),\n+\t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n+\t\t}\n+\t}\n+\n+\t/*\n+\t * If the publication security is enabled, the subscriber must send the\n+\t * list of publications in order to tell which subset of the data it is\n+\t * authorized to receive.\n+\t *\n+\t * publication_security does not affect sessions of non-replication users.\n+\t */\n+\tif (!found && publication_security && isReplicationUser())\n+\t{\n+\t\t/*\n+\t\t * This probably means that an old version of the subscriber tries to\n+\t\t * get data from a secured publisher.\n+\t\t */\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"publication security requires the PUBLICATION_NAMES option\")));\n+\t}\n+\n+\t/*\n+\t * The option only makes sense in the context of (logical)\n+\t * replication. 
We could allow it for non-replication users too, but then\n+\t * we'd have to require it when publication_security is on, as above, and\n+\t * thus break existing client code.\n+\t */\n+\tif (found && !isReplicationUser())\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"PUBLICATION_NAMES may only be used by roles with the REPLICATION privilege\")));\n+\n+\tif (found && result == NIL)\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"the value of the PUBLICATION_NAMES option must not be empty\")));\n+\n+\treturn result;\n+}\n+\n /*\n * CopyGetAttnums - build an integer list of attnums to be copied\n *\n@@ -719,3 +844,16 @@ CopyGetAttnums(TupleDesc tupDesc, Relation rel, List *attnamelist)\n \n \treturn attnums;\n }\n+\n+/*\n+ * Check whether the current session can use the USAGE privilege on\n+ * publications instead of the SELECT privileges on tables.\n+ *\n+ * Superuser makes the test pass too so that subscriptions which connect to\n+ * the publisher as superuser work fine.\n+ */\n+static bool\n+isReplicationUser(void)\n+{\n+\treturn has_rolreplication(GetUserId()) || superuser();\n+}\ndiff --git a/src/backend/commands/copyto.c b/src/backend/commands/copyto.c\nindex ad79a56f75..79964cfd85 100644\n--- a/src/backend/commands/copyto.c\n+++ b/src/backend/commands/copyto.c\n@@ -34,13 +34,18 @@\n #include \"miscadmin.h\"\n #include \"nodes/makefuncs.h\"\n #include \"optimizer/optimizer.h\"\n+#include \"parser/parsetree.h\"\n+#include \"parser/parse_relation.h\"\n #include \"pgstat.h\"\n #include \"rewrite/rewriteHandler.h\"\n+#include \"rewrite/rewriteManip.h\"\n #include \"storage/fd.h\"\n #include \"tcop/tcopprot.h\"\n+#include \"utils/builtins.h\"\n #include \"utils/lsyscache.h\"\n #include \"utils/memutils.h\"\n #include \"utils/partcache.h\"\n+#include \"utils/acl.h\"\n #include \"utils/rel.h\"\n #include \"utils/snapmgr.h\"\n \n@@ -132,6 +137,10 @@ static void CopySendEndOfRow(CopyToState cstate);\n static void CopySendInt32(CopyToState cstate, int32 val);\n static void 
CopySendInt16(CopyToState cstate, int16 val);\n \n+static void AddPublicationFiltersToQuery(CopyToState cstate, Query *query,\n+\t\t\t\t\t\t\t\t\t\t List *publication_names);\n+static Node *GetPublicationFilters(Relation rel, List *publications,\n+\t\t\t\t\t\t\t\t int varno);\n \n /*\n * Send copy start/stop messages for frontend copies. These have changed\n@@ -342,10 +351,13 @@ EndCopy(CopyToState cstate)\n \n /*\n * Turn \"COPY table_name TO\" form into \"COPY (query) TO\".\n+ *\n+ * TODO If it appears that the query for RLS also should have inh=true, remove\n+ * the argument.\n */\n RawStmt *\n CreateCopyToQuery(const CopyStmt *stmt, Relation rel, int stmt_location,\n-\t\t\t\t int stmt_len)\n+\t\t\t\t int stmt_len, bool inh)\n {\n \tSelectStmt *select;\n \tColumnRef *cr;\n@@ -413,6 +425,7 @@ CreateCopyToQuery(const CopyStmt *stmt, Relation rel, int stmt_location,\n \tfrom = makeRangeVar(get_namespace_name(RelationGetNamespace(rel)),\n \t\t\t\t\t\tpstrdup(RelationGetRelationName(rel)),\n \t\t\t\t\t\t-1);\n+\tfrom->inh = inh;\n \n \t/* Build query */\n \tselect = makeNode(SelectStmt);\n@@ -438,6 +451,7 @@ CreateCopyToQuery(const CopyStmt *stmt, Relation rel, int stmt_location,\n * 'data_dest_cb': Callback that processes the output data\n * 'attnamelist': List of char *, columns to include. NIL selects all cols.\n * 'options': List of DefElem. 
See copy_opt_item in gram.y for selections.\n+ * 'publication_names': PUBLICATION_NAMES option (also contained in 'options')\n *\n * Returns a CopyToState, to be passed to DoCopyTo() and related functions.\n */\n@@ -450,7 +464,8 @@ BeginCopyTo(ParseState *pstate,\n \t\t\tbool is_program,\n \t\t\tcopy_data_dest_cb data_dest_cb,\n \t\t\tList *attnamelist,\n-\t\t\tList *options)\n+\t\t\tList *options,\n+\t\t\tList *publication_names)\n {\n \tCopyToState cstate;\n \tbool\t\tpipe = (filename == NULL && data_dest_cb == NULL);\n@@ -605,6 +620,12 @@ BeginCopyTo(ParseState *pstate,\n \t\t\t\t\t errmsg(\"COPY query must have a RETURNING clause\")));\n \t\t}\n \n+\t\t/*\n+\t\t * If the subscriber passed the publication names, use them.\n+\t\t */\n+\t\tif (publication_names)\n+\t\t\tAddPublicationFiltersToQuery(cstate, query, publication_names);\n+\n \t\t/* plan the query */\n \t\tplan = pg_plan_query(query, pstate->p_sourcetext,\n \t\t\t\t\t\t\t CURSOR_OPT_PARALLEL_OK, NULL);\n@@ -1375,3 +1396,202 @@ CreateCopyDestReceiver(void)\n \n \treturn (DestReceiver *) self;\n }\n+\n+/*\n+ * For each table in the query add the row filters of the related publication\n+ * to the WHERE clause. While doing so, check if the current user has the\n+ * USAGE privilege on the publications.\n+ */\n+static void\n+AddPublicationFiltersToQuery(CopyToState cstate, Query *query,\n+\t\t\t\t\t\t\t List *publication_names)\n+{\n+\tList\t*publications = NIL;\n+\tIndex rtindex;\n+\tFromExpr *from_expr;\n+\tListCell\t*lc;\n+\n+\tAssert(publication_names);\n+\n+\t/* Convert the list of names to a list of OIDs. 
*/\n+\tforeach(lc, publication_names)\n+\t{\n+\t\tchar\t*pubname = strVal(lfirst(lc));\n+\t\tOid\t\tpubid;\n+\t\tPublication\t*pub;\n+\n+\t\tpubid = get_publication_oid(pubname, true);\n+\t\tif (pubid == InvalidOid)\n+\t\t{\n+\t\t\tereport(WARNING,\n+\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n+\t\t\t\t errmsg(\"publication \\\"%s\\\" does not exist\", pubname)));\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tpub = GetPublication(pubid);\n+\n+\t\tpublications = lappend(publications, pub);\n+\t}\n+\n+\tif (publications == NIL)\n+\t\tereport(ERROR, errmsg(\"no valid publication received\"));\n+\n+\t/*\n+\t * If the query references at least one table, construct or adjust the\n+\t * WHERE clause according to the publications.\n+\t */\n+\tfrom_expr = query->jointree;\n+\n+\trtindex = 1;\n+\tforeach(lc, query->rtable)\n+\t{\n+\t\tRangeTblEntry *rte;\n+\t\tRelation\tqrel;\n+\t\tList\t*pubs_matched;\n+\t\tNode\t*quals;\n+\n+\t\trte = lfirst_node(RangeTblEntry, lc);\n+\n+\t\t/*\n+\t\t * NoLock because the relation should already be locked due to the\n+\t\t * prior rewriting.\n+\t\t */\n+\t\tqrel = relation_open(rte->relid, NoLock);\n+\n+\t\t/*\n+\t\t * Clear ACL_SELECT on each RTE entry if the ACL_USAGE permission on\n+\t\t * publications should control the access, see below.\n+\t\t */\n+\t\tif (publication_security)\n+\t\t{\n+\t\t\tRTEPermissionInfo *perminfo;\n+\n+\t\t\tperminfo = getRTEPermissionInfo(query->rteperminfos, rte);\n+\t\t\tperminfo->requiredPerms &= ~ACL_SELECT;\n+\t\t}\n+\n+\t\t/*\n+\t\t * Retrieve the publications relevant to this relation, and if needed,\n+\t\t * check if the current user has the USAGE privilege on them.\n+\t\t */\n+\t\tpubs_matched = GetEffectiveRelationPublications(RelationGetRelid(qrel),\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tpublications, NULL, NULL);\n+\t\tif (pubs_matched == NIL)\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errmsg(\"no publication for relation \\\"%s\\\"\",\n+\t\t\t\t\t\t\tget_rel_name(RelationGetRelid(qrel)))));\n+\n+\t\t/* Range table 
implies there should be a FROM list. */\n+\t\tAssert(from_expr && from_expr->fromlist);\n+\n+\t\t/*\n+\t\t * Use the publication filters to construct the (additional) filter\n+\t\t * expression for this relation.\n+\t\t */\n+\t\tquals = GetPublicationFilters(qrel, pubs_matched, rtindex);\n+\t\tif (quals)\n+\t\t{\n+\t\t\tif (from_expr->quals == NULL)\n+\t\t\t{\n+\t\t\t\t/* Assign a new WHERE clause to the query. */\n+\t\t\t\tfrom_expr->quals = quals;\n+\t\t\t}\n+\t\t\telse\n+\t\t\t{\n+\t\t\t\tList\t*new_quals;\n+\n+\t\t\t\t/*\n+\t\t\t\t * AND the filter for this relation to the existing WHERE\n+\t\t\t\t * clause.\n+\t\t\t\t */\n+\t\t\t\tnew_quals = list_make2(quals, from_expr->quals);\n+\t\t\t\tfrom_expr->quals = (Node *) make_andclause(new_quals);\n+\t\t\t}\n+\t\t}\n+\n+\t\tlist_free(pubs_matched);\n+\t\trelation_close(qrel, NoLock);\n+\t\trtindex++;\n+\t}\n+}\n+\n+/*\n+ * Construct a WHERE clause for a relation according to the given list of\n+ * publications.\n+ *\n+ * Return NULL if at least one of the publications has no filter.\n+ */\n+static Node *\n+GetPublicationFilters(Relation rel, List *publications, int varno)\n+{\n+\tOid\t\trelid = RelationGetRelid(rel);\n+\tList\t *filters = NIL;\n+\tNode\t *result = NULL;\n+\tListCell *lc;\n+\tbool\t\tisvarlena;\n+\tFmgrInfo\tfmgrinfo;\n+\tOid\t\t\toutfunc;\n+\n+\tAssert(publications);\n+\n+\t/* Make sure we're ready to call the output function for the node values. */\n+\tgetTypeOutputInfo(PG_NODE_TREEOID, &outfunc, &isvarlena);\n+\tAssert(isvarlena);\n+\tfmgr_info(outfunc, &fmgrinfo);\n+\n+\t/* Retrieve the publication filters. */\n+\tforeach(lc, publications)\n+\t{\n+\t\tPublication\t\t*pub = (Publication *) lfirst(lc);\n+\t\tDatum\tattrs, qual;\n+\t\tbool\tattrs_isnull, qual_isnull;\n+\t\tchar\t *nodeStr;\n+\t\tNode\t *node;\n+\n+\t\t/* Get the filter expression. 
*/\n+\t\tGetPublicationRelationMapping(pub->oid, relid, &attrs, &attrs_isnull,\n+\t\t\t\t\t\t\t\t\t &qual, &qual_isnull);\n+\n+\t\t/*\n+\t\t * A single publication w/o expression means that the whole table\n+\t\t * should be published.\n+\t\t */\n+\t\tif (qual_isnull)\n+\t\t{\n+\t\t\tif (filters)\n+\t\t\t{\n+\t\t\t\tlist_free_deep(filters);\n+\t\t\t\tfilters = NIL;\n+\t\t\t}\n+\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\t/* Get the filter expression and add it to the list. */\n+\t\tnodeStr = OutputFunctionCall(&fmgrinfo, qual);\n+\t\tnode = stringToNode(nodeStr);\n+\t\tpfree(nodeStr);\n+\n+\t\t/*\n+\t\t * Adjust varno so that the expression references the correct\n+\t\t * range table entry.\n+\t\t */\n+\t\tChangeVarNodes(node, 1, varno, 0);\n+\n+\t\t/*\n+\t\t * XXX Is it worth checking for duplicate expressions in the list?\n+\t\t */\n+\t\tfilters = lappend(filters, node);\n+\t}\n+\n+\tif (filters)\n+\t{\n+\t\tif (list_length(filters) > 1)\n+\t\t\tresult = (Node *) make_orclause(filters);\n+\t\telse\n+\t\t\tresult = (Node *) linitial(filters);\n+\t}\n+\n+\treturn result;\n+}\ndiff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c\nindex f4ba572697..d9652604c7 100644\n--- a/src/backend/commands/publicationcmds.c\n+++ b/src/backend/commands/publicationcmds.c\n@@ -800,6 +800,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)\n \t\tBoolGetDatum(pubactions.pubtruncate);\n \tvalues[Anum_pg_publication_pubviaroot - 1] =\n \t\tBoolGetDatum(publish_via_partition_root);\n+\tvalues[Anum_pg_publication_pubowner - 1] = ObjectIdGetDatum(GetUserId());\n+\tnulls[Anum_pg_publication_pubacl - 1] = true;\n \n \ttup = heap_form_tuple(RelationGetDescr(rel), values, nulls);\n \ndiff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c\nindex a5115b9c1f..9ae47fa6e3 100644\n--- a/src/backend/executor/execMain.c\n+++ b/src/backend/executor/execMain.c\n@@ -613,7 +613,9 @@ ExecCheckOneRelPerms(RTEPermissionInfo *perminfo)\n 
\tOid\t\t\trelOid = perminfo->relid;\n \n \trequiredPerms = perminfo->requiredPerms;\n-\tAssert(requiredPerms != 0);\n+\n+\tif (requiredPerms == 0)\n+\t\treturn true;\n \n \t/*\n \t * userid to check as: current user unless we have a setuid indication.\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex a0138382a1..fc70aa2057 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -7655,6 +7655,14 @@ privilege_target:\n \t\t\t\t\tn->objs = $2;\n \t\t\t\t\t$$ = n;\n \t\t\t\t}\n+\t\t\t| PUBLICATION name_list\n+\t\t\t\t{\n+\t\t\t\t\tPrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget));\n+\t\t\t\t\tn->targtype = ACL_TARGET_OBJECT;\n+\t\t\t\t\tn->objtype = OBJECT_PUBLICATION;\n+\t\t\t\t\tn->objs = $2;\n+\t\t\t\t\t$$ = n;\n+\t\t\t\t}\n \t\t\t| SCHEMA name_list\n \t\t\t\t{\n \t\t\t\t\tPrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget));\ndiff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c\nindex 07eea504ba..2a8cfc5be0 100644\n--- a/src/backend/replication/logical/tablesync.c\n+++ b/src/backend/replication/logical/tablesync.c\n@@ -753,6 +753,27 @@ copy_read_data(void *outbuf, int minread, int maxread)\n }\n \n \n+/*\n+ * Return a comma-separated list of publications associated with the current\n+ * subscriptions.\n+ */\n+static char *\n+get_publication_names(void)\n+{\n+\tStringInfoData buf;\n+\tListCell *lc;\n+\n+\tinitStringInfo(&buf);\n+\tforeach(lc, MySubscription->publications)\n+\t{\n+\t\tif (foreach_current_index(lc) > 0)\n+\t\t\tappendStringInfoString(&buf, \", \");\n+\t\tappendStringInfoString(&buf, quote_literal_cstr(strVal(lfirst(lc))));\n+\t}\n+\n+\treturn buf.data;\n+}\n+\n /*\n * Get information about remote relation in similar fashion the RELATION\n * message provides during replication. 
This function also returns the relation\n@@ -770,7 +791,6 @@ fetch_remote_table_info(char *nspname, char *relname,\n \tOid\t\t\tqualRow[] = {TEXTOID};\n \tbool\t\tisnull;\n \tint\t\t\tnatt;\n-\tListCell *lc;\n \tBitmapset *included_cols = NULL;\n \n \tlrel->nspname = nspname;\n@@ -812,7 +832,6 @@ fetch_remote_table_info(char *nspname, char *relname,\n \tExecDropSingleTupleTableSlot(slot);\n \twalrcv_clear_result(res);\n \n-\n \t/*\n \t * Get column lists for each relation.\n \t *\n@@ -824,15 +843,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\tWalRcvExecResult *pubres;\n \t\tTupleTableSlot *tslot;\n \t\tOid\t\t\tattrsRow[] = {INT2VECTOROID};\n-\t\tStringInfoData pub_names;\n-\n-\t\tinitStringInfo(&pub_names);\n-\t\tforeach(lc, MySubscription->publications)\n-\t\t{\n-\t\t\tif (foreach_current_index(lc) > 0)\n-\t\t\t\tappendStringInfoString(&pub_names, \", \");\n-\t\t\tappendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));\n-\t\t}\n+\t\tchar\t *pub_names = get_publication_names();\n \n \t\t/*\n \t\t * Fetch info about column lists for the relation (from all the\n@@ -849,7 +860,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\t\t\t\t\t \" WHERE gpt.relid = %u AND c.oid = gpt.relid\"\n \t\t\t\t\t\t \" AND p.pubname IN ( %s )\",\n \t\t\t\t\t\t lrel->remoteid,\n-\t\t\t\t\t\t pub_names.data);\n+\t\t\t\t\t\t pub_names);\n \n \t\tpubres = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data,\n \t\t\t\t\t\t\t lengthof(attrsRow), attrsRow);\n@@ -904,8 +915,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\tExecDropSingleTupleTableSlot(tslot);\n \n \t\twalrcv_clear_result(pubres);\n-\n-\t\tpfree(pub_names.data);\n+\t\tpfree(pub_names);\n \t}\n \n \t/*\n@@ -986,6 +996,18 @@ fetch_remote_table_info(char *nspname, char *relname,\n \n \twalrcv_clear_result(res);\n \n+\tlrel->pubnames = NULL;\n+\tif (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\n+\t{\n+\t\t/*\n+\t\t * If the publication ACL is implemented, the 
publisher is responsible\n+\t\t * for checking. All we need to do is to pass the publication names.\n+\t\t * The publisher should only return the data matching these\n+\t\t * publications and only check the ACLs of these.\n+\t\t */\n+\t\tlrel->pubnames = get_publication_names();\n+\t}\n+\n \t/*\n \t * Get relation's row filter expressions. DISTINCT avoids the same\n \t * expression of a table in multiple publications from being included\n@@ -1005,21 +1027,9 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t * 3) one of the subscribed publications is declared as TABLES IN SCHEMA\n \t * that includes this relation\n \t */\n-\tif (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)\n+\telse if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)\n \t{\n-\t\tStringInfoData pub_names;\n-\n-\t\t/* Build the pubname list. */\n-\t\tinitStringInfo(&pub_names);\n-\t\tforeach(lc, MySubscription->publications)\n-\t\t{\n-\t\t\tchar\t *pubname = strVal(lfirst(lc));\n-\n-\t\t\tif (foreach_current_index(lc) > 0)\n-\t\t\t\tappendStringInfoString(&pub_names, \", \");\n-\n-\t\t\tappendStringInfoString(&pub_names, quote_literal_cstr(pubname));\n-\t\t}\n+\t\tchar\t *pub_names = get_publication_names();\n \n \t\t/* Check for row filters. */\n \t\tresetStringInfo(&cmd);\n@@ -1030,7 +1040,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\t\t\t\t\t \" WHERE gpt.relid = %u\"\n \t\t\t\t\t\t \" AND p.pubname IN ( %s )\",\n \t\t\t\t\t\t lrel->remoteid,\n-\t\t\t\t\t\t pub_names.data);\n+\t\t\t\t\t\t pub_names);\n \n \t\tres = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 1, qualRow);\n \n@@ -1069,6 +1079,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\tExecDropSingleTupleTableSlot(slot);\n \n \t\twalrcv_clear_result(res);\n+\t\tpfree(pub_names);\n \t}\n \n \tpfree(cmd.data);\n@@ -1105,7 +1116,12 @@ copy_table(Relation rel)\n \t/* Start copy on the publisher. 
*/\n \tinitStringInfo(&cmd);\n \n-\t/* Regular table with no row filter */\n+\t/*\n+\t * Regular table with no row filter.\n+\t *\n+\t * Note that \"qual\" can also be NIL due to the fact the publisher is\n+\t * supposed to handle the row filters, so that we didn't check them here.\n+\t */\n \tif (lrel.relkind == RELKIND_RELATION && qual == NIL)\n \t{\n \t\tappendStringInfo(&cmd, \"COPY %s (\",\n@@ -1122,8 +1138,6 @@ copy_table(Relation rel)\n \n \t\t\tappendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n \t\t}\n-\n-\t\tappendStringInfoString(&cmd, \") TO STDOUT\");\n \t}\n \telse\n \t{\n@@ -1165,9 +1179,20 @@ copy_table(Relation rel)\n \t\t\t}\n \t\t\tlist_free_deep(qual);\n \t\t}\n+\t}\n+\n+\tappendStringInfoString(&cmd, \") TO STDOUT\");\n \n-\t\tappendStringInfoString(&cmd, \") TO STDOUT\");\n+\tif (lrel.pubnames)\n+\t{\n+\t\t/*\n+\t\t * Tell the publisher which publications we are interested in.\n+\t\t * Publishers of recent versions do need this information to construct\n+\t\t * the query filter and to check publication privileges.\n+\t\t */\n+\t\tappendStringInfo(&cmd, \" (PUBLICATION_NAMES (%s)) \", lrel.pubnames);\n \t}\n+\n \tres = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 0, NULL);\n \tpfree(cmd.data);\n \tif (res->status != WALRCV_OK_COPY_OUT)\ndiff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c\nindex 6e0b51ada4..be3dc79585 100644\n--- a/src/backend/replication/pgoutput/pgoutput.c\n+++ b/src/backend/replication/pgoutput/pgoutput.c\n@@ -14,6 +14,7 @@\n \n #include \"access/tupconvert.h\"\n #include \"catalog/partition.h\"\n+#include \"catalog/pg_authid.h\"\n #include \"catalog/pg_publication.h\"\n #include \"catalog/pg_publication_rel.h\"\n #include \"catalog/pg_subscription.h\"\n@@ -21,12 +22,14 @@\n #include \"commands/subscriptioncmds.h\"\n #include \"executor/executor.h\"\n #include \"fmgr.h\"\n+#include \"miscadmin.h\"\n #include \"nodes/makefuncs.h\"\n #include 
\"optimizer/optimizer.h\"\n #include \"replication/logical.h\"\n #include \"replication/logicalproto.h\"\n #include \"replication/origin.h\"\n #include \"replication/pgoutput.h\"\n+#include \"utils/acl.h\"\n #include \"utils/builtins.h\"\n #include \"utils/inval.h\"\n #include \"utils/lsyscache.h\"\ndiff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\nindex 4ed3747e3f..3da005d766 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -125,6 +125,12 @@ int\t\t\twal_sender_timeout = 60 * 1000; /* maximum time to send one WAL\n \t\t\t\t\t\t\t\t\t\t\t * data message */\n bool\t\tlog_replication_commands = false;\n \n+/*\n+ * Should USAGE privilege on publications be checked? Defaults to false so\n+ * that server upgrade does not break existing logical replication.\n+ */\n+bool\t\tpublication_security = false;\n+\n /*\n * State for WalSndWakeupRequest\n */\ndiff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c\nindex 8f7522d103..8c318676e1 100644\n--- a/src/backend/utils/adt/acl.c\n+++ b/src/backend/utils/adt/acl.c\n@@ -29,6 +29,7 @@\n #include \"catalog/pg_namespace.h\"\n #include \"catalog/pg_parameter_acl.h\"\n #include \"catalog/pg_proc.h\"\n+#include \"catalog/pg_publication.h\"\n #include \"catalog/pg_tablespace.h\"\n #include \"catalog/pg_type.h\"\n #include \"commands/dbcommands.h\"\n@@ -118,6 +119,7 @@ static AclMode convert_tablespace_priv_string(text *priv_type_text);\n static Oid\tconvert_type_name(text *typename);\n static AclMode convert_type_priv_string(text *priv_type_text);\n static AclMode convert_parameter_priv_string(text *priv_text);\n+static AclMode convert_publication_priv_string(text *priv_type_text);\n static AclMode convert_role_priv_string(text *priv_type_text);\n static AclResult pg_role_aclcheck(Oid role_oid, Oid roleid, AclMode mode);\n \n@@ -844,6 +846,10 @@ acldefault(ObjectType objtype, Oid ownerId)\n \t\t\tworld_default = ACL_NO_RIGHTS;\n 
\t\t\towner_default = ACL_ALL_RIGHTS_PARAMETER_ACL;\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tworld_default = ACL_USAGE;\n+\t\t\towner_default = ACL_ALL_RIGHTS_PUBLICATION;\n+\t\t\tbreak;\n \t\tdefault:\n \t\t\telog(ERROR, \"unrecognized object type: %d\", (int) objtype);\n \t\t\tworld_default = ACL_NO_RIGHTS;\t/* keep compiler quiet */\n@@ -929,6 +935,9 @@ acldefault_sql(PG_FUNCTION_ARGS)\n \t\tcase 'p':\n \t\t\tobjtype = OBJECT_PARAMETER_ACL;\n \t\t\tbreak;\n+\t\tcase 'P':\n+\t\t\tobjtype = OBJECT_PUBLICATION;\n+\t\t\tbreak;\n \t\tcase 't':\n \t\t\tobjtype = OBJECT_TABLESPACE;\n \t\t\tbreak;\n@@ -4558,6 +4567,48 @@ convert_parameter_priv_string(text *priv_text)\n \treturn convert_any_priv_string(priv_text, parameter_priv_map);\n }\n \n+/*\n+ * has_publication_privilege_id\n+ *\t\tCheck user privileges on a publication given\n+ *\t\tpublication oid and text priv name.\n+ *\t\tcurrent_user is assumed\n+ */\n+Datum\n+has_publication_privilege_id(PG_FUNCTION_ARGS)\n+{\n+\tOid\t\t\tpuboid = PG_GETARG_OID(0);\n+\ttext\t *priv_type_text = PG_GETARG_TEXT_PP(1);\n+\tOid\t\t\troleid;\n+\tAclMode\t\tmode;\n+\tAclResult\taclresult;\n+\n+\troleid = GetUserId();\n+\tmode = convert_publication_priv_string(priv_type_text);\n+\n+\tif (!SearchSysCacheExists1(PUBLICATIONOID, ObjectIdGetDatum(puboid)))\n+\t\tPG_RETURN_NULL();\n+\n+\taclresult = object_aclcheck(PublicationRelationId, puboid, roleid, mode);\n+\n+\tPG_RETURN_BOOL(aclresult == ACLCHECK_OK);\n+}\n+\n+/*\n+ * convert_publication_priv_string\n+ *\t\tConvert text string to AclMode value.\n+ */\n+static AclMode\n+convert_publication_priv_string(text *priv_type_text)\n+{\n+\tstatic const priv_map type_priv_map[] = {\n+\t\t{\"USAGE\", ACL_USAGE},\n+\t\t{\"USAGE WITH GRANT OPTION\", ACL_GRANT_OPTION_FOR(ACL_USAGE)},\n+\t\t{NULL, 0}\n+\t};\n+\n+\treturn convert_any_priv_string(priv_type_text, type_priv_map);\n+}\n+\n /*\n * pg_has_role variants\n *\t\tThese are all named \"pg_has_role\" at the SQL level.\ndiff --git 
a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index c5a95f5dcc..ac928c83cc 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -685,6 +685,8 @@ const char *const config_group_names[] =
 	gettext_noop("Replication / Primary Server"),
 	/* REPLICATION_STANDBY */
 	gettext_noop("Replication / Standby Servers"),
+	/* REPLICATION_PUBLISHERS */
+	gettext_noop("Replication / Publishers"),
 	/* REPLICATION_SUBSCRIBERS */
 	gettext_noop("Replication / Subscribers"),
 	/* QUERY_TUNING_METHOD */
@@ -1972,6 +1974,16 @@ struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"publication_security", PGC_SUSET, REPLICATION_PUBLISHERS,
+			gettext_noop("Enable publication security."),
+			gettext_noop("When enabled, the USAGE privilege is needed to access publications.")
+		},
+		&publication_security,
+		false,
+		NULL, NULL, NULL
+	},
+
 	/* End-of-list marker */
 	{
 		{NULL, 0, 0, NULL, NULL}, NULL, false, NULL, NULL, NULL
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index d06074b86f..720b7157c2 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -353,6 +353,12 @@
 				# retrieve WAL after a failed attempt
 #recovery_min_apply_delay = 0		# minimum delay for applying changes during recovery
 
+# - Publishers -
+
+# These settings are ignored on a subscriber.
+
+#publication_security = off		# should publication privileges be checked?
+
 # - Subscribers -
 
 # These settings are ignored on a publisher.
diff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c
index 9753a6d868..7408353b72 100644
--- a/src/bin/pg_dump/dumputils.c
+++ b/src/bin/pg_dump/dumputils.c
@@ -511,6 +511,8 @@ do { \
 		CONVERT_PRIV('r', "SELECT");
 		CONVERT_PRIV('w', "UPDATE");
 	}
+	else if (strcmp(type, "PUBLICATION") == 0)
+		CONVERT_PRIV('U', "USAGE");
 	else
 		abort();
 
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 527c7651ab..56db60c443 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3950,6 +3950,8 @@ getPublications(Archive *fout, int *numPublications)
 	int			i_pubdelete;
 	int			i_pubtruncate;
 	int			i_pubviaroot;
+	int			i_pubacl;
+	int			i_acldefault;
 	int			i,
 				ntups;
 
@@ -3964,27 +3966,32 @@ getPublications(Archive *fout, int *numPublications)
 	resetPQExpBuffer(query);
 
 	/* Get the publications. */
-	if (fout->remoteVersion >= 130000)
+	if (fout->remoteVersion >= 150000)
 		appendPQExpBufferStr(query,
-							 "SELECT p.tableoid, p.oid, p.pubname, "
-							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot "
-							 "FROM pg_publication p");
+						 "SELECT p.tableoid, p.oid, p.pubname, "
+						 "p.pubowner, "
+						 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot, p.pubacl, acldefault('P', p.pubowner) AS acldefault "
+						 "FROM pg_publication p");
+	else if (fout->remoteVersion >= 130000)
+		appendPQExpBuffer(query,
+						 "SELECT p.tableoid, p.oid, p.pubname, "
+						 "p.pubowner, "
+						 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot, '{}' AS pubacl, '{}' AS acldefault "
+						 "FROM pg_publication p");
 	else if (fout->remoteVersion >= 110000)
 		appendPQExpBufferStr(query,
-							 "SELECT p.tableoid, p.oid, p.pubname, "
-							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot "
-							 "FROM pg_publication p");
+						 "SELECT p.tableoid, p.oid, p.pubname, "
+						 "p.pubowner, "
+						 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot, '{}' AS pubacl, '{}' AS acldefault "
+						 "FROM pg_publication p");
 	else
 		appendPQExpBufferStr(query,
-							 "SELECT p.tableoid, p.oid, p.pubname, "
-							 "p.pubowner, "
-							 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot "
-							 "FROM pg_publication p");
+						 "SELECT p.tableoid, p.oid, p.pubname, "
+						 "p.pubowner, "
+						 "p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot, '{}' AS pubacl, '{}' AS acldefault "
+						 "FROM pg_publication p");
 
 	res = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);
-
 	ntups = PQntuples(res);
 
 	i_tableoid = PQfnumber(res, "tableoid");
@@ -3997,6 +4004,8 @@ getPublications(Archive *fout, int *numPublications)
 	i_pubdelete = PQfnumber(res, "pubdelete");
 	i_pubtruncate = PQfnumber(res, "pubtruncate");
 	i_pubviaroot = PQfnumber(res, "pubviaroot");
+	i_pubacl = PQfnumber(res, "pubacl");
+	i_acldefault = PQfnumber(res, "acldefault");
 
 	pubinfo = pg_malloc(ntups * sizeof(PublicationInfo));
 
@@ -4021,6 +4030,11 @@ getPublications(Archive *fout, int *numPublications)
 			(strcmp(PQgetvalue(res, i, i_pubtruncate), "t") == 0);
 		pubinfo[i].pubviaroot =
 			(strcmp(PQgetvalue(res, i, i_pubviaroot), "t") == 0);
+		pubinfo[i].dacl.acl = pg_strdup(PQgetvalue(res, i, i_pubacl));
+		pubinfo[i].dacl.acldefault = pg_strdup(PQgetvalue(res, i, i_acldefault));
+		pubinfo[i].dacl.privtype = 0;
+		pubinfo[i].dacl.initprivs = NULL;
+		pubinfo[i].dobj.components |= DUMP_COMPONENT_ACL;
 
 		/* Decide whether we want to dump it */
 		selectDumpableObject(&(pubinfo[i].dobj), fout);
@@ -4124,6 +4138,11 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)
 					 NULL, pubinfo->rolname,
 					 pubinfo->dobj.catId, 0, pubinfo->dobj.dumpId);
 
+	if (pubinfo->dobj.dump & DUMP_COMPONENT_ACL)
+		dumpACL(fout, pubinfo->dobj.dumpId, InvalidDumpId, "PUBLICATION",
+				pg_strdup(fmtId(pubinfo->dobj.name)), NULL, NULL,
+				pubinfo->rolname, &pubinfo->dacl);
+
 	destroyPQExpBuffer(delq);
 	destroyPQExpBuffer(query);
 	free(qpubname);
diff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h
index e7cbd8d7ed..81770eefb2 100644
--- a/src/bin/pg_dump/pg_dump.h
+++ b/src/bin/pg_dump/pg_dump.h
@@ -613,6 +613,7 @@ typedef struct _policyInfo
 typedef struct _PublicationInfo
 {
 	DumpableObject dobj;
+	DumpableAcl dacl;
 	const char *rolname;
 	bool		puballtables;
 	bool		pubinsert;
diff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c
index c8a0bb7b3a..b37cd13a89 100644
--- a/src/bin/psql/describe.c
+++ b/src/bin/psql/describe.c
@@ -6287,6 +6287,7 @@ describePublications(const char *pattern)
 	PGresult *res;
 	bool		has_pubtruncate;
 	bool		has_pubviaroot;
+	bool		has_pubacl;
 
 	PQExpBufferData title;
 	printTableContent cont;
@@ -6303,6 +6304,7 @@ describePublications(const char *pattern)
 
 	has_pubtruncate = (pset.sversion >= 110000);
 	has_pubviaroot = (pset.sversion >= 130000);
+	has_pubacl = (pset.sversion >= 160000);
 
 	initPQExpBuffer(&buf);
 
@@ -6316,6 +6318,9 @@ describePublications(const char *pattern)
 	if (has_pubviaroot)
 		appendPQExpBufferStr(&buf,
							 ", pubviaroot");
+	if (has_pubacl)
+		appendPQExpBufferStr(&buf,
+							", pubacl");
 	appendPQExpBufferStr(&buf,
						 "\nFROM pg_catalog.pg_publication\n");
 
@@ -6367,6 +6372,8 @@ describePublications(const char *pattern)
 			ncols++;
 		if (has_pubviaroot)
 			ncols++;
+		if (has_pubacl)
+			ncols++;
 
 		initPQExpBuffer(&title);
 		printfPQExpBuffer(&title, _("Publication %s"), pubname);
@@ -6381,6 +6388,8 @@ describePublications(const char *pattern)
 			printTableAddHeader(&cont, gettext_noop("Truncates"), true, align);
 		if (has_pubviaroot)
 			printTableAddHeader(&cont, gettext_noop("Via root"), true, align);
+		if (has_pubacl)
+			printTableAddHeader(&cont, gettext_noop("Access privileges"), true, align);
 
 		printTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);
 		printTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);
@@ -6391,6 +6400,8 @@ describePublications(const char *pattern)
 			printTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);
 		if (has_pubviaroot)
 			printTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);
+		if (has_pubacl)
+			printTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);
 
 		if (!puballtables)
 		{
diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c
index 5e1882eaea..8455bd2ef0 100644
--- a/src/bin/psql/tab-complete.c
+++ b/src/bin/psql/tab-complete.c
@@ -3940,6 +3940,7 @@ psql_completion(const char *text, int start, int end)
 											"LARGE OBJECT",
 											"PARAMETER",
 											"PROCEDURE",
+											"PUBLICATION",
 											"ROUTINE",
 											"SCHEMA",
 											"SEQUENCE",
@@ -3977,6 +3978,8 @@ psql_completion(const char *text, int start, int end)
 			COMPLETE_WITH_QUERY(Query_for_list_of_languages);
 		else if (TailMatches("PROCEDURE"))
 			COMPLETE_WITH_VERSIONED_SCHEMA_QUERY(Query_for_list_of_procedures);
+		else if (TailMatches("PUBLICATION"))
+			COMPLETE_WITH_VERSIONED_QUERY(Query_for_list_of_publications);
 		else if (TailMatches("ROUTINE"))
 			COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_routines);
 		else if (TailMatches("SCHEMA"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index c0f2a8a77c..c617924c8e 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7225,6 +7225,9 @@
 { oid => '2273', descr => 'current user privilege on schema by schema oid',
 proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
 proargtypes => 'oid text', prosrc => 'has_schema_privilege_id' },
+{ oid => '9800', descr => 'current user privilege on publication by publication oid',
+ proname => 'has_publication_privilege', provolatile => 's', prorettype => 'bool',
+ proargtypes => 'oid text', prosrc => 'has_publication_privilege_id' },
 
 { oid => '2390',
 descr => 'user privilege on tablespace by username, tablespace name',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index dab5bc8444..87da458bdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -54,6 +54,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 
 	/* true if partition changes are published using root schema */
 	bool		pubviaroot;
+
+#ifdef CATALOG_VARLEN			/* variable-length fields start here */
+	/* NOTE: These fields are not present in a relcache entry's rd_rel field. */
+	/* access permissions */
+	aclitem		pubacl[1] BKI_DEFAULT(_null_);
+#endif
 } FormData_pg_publication;
 
 /* ----------------
@@ -63,6 +69,8 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 */
 typedef FormData_pg_publication *Form_pg_publication;
 
+DECLARE_TOAST(pg_publication, 9801, 9802);
+
 DECLARE_UNIQUE_INDEX_PKEY(pg_publication_oid_index, 6110, PublicationObjectIndexId, on pg_publication using btree(oid oid_ops));
 DECLARE_UNIQUE_INDEX(pg_publication_pubname_index, 6111, PublicationNameIndexId, on pg_publication using btree(pubname name_ops));
 
@@ -136,6 +144,8 @@ typedef enum PublicationPartOpt
 	PUBLICATION_PART_ALL,
 } PublicationPartOpt;
 
+extern PGDLLIMPORT bool publication_security;
+
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
diff --git a/src/include/commands/copy.h b/src/include/commands/copy.h
index dd45dba465..55a6a262de 100644
--- a/src/include/commands/copy.h
+++ b/src/include/commands/copy.h
@@ -73,6 +73,8 @@ extern void DoCopy(ParseState *pstate, const CopyStmt *stmt,
 				 uint64 *processed);
 
 extern void ProcessCopyOptions(ParseState *pstate, CopyFormatOptions *opts_out, bool is_from, List *options);
+extern List *ProcessCopyToPublicationOptions(ParseState *pstate,
+											 List *options, bool is_from);
 extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, Node *whereClause,
 								 const char *filename,
 								 bool is_program, copy_data_source_cb data_source_cb, List *attnamelist, List *options);
@@ -91,10 +93,11 @@ extern DestReceiver *CreateCopyDestReceiver(void);
 * internal prototypes
 */
 extern RawStmt *CreateCopyToQuery(const CopyStmt *stmt, Relation rel,
-								 int stmt_location, int stmt_len);
+								 int stmt_location, int stmt_len, bool inh);
 extern CopyToState BeginCopyTo(ParseState *pstate, Relation rel, RawStmt *raw_query,
							 Oid queryRelId, const char *filename, bool is_program,
-							 copy_data_dest_cb data_dest_cb, List *attnamelist, List *options);
+							 copy_data_dest_cb data_dest_cb, List *attnamelist, List *options,
+							 List *publication_names);
 extern void EndCopyTo(CopyToState cstate);
 extern uint64 DoCopyTo(CopyToState cstate);
 extern List *CopyGetAttnums(TupleDesc tupDesc, Relation rel,
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3d67787e7a..1bedf03f71 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -88,8 +88,8 @@ typedef uint64 AclMode;			/* a bitmask of privilege bits */
 #define ACL_REFERENCES	(1<<5)
 #define ACL_TRIGGER		(1<<6)
 #define ACL_EXECUTE		(1<<7)	/* for functions */
-#define ACL_USAGE		(1<<8)	/* for languages, namespaces, FDWs, and
-								 * servers */
+#define ACL_USAGE		(1<<8)	/* for languages, namespaces, FDWs, servers
+								 * and publications */
 #define ACL_CREATE		(1<<9)	/* for namespaces and databases */
 #define ACL_CREATE_TEMP (1<<10) /* for databases */
 #define ACL_CONNECT		(1<<11) /* for databases */
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 0ea2df5088..6d9b6fa250 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -113,6 +113,7 @@ typedef struct LogicalRepRelation
 	char		replident;		/* replica identity */
 	char		relkind;		/* remote relation kind */
 	Bitmapset *attkeys;		/* Bitmap of key columns */
+	char	 *pubnames;		/* publication names (comma-separated list) */
 } LogicalRepRelation;
 
 /* Type mapping info */
diff --git a/src/include/utils/acl.h b/src/include/utils/acl.h
index f8e1238fa2..eb4e5044e8 100644
--- a/src/include/utils/acl.h
+++ b/src/include/utils/acl.h
@@ -169,6 +169,7 @@ typedef struct ArrayType Acl;
 #define ACL_ALL_RIGHTS_SCHEMA		(ACL_USAGE|ACL_CREATE)
 #define ACL_ALL_RIGHTS_TABLESPACE	(ACL_CREATE)
 #define ACL_ALL_RIGHTS_TYPE			(ACL_USAGE)
+#define ACL_ALL_RIGHTS_PUBLICATION	(ACL_USAGE)
 
 /* operation codes for pg_*_aclmask */
 typedef enum
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d5a0880678..87ddeecc6e 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -75,6 +75,7 @@ enum config_group
 	REPLICATION_SENDING,
 	REPLICATION_PRIMARY,
 	REPLICATION_STANDBY,
+	REPLICATION_PUBLISHERS,
 	REPLICATION_SUBSCRIBERS,
 	QUERY_TUNING_METHOD,
 	QUERY_TUNING_COST,
diff --git a/src/test/modules/test_copy_callbacks/test_copy_callbacks.c b/src/test/modules/test_copy_callbacks/test_copy_callbacks.c
index e65771067e..9178e102bb 100644
--- a/src/test/modules/test_copy_callbacks/test_copy_callbacks.c
+++ b/src/test/modules/test_copy_callbacks/test_copy_callbacks.c
@@ -38,7 +38,7 @@ test_copy_to_callback(PG_FUNCTION_ARGS)
 	int64		processed;
 
 	cstate = BeginCopyTo(NULL, rel, NULL, RelationGetRelid(rel), NULL, false,
-						 to_cb, NIL, NIL);
+						 to_cb, NIL, NIL, NIL);
 	processed = DoCopyTo(cstate);
 	EndCopyTo(cstate);
 
diff --git a/src/test/regress/expected/copy.out b/src/test/regress/expected/copy.out
index 8a8bf43fde..b6011bea0f 100644
--- a/src/test/regress/expected/copy.out
+++ b/src/test/regress/expected/copy.out
@@ -240,3 +240,55 @@ SELECT * FROM header_copytest ORDER BY a;
 (5 rows)
 
 drop table header_copytest;
+-- Filtering by publication
+-- Suppress the warning about insufficient wal_level when creating
+-- publications.
+set client_min_messages to error;
+create role regress_copy_repl_user login replication;
+create table published_copytest (i int);
+insert into published_copytest(i) select x from generate_series(1, 10) g(x);
+create publication pub1 for table published_copytest where (i >= 7);
+set publication_security to on;
+-- Test both table name and query forms of the COPY command.
+set role regress_copy_repl_user;
+copy published_copytest to stdout (publication_names (pub1));
+7
+8
+9
+10
+copy (select i from published_copytest) to stdout (publication_names (pub1));
+7
+8
+9
+10
+reset role;
+-- Publish some more data.
+create publication pub2 for table published_copytest where (i <= 2);
+set role regress_copy_repl_user;
+copy published_copytest to stdout (publication_names (pub1, pub2));
+1
+2
+7
+8
+9
+10
+reset role;
+-- If any publication has no filter, the other filters are ignored.
+create publication pub3 for table published_copytest;
+set role regress_copy_repl_user;
+copy published_copytest to stdout (publication_names (pub1, pub2, pub3));
+1
+2
+3
+4
+5
+6
+7
+8
+9
+10
+reset role;
+reset publication_security;
+reset client_min_messages;
+drop role regress_copy_repl_user;
+drop publication pub1, pub2, pub3;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 427f87ea07..76a70c80d4 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
- Publication testpub_for_tbl_schema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_for_tbl_schema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 Tables from schemas:
@@ -135,10 +135,10 @@ Tables from schemas:
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 Tables from schemas:
@@ -147,10 +147,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test"
 
@@ -161,10 +161,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 
@@ -186,10 +186,10 @@ Publications:
 "testpub_foralltables"
 
 \dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | t | t | t | f | f | f | 
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -201,19 +201,19 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl3"
 "public.testpub_tbl3a"
 
 \dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl3"
 
@@ -234,10 +234,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_parted"
 
@@ -252,10 +252,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | t | 
 Tables:
 "public.testpub_parted"
 
@@ -284,10 +284,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -300,10 +300,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
 "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -319,10 +319,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -330,10 +330,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -366,10 +366,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -379,10 +379,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -497,10 +497,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
- Publication testpub6
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -714,10 +714,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
- Publication testpub_table_ins
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | t | f
+ Publication testpub_table_ins
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | t | f | 
 Tables:
 "public.testpub_tbl5" (a)
 
@@ -891,10 +891,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
- Publication testpub_both_filters
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_both_filters
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1099,10 +1099,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR: publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 "public.testpub_tbl1"
@@ -1140,10 +1140,10 @@ Publications:
 "testpub_fortbl"
 
 \dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | 
Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | f | f | \n Tables:\n \"pub_test.testpub_nopk\"\n \"public.testpub_tbl1\"\n@@ -1214,17 +1214,57 @@ ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail\n ERROR: permission denied to change owner of publication \"testpub4\"\n HINT: The owner of a FOR TABLES IN SCHEMA publication must be a superuser.\n ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok\n+-- Test the USAGE privilege.\n+SET ROLE regress_publication_user;\n+CREATE ROLE regress_publication_user4;\n+-- First, check that USAGE is granted to PUBLIC by default.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+ has_publication_privilege \n+---------------------------\n+ t\n+(1 row)\n+\n+-- Revoke the USAGE privilege from PUBLIC.\n+SET ROLE regress_publication_user;\n+REVOKE USAGE ON PUBLICATION testpub4 FROM public;\n+-- regress_publication_user4 does not have the privilege now.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+ has_publication_privilege \n+---------------------------\n+ f\n+(1 row)\n+\n+-- Grant USAGE to regress_publication_user4 explicitly.\n+SET ROLE regress_publication_user;\n+GRANT USAGE ON PUBLICATION testpub4 TO regress_publication_user4;\n+-- regress_publication_user4 does have the privilege now.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+ has_publication_privilege \n+---------------------------\n+ t\n+(1 row)\n+\n SET ROLE regress_publication_user;\n DROP PUBLICATION testpub4;\n DROP ROLE 
regress_publication_user3;\n+DROP ROLE regress_publication_user4;\n REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;\n DROP TABLE testpub_parted;\n DROP TABLE testpub_tbl1;\n \\dRp+ testpub_default\n- Publication testpub_default\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | f | f\n+ Publication testpub_default\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | f | f | \n (1 row)\n \n -- fail - must be owner of publication\n@@ -1263,19 +1303,19 @@ CREATE TABLE \"CURRENT_SCHEMA\".\"CURRENT_SCHEMA\"(id int);\n SET client_min_messages = 'ERROR';\n CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;\n \\dRp+ testpub2_forschema\n- Publication testpub2_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user 
| f | t | t | t | t | f\n+ Publication testpub2_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1289,44 +1329,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA \"CURRENT_SCHEMA\", CUR\n CREATE PUBLICATION testpub_fortable FOR TABLE \"CURRENT_SCHEMA\".\"CURRENT_SCHEMA\";\n RESET client_min_messages;\n \\dRp+ testpub3_forschema\n- Publication testpub3_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub3_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"public\"\n \n \\dRp+ testpub4_forschema\n- Publication testpub4_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub4_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"CURRENT_SCHEMA\"\n \n \\dRp+ testpub5_forschema\n- Publication testpub5_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
\n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub5_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"CURRENT_SCHEMA\"\n \"public\"\n \n \\dRp+ testpub6_forschema\n- Publication testpub6_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub6_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"CURRENT_SCHEMA\"\n \"public\"\n \n \\dRp+ testpub_fortable\n- Publication testpub_fortable\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub_fortable\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables:\n \"CURRENT_SCHEMA.CURRENT_SCHEMA\"\n \n@@ -1360,10 +1400,10 @@ ERROR: schema \"testpub_view\" does not exist\n -- dropping the schema should reflect the change in publication\n DROP SCHEMA pub_test3;\n \\dRp+ testpub2_forschema\n- Publication 
testpub2_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub2_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1371,20 +1411,20 @@ Tables from schemas:\n -- renaming the schema should reflect the change in publication\n ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;\n \\dRp+ testpub2_forschema\n- Publication testpub2_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub2_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1_renamed\"\n \"pub_test2\"\n \n ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;\n \\dRp+ testpub2_forschema\n- Publication testpub2_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub2_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
\n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1392,10 +1432,10 @@ Tables from schemas:\n -- alter publication add schema\n ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1404,10 +1444,10 @@ Tables from schemas:\n ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;\n ERROR: schema \"non_existent_schema\" does not exist\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1416,10 +1456,10 @@ Tables from schemas:\n ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;\n ERROR: schema \"pub_test1\" is already 
member of publication \"testpub1_forschema\"\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1427,10 +1467,10 @@ Tables from schemas:\n -- alter publication drop schema\n ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n@@ -1438,10 +1478,10 @@ Tables from schemas:\n ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;\n ERROR: tables from schema \"pub_test2\" are not part of the publication\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | 
Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n@@ -1449,29 +1489,29 @@ Tables from schemas:\n ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;\n ERROR: schema \"non_existent_schema\" does not exist\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n -- drop all schemas\n ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n (1 row)\n \n -- alter publication set multiple schema\n ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | 
Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1480,10 +1520,10 @@ Tables from schemas:\n ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;\n ERROR: schema \"non_existent_schema\" does not exist\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1492,10 +1532,10 @@ Tables from schemas:\n -- removing the duplicate schemas\n ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
\n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n@@ -1574,18 +1614,18 @@ SET client_min_messages = 'ERROR';\n CREATE PUBLICATION testpub3_forschema;\n RESET client_min_messages;\n \\dRp+ testpub3_forschema\n- Publication testpub3_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub3_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n (1 row)\n \n ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;\n \\dRp+ testpub3_forschema\n- Publication testpub3_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub3_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n@@ -1595,20 +1635,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA\n CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;\n RESET client_min_messages;\n \\dRp+ testpub_forschema_fortable\n- Publication testpub_forschema_fortable\n- Owner | All tables | Inserts | Updates | 
Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub_forschema_fortable\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables:\n \"pub_test2.tbl1\"\n Tables from schemas:\n \"pub_test1\"\n \n \\dRp+ testpub_fortable_forschema\n- Publication testpub_fortable_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub_fortable_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables:\n \"pub_test2.tbl1\"\n Tables from schemas:\ndiff --git a/src/test/regress/sql/copy.sql b/src/test/regress/sql/copy.sql\nindex f9da7b1508..4174823cff 100644\n--- a/src/test/regress/sql/copy.sql\n+++ b/src/test/regress/sql/copy.sql\n@@ -268,3 +268,39 @@ a\tc\tb\n \n SELECT * FROM header_copytest ORDER BY a;\n drop table header_copytest;\n+\n+-- Filtering by publication\n+\n+-- Suppress the warning about insufficient wal_level when creating\n+-- publications.\n+set client_min_messages to error;\n+\n+create role regress_copy_repl_user login replication;\n+create table published_copytest (i int);\n+insert into published_copytest(i) select x from generate_series(1, 10) g(x);\n+create publication pub1 for table published_copytest where (i >= 7);\n+\n+set publication_security to on;\n+\n+-- Test both table name and 
query forms of the COPY command.\n+set role regress_copy_repl_user;\n+copy published_copytest to stdout (publication_names (pub1));\n+copy (select i from published_copytest) to stdout (publication_names (pub1));\n+reset role;\n+\n+-- Publish some more data.\n+create publication pub2 for table published_copytest where (i <= 2);\n+set role regress_copy_repl_user;\n+copy published_copytest to stdout (publication_names (pub1, pub2));\n+reset role;\n+\n+-- If any publication has no filter, the other filters are ignored.\n+create publication pub3 for table published_copytest;\n+set role regress_copy_repl_user;\n+copy published_copytest to stdout (publication_names (pub1, pub2, pub3));\n+reset role;\n+\n+reset publication_security;\n+reset client_min_messages;\n+drop role regress_copy_repl_user;\n+drop publication pub1, pub2, pub3;\ndiff --git a/src/test/regress/sql/publication.sql b/src/test/regress/sql/publication.sql\nindex a47c5939d5..303870a1e9 100644\n--- a/src/test/regress/sql/publication.sql\n+++ b/src/test/regress/sql/publication.sql\n@@ -808,9 +808,37 @@ SET ROLE regress_publication_user3;\n ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail\n ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok\n \n+-- Test the USAGE privilege.\n+SET ROLE regress_publication_user;\n+CREATE ROLE regress_publication_user4;\n+-- First, check that USAGE is granted to PUBLIC by default.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+\n+-- Revoke the USAGE privilege from PUBLIC.\n+SET ROLE regress_publication_user;\n+REVOKE USAGE ON PUBLICATION testpub4 FROM public;\n+-- regress_publication_user4 does not have the privilege now.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+\n+-- Grant USAGE to regress_publication_user4 
explicitly.\n+SET ROLE regress_publication_user;\n+GRANT USAGE ON PUBLICATION testpub4 TO regress_publication_user4;\n+-- regress_publication_user4 does have the privilege now.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+\n SET ROLE regress_publication_user;\n DROP PUBLICATION testpub4;\n DROP ROLE regress_publication_user3;\n+DROP ROLE regress_publication_user4;\n \n REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;\n \ndiff --git a/src/test/subscription/t/027_nosuperuser.pl b/src/test/subscription/t/027_nosuperuser.pl\nindex 59192dbe2f..31e94514c1 100644\n--- a/src/test/subscription/t/027_nosuperuser.pl\n+++ b/src/test/subscription/t/027_nosuperuser.pl\n@@ -7,8 +7,10 @@ use warnings;\n use PostgreSQL::Test::Cluster;\n use Test::More;\n \n-my ($node_publisher, $node_subscriber, $publisher_connstr, $result, $offset);\n+my ($node_publisher, $node_subscriber, $publisher_connstr, $result, $offset,\n+\t$offset_pub);\n $offset = 0;\n+$offset_pub = 0;\n \n sub publish_insert\n {\n@@ -103,7 +105,8 @@ $node_publisher->init(allows_streaming => 'logical');\n $node_subscriber->init;\n $node_publisher->start;\n $node_subscriber->start;\n-$publisher_connstr = $node_publisher->connstr . ' dbname=postgres';\n+# Non-super user, so that we can test publication privileges.\n+$publisher_connstr = $node_publisher->connstr . 
' dbname=postgres user=regress_alice';\n my %remainder_a = (\n \tpublisher => 0,\n \tsubscriber => 1);\n@@ -141,6 +144,8 @@ for my $node ($node_publisher, $node_subscriber)\n }\n $node_publisher->safe_psql(\n \t'postgres', qq(\n+ALTER ROLE regress_alice REPLICATION;\n+\n SET SESSION AUTHORIZATION regress_alice;\n \n CREATE PUBLICATION alice\n@@ -316,4 +321,53 @@ expect_replication(\"alice.unpartitioned\", 2, 23, 25,\n \t\"nosuperuser nobypassrls table owner can replicate delete into unpartitioned despite rls\"\n );\n \n+# Test publication permissions.\n+$node_publisher->append_conf(\n+\t'postgresql.conf',\n+\tqq[\n+publication_security = on\n+]);\n+$node_publisher->restart;\n+\n+# First, make sure that the user specified in the subscription is not able to\n+# access the data, then do some changes. (By deleting everything we make the\n+# following checks simpler.)\n+$node_publisher->safe_psql(\n+\t'postgres', qq(\n+REVOKE USAGE ON PUBLICATION alice FROM PUBLIC;\n+REVOKE USAGE ON PUBLICATION alice FROM regress_alice;\n+ALTER DATABASE postgres SET publication_security TO on;\n+\n+DELETE FROM alice.unpartitioned;\n+));\n+# Missing permission should cause error.\n+expect_failure(\"alice.unpartitioned\", 2, 23, 25,\n+\t\t\t qr/ERROR: ( [A-Z0-9]+:)? permission denied for publication alice/msi, 0);\n+# Check that the missing privilege makes table synchronization fail too.\n+$node_subscriber->safe_psql(\n+\t'postgres', qq(\n+SET SESSION AUTHORIZATION regress_admin;\n+DROP SUBSCRIPTION admin_sub;\n+TRUNCATE TABLE alice.unpartitioned;\n+CREATE SUBSCRIPTION admin_sub CONNECTION '$publisher_connstr' PUBLICATION alice;\n+));\n+# Note that expect_failure() does not wait for the end of the synchronization,\n+# so if there was any data on publisher side and if it found its way to the\n+# subscriber, the function might still see an empty table. 
So we only rely on\n+# the function to check the error message.\n+expect_failure(\"alice.unpartitioned\", 0, '', '',\n+\t\t\t qr/ERROR: ( [A-Z0-9]+:)? permission denied for publication alice/msi, 0);\n+# Restore the privilege on the publication.\n+$node_publisher->safe_psql(\n+\t'postgres', qq(\n+GRANT USAGE ON PUBLICATION alice TO regress_alice;\n+));\n+# Wait for synchronization to complete.\n+$node_subscriber->wait_for_subscription_sync;\n+# The replication should work again now.\n+publish_insert(\"alice.unpartitioned\", 1);\n+expect_replication(\"alice.unpartitioned\", 1, 1, 1,\n+ \"unpartitioned is replicated as soon as regress_alic has permissions on alice publication\"\n+);\n+\n done_testing();\n-- \n2.31.1",
"msg_date": "Wed, 01 Feb 2023 13:02:07 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "FYI this looks like it needs a rebase due to a conflict in copy.c and\nan offset in pgoutput.c.\n\nIs there anything specific that still needs review or do you think\nyou've handled all Peter's concerns? In particular, is there \"a\ncomprehensive description of what it is trying to do\"? :)\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:30:09 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n\n> FYI this looks like it needs a rebase due to a conflict in copy.c and\n> an offset in pgoutput.c.\n> \n> Is there anything specific that still needs review or do you think\n> you've handled all Peter's concerns? In particular, is there \"a\n> comprehensive description of what it is trying to do\"? :)\n\nI tried to improve the documentation and commit messages in v05. v06 (just\nrebased) is attached.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n\n From 281272f99527ab53547d4d6a6ce71d2d5bfe7b14 Mon Sep 17 00:00:00 2001\nFrom: Antonin Houska <ah@cybertec.at>\nDate: Wed, 15 Mar 2023 04:21:01 +0100\nSubject: [PATCH 2/2] Implement the USAGE privilege on PUBLICATION.\n\nPublication row filters and column lists can be used to prevent subsets of\ndata from being replicated via the logical replication system. These features\ncan address performance issues, but currently should not be used for security\npurposes, such as hiding sensitive data from the subscribers. The problem is\nthat any subscriber can get the data from any publication, so if the sensitive\ndata is deliberately not published via one publication, it can still be\navailable via another one (supposedly created for another subscriber).\n\nThis patch adds an ACL column to the pg_publication catalog, implements the\ncorresponding checks and enhances the GRANT and REVOKE commands to grant and\nrevoke the USAGE privilege on publication to / from roles. The USAGE privilege\nis initially granted to the PUBLIC group (so that existing configurations\ndon't get broken) but the user can revoke it and grant it only to individual\nsubscription users (i.e. users mentioned in the subscription connection\nconfiguration). 
Thus the publisher instance can refuse to send data that a given
subscriber is not supposed to receive.

Obviously, the publication privileges are checked on the publisher side,
otherwise the implementation wouldn't be secure. The output plugin
(pgoutput.c) is the easy part because it already receives the list of
publications whose data it should send to the subscriber. The initial table
synchronization is a little bit tricky because so far the \"tablesync worker\"
(running on the subscriber side) was responsible for constructing the SQL
query for the COPY TO command, which is executed on the publisher side.

This patch adds a new option PUBLICATION_NAMES to the COPY TO command. The
subscriber uses it to pass a list of publications to the publisher. The
publisher checks if the subscription user has the USAGE privilege on each
publication, retrieves the corresponding data (i.e. rows matching the row
filters of the publications) and sends it to the subscriber.

Since the publisher and subscriber instances can be on different major
versions of postgres, and since old subscribers cannot send the publication
names during the initial table synchronization, a new configuration variable
\"publication_security\" was added. The default value is \"off\", meaning that the
publisher does not require the COPY TO command to contain the
PUBLICATION_NAMES option. If the option is passed anyway, the publisher does
not check the privileges on the listed publications, but it does perform row
filtering according to the publication filters. Thus an upgrade of the
publisher instance does not break anything.

Once all the subscribers have migrated to the postgres version that supports
this feature, this variable should be set to \"on\". At that moment the
publisher starts to require the presence of the PUBLICATION_NAMES option in
the COPY TO command, as long as the COPY TO is executed by a role which has
the REPLICATION privilege. 
(Roles w/o the REPLICATION privilege aren't
currently allowed to use the PUBLICATION_NAMES option.)
---
 doc/src/sgml/catalogs.sgml | 9 +
 doc/src/sgml/config.sgml | 28 ++
 doc/src/sgml/ddl.sgml | 14 +
 doc/src/sgml/logical-replication.sgml | 72 +--
 doc/src/sgml/ref/copy.sgml | 36 ++
 doc/src/sgml/ref/grant.sgml | 9 +-
 src/backend/catalog/aclchk.c | 22 +
 src/backend/catalog/namespace.c | 34 +-
 src/backend/catalog/objectaddress.c | 2 +-
 src/backend/catalog/pg_publication.c | 20 +-
 src/backend/commands/copy.c | 168 ++++++-
 src/backend/commands/copyto.c | 218 ++++++++-
 src/backend/commands/publicationcmds.c | 2 +
 src/backend/executor/execMain.c | 4 +-
 src/backend/parser/gram.y | 8 +
 src/backend/replication/logical/tablesync.c | 91 ++--
 src/backend/replication/pgoutput/pgoutput.c | 3 +
 src/backend/replication/walsender.c | 6 +
 src/backend/utils/adt/acl.c | 51 +++
 src/backend/utils/misc/guc_tables.c | 12 +
 src/backend/utils/misc/postgresql.conf.sample | 6 +
 src/bin/pg_dump/dumputils.c | 2 +
 src/bin/pg_dump/pg_dump.c | 47 +-
 src/bin/pg_dump/pg_dump.h | 1 +
 src/bin/psql/describe.c | 11 +
 src/bin/psql/tab-complete.c | 3 +
 src/include/catalog/pg_proc.dat | 3 +
 src/include/catalog/pg_publication.h | 10 +
 src/include/commands/copy.h | 5 +-
 src/include/replication/logicalproto.h | 1 +
 src/include/utils/acl.h | 1 +
 src/include/utils/guc_tables.h | 1 +
 .../test_copy_callbacks/test_copy_callbacks.c | 2 +-
 src/test/regress/expected/copy.out | 52 +++
 src/test/regress/expected/publication.out | 424 ++++++++++--------
 src/test/regress/sql/copy.sql | 36 ++
 src/test/regress/sql/publication.sql | 28 ++
 src/test/subscription/t/027_nosuperuser.pl | 58 ++-
 38 files changed, 1197 insertions(+), 303 deletions(-)

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 746baf5053..c5baafceef 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -6361,6 +6361,15 @@ 
SCRAM-SHA-256$<replaceable><iteration count></replaceable>:<replaceable>&l\n publication instead of its own.\n </para></entry>\n </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>pubacl</structfield> <type>aclitem[]</type>\n+ </para>\n+ <para>\n+ Access privileges; see <xref linkend=\"ddl-priv\"/> for details\n+ </para></entry>\n+ </row>\n </tbody>\n </tgroup>\n </table>\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex e5c41cc6c6..b9caae4423 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -4958,6 +4958,34 @@ ANY <replaceable class=\"parameter\">num_sync</replaceable> ( <replaceable class=\"\n </variablelist>\n </sect2>\n \n+ <sect2 id=\"runtime-config-replication-publisher\">\n+ <title>Publishers</title>\n+\n+ <para>\n+ These settings control the behavior of a logical replication publisher.\n+ Their values on the subscriber are irrelevant.\n+ </para>\n+\n+ <variablelist>\n+\n+ <varlistentry id=\"guc-publication-security\" xreflabel=\"publication_security\">\n+ <term><varname>publication_security</varname> (<type>boolean</type>)\n+ <indexterm>\n+ <primary><varname>publication_security</varname> configuration parameter</primary>\n+ <secondary>in a publisher</secondary>\n+ </indexterm>\n+ </term>\n+ <listitem>\n+ <para>\n+ Specifies whether the publisher should check the publication\n+ privileges before it sends data to the subscriber. 
See\n+ <xref linkend=\"logical-replication-security\"/> for more details.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+ </variablelist>\n+ </sect2>\n+\n <sect2 id=\"runtime-config-replication-subscriber\">\n <title>Subscribers</title>\n \ndiff --git a/doc/src/sgml/ddl.sgml b/doc/src/sgml/ddl.sgml\nindex 5179125510..9a71790678 100644\n--- a/doc/src/sgml/ddl.sgml\n+++ b/doc/src/sgml/ddl.sgml\n@@ -1963,6 +1963,13 @@ REVOKE ALL ON accounts FROM PUBLIC;\n statements that have previously performed this lookup, so this is not\n a completely secure way to prevent object access.\n </para>\n+ <para>\n+ For publications, allows logical replication via particular\n+ publication. The user specified in\n+ the <link linkend=\"sql-createsubscription\"><command>CREATE\n+ SUBSCRIPTION</command></link> command must have this privilege on all\n+ publications listed in that command.\n+ </para>\n <para>\n For sequences, allows use of the\n <function>currval</function> and <function>nextval</function> functions.\n@@ -2156,6 +2163,7 @@ REVOKE ALL ON accounts FROM PUBLIC;\n <literal>FOREIGN DATA WRAPPER</literal>,\n <literal>FOREIGN SERVER</literal>,\n <literal>LANGUAGE</literal>,\n+ <literal>PUBLICATION</literal>,\n <literal>SCHEMA</literal>,\n <literal>SEQUENCE</literal>,\n <literal>TYPE</literal>\n@@ -2252,6 +2260,12 @@ REVOKE ALL ON accounts FROM PUBLIC;\n <entry>none</entry>\n <entry><literal>\\dconfig+</literal></entry>\n </row>\n+ <row>\n+ <entry><literal>PUBLICATION</literal></entry>\n+ <entry><literal>U</literal></entry>\n+ <entry>U</entry>\n+ <entry><literal>\\dRp+</literal></entry>\n+ </row>\n <row>\n <entry><literal>SCHEMA</literal></entry>\n <entry><literal>UC</literal></entry>\ndiff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml\nindex 1bd5660c87..64774e68cd 100644\n--- a/doc/src/sgml/logical-replication.sgml\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -898,26 +898,26 @@ CREATE PUBLICATION\n <command>psql</command> can be used to 
show the row filter expressions (if
 defined) for each publication.
 <programlisting>
 test_pub=# \\dRp+
 Publication p1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
-----------+------------+---------+---------+---------+-----------+----------
- postgres | f | t | t | t | t | f
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges
+----------+------------+---------+---------+---------+-----------+----------+-------------------
+ postgres | f | t | t | t | t | f |
 Tables:
 \"public.t1\" WHERE ((a > 5) AND (c = 'NSW'::text))
 
 Publication p2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
-----------+------------+---------+---------+---------+-----------+----------
- postgres | f | t | t | t | t | f
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges
+----------+------------+---------+---------+---------+-----------+----------+-------------------
+ postgres | f | t | t | t | t | f |
 Tables:
 \"public.t1\"
 \"public.t2\" WHERE (e = 99)
 
 Publication p3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
-----------+------------+---------+---------+---------+-----------+----------
- postgres | f | t | t | t | t | f
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges
+----------+------------+---------+---------+---------+-----------+----------+-------------------
+ postgres | f | t | t | t | t | f |
 Tables:
 \"public.t2\" WHERE (d = 10)
 \"public.t3\" WHERE (g = 10)
@@ -1259,10 +1259,11 @@ test_sub=# SELECT * FROM child ORDER BY a;
 
 <para>
 The choice of columns can be based on behavioral or performance reasons.
- However, do not rely on this feature for security: a malicious subscriber
- is able to obtain data from columns that are not specifically
- published. 
If security is a consideration, protections can be applied
- at the publisher side.
+ However, if you want to use this feature for security, please consider
+ using the privileges on publication, as explained in
+ <xref linkend=\"logical-replication-security\"/>. Otherwise a malicious
+ subscriber may be able to use other publications to obtain data from
+ columns that are not specifically published via your publication.
 </para>
 
 <para>
@@ -1360,9 +1361,9 @@ CREATE PUBLICATION
 <programlisting>
 test_pub=# \\dRp+
 Publication p1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root
-----------+------------+---------+---------+---------+-----------+----------
- postgres | f | t | t | t | t | f
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges
+----------+------------+---------+---------+---------+-----------+----------+-------------------
+ postgres | f | t | t | t | t | f |
 Tables:
 \"public.t1\" (id, a, b, d)
 </programlisting></para>
@@ -1724,12 +1725,6 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER
 and it must have the <literal>LOGIN</literal> attribute.
 </para>
 
- <para>
- In order to be able to copy the initial table data, the role used for the
- replication connection must have the <literal>SELECT</literal> privilege on
- a published table (or be a superuser).
- </para>
-
 <para>
 To create a publication, the user must have the <literal>CREATE</literal>
 privilege in the database.
@@ -1743,16 +1738,25 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER
 </para>
 
 <para>
- There are currently no privileges on publications. Any subscription (that
- is able to connect) can access any publication. 
Thus, if you intend to\n- hide some information from particular subscribers, such as by using row\n- filters or column lists, or by not adding the whole table to the\n- publication, be aware that other publications in the same database could\n- expose the same information. Publication privileges might be added to\n- <productname>PostgreSQL</productname> in the future to allow for\n- finer-grained access control.\n+ To replicate data, the role used for the replication connection must have\n+ the <literal>USAGE</literal> privilege on the publication. In such a case,\n+ the subscription role needs neither the <literal>SELECT</literal>\n+ privileges on the replicated tables nor the <literal>USAGE</literal>\n+ privilege on the containing schemas.\n </para>\n \n+ <note>\n+ <para>\n+ The <literal>USAGE</literal> privilege on publication is only checked if\n+ the <link linkend=\"guc-publication-security\"><varname>publication_security</varname></link>\n+ configuration parameter is set. The default is <literal>off</literal>. It\n+ should only be set to <literal>on</literal> if all the subscribers are\n+ on <productname>PostgreSQL</productname> server version 16 or later. 
The\n+ older versions do not send the publication names for the initial table\n+ synchronization, so they would fail to receive the data.\n+ </para>\n+ </note>\n+\n <para>\n To create a subscription, the user must be a superuser.\n </para>\n@@ -1812,6 +1816,12 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER\n <link linkend=\"guc-wal-sender-timeout\"><varname>wal_sender_timeout</varname></link>.\n </para>\n \n+ <para>\n+ <link linkend=\"guc-publication-security\"><varname>publication_security</varname></link>\n+ must be set to <literal>on</literal> if the publisher is supposed to check\n+ the publication privileges.\n+ </para>\n+\n </sect2>\n \n <sect2 id=\"logical-replication-config-subscriber\">\ndiff --git a/doc/src/sgml/ref/copy.sgml b/doc/src/sgml/ref/copy.sgml\nindex 5e591ed2e6..3bc199e701 100644\n--- a/doc/src/sgml/ref/copy.sgml\n+++ b/doc/src/sgml/ref/copy.sgml\n@@ -44,6 +44,7 @@ COPY { <replaceable class=\"parameter\">table_name</replaceable> [ ( <replaceable\n FORCE_NULL ( <replaceable class=\"parameter\">column_name</replaceable> [, ...] )\n ENCODING '<replaceable class=\"parameter\">encoding_name</replaceable>'\n DEFAULT '<replaceable class=\"parameter\">default_string</replaceable>'\n+ PUBLICATION_NAMES ( <replaceable class=\"parameter\">publication_name</replaceable> [, ...] )\n </synopsis>\n </refsynopsisdiv>\n \n@@ -382,6 +383,41 @@ COPY { <replaceable class=\"parameter\">table_name</replaceable> [ ( <replaceable\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term><replaceable class=\"parameter\">publication_name</replaceable></term>\n+ <listitem>\n+ <para>\n+ The name of an\n+ existing <link linkend=\"logical-replication-publication\">publication</link>.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n+\n+ <varlistentry>\n+ <term><literal>PUBLICATION_NAMES</literal></term>\n+ <listitem>\n+ <para>\n+ Specifies a list of publications. 
Only rows that match the
+ <link linkend=\"logical-replication-row-filter\">row filter</link> of at
+ least one of the publications are copied. If at least one publication in
+ the list has no row filter, the whole table contents will be copied.
+ </para>
+ <para>
+ If
+ the <link linkend=\"guc-publication-security\">publication_security</link>
+ configuration parameter is <literal>on</literal>, the list is required,
+ and the user needs to have the <literal>USAGE</literal> privilege on all
+ the publications in the list which are actually used to retrieve the
+ data from the given table.
+ </para>
+ <para>
+ This option is allowed only in <command>COPY TO</command>. Currently,
+ only users with the <literal>REPLICATION</literal> privilege can use
+ this option.
+ </para>
+ </listitem>
+ </varlistentry>
+
 <varlistentry>
 <term><literal>WHERE</literal></term>
 <listitem>
diff --git a/doc/src/sgml/ref/grant.sgml b/doc/src/sgml/ref/grant.sgml
index 35bf0332c8..329a4f9023 100644
--- a/doc/src/sgml/ref/grant.sgml
+++ b/doc/src/sgml/ref/grant.sgml
@@ -82,6 +82,11 @@ GRANT { { SET | ALTER SYSTEM } [, ... ] | ALL [ PRIVILEGES ] }
 TO <replaceable class=\"parameter\">role_specification</replaceable> [, ...] [ WITH GRANT OPTION ]
 [ GRANTED BY <replaceable class=\"parameter\">role_specification</replaceable> ]
 
+GRANT { USAGE [, ... ] | ALL [ PRIVILEGES ] }
+ ON PUBLICATION <replaceable class=\"parameter\">publication_name</replaceable> [, ...]
+ TO <replaceable class=\"parameter\">role_specification</replaceable> [, ...] [ WITH GRANT OPTION ]
+ [ GRANTED BY <replaceable class=\"parameter\">role_specification</replaceable> ]
+
 GRANT { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
 ON SCHEMA <replaceable>schema_name</replaceable> [, ...]
 TO <replaceable class=\"parameter\">role_specification</replaceable> [, ...] 
[ WITH GRANT OPTION ]\n@@ -513,8 +518,8 @@ GRANT admins TO joe;\n </para>\n \n <para>\n- Privileges on databases, tablespaces, schemas, languages, and\n- configuration parameters are\n+ Privileges on databases, tablespaces, schemas, languages, configuration\n+ parameters and publications are\n <productname>PostgreSQL</productname> extensions.\n </para>\n </refsect1>\ndiff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\nindex c4232344aa..b7dc203859 100644\n--- a/src/backend/catalog/aclchk.c\n+++ b/src/backend/catalog/aclchk.c\n@@ -253,6 +253,9 @@ restrict_and_check_grant(bool is_grant, AclMode avail_goptions, bool all_privs,\n \t\tcase OBJECT_FUNCTION:\n \t\t\twhole_mask = ACL_ALL_RIGHTS_FUNCTION;\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\twhole_mask = ACL_ALL_RIGHTS_PUBLICATION;\n+\t\t\tbreak;\n \t\tcase OBJECT_LANGUAGE:\n \t\t\twhole_mask = ACL_ALL_RIGHTS_LANGUAGE;\n \t\t\tbreak;\n@@ -485,6 +488,10 @@ ExecuteGrantStmt(GrantStmt *stmt)\n \t\t\tall_privileges = ACL_ALL_RIGHTS_FUNCTION;\n \t\t\terrormsg = gettext_noop(\"invalid privilege type %s for function\");\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tall_privileges = ACL_ALL_RIGHTS_PUBLICATION;\n+\t\t\terrormsg = gettext_noop(\"invalid privilege type %s for publication\");\n+\t\t\tbreak;\n \t\tcase OBJECT_LANGUAGE:\n \t\t\tall_privileges = ACL_ALL_RIGHTS_LANGUAGE;\n \t\t\terrormsg = gettext_noop(\"invalid privilege type %s for language\");\n@@ -621,6 +628,9 @@ ExecGrantStmt_oids(InternalGrant *istmt)\n \t\tcase OBJECT_LARGEOBJECT:\n \t\t\tExecGrant_Largeobject(istmt);\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tExecGrant_common(istmt, PublicationRelationId, ACL_ALL_RIGHTS_PUBLICATION, NULL);\n+\t\t\tbreak;\n \t\tcase OBJECT_SCHEMA:\n \t\t\tExecGrant_common(istmt, NamespaceRelationId, ACL_ALL_RIGHTS_SCHEMA, NULL);\n \t\t\tbreak;\n@@ -731,6 +741,16 @@ objectNamesToOids(ObjectType objtype, List *objnames, bool is_grant)\n \t\t\t\tobjects = lappend_oid(objects, 
lobjOid);\n \t\t\t}\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tforeach(cell, objnames)\n+\t\t\t{\n+\t\t\t\tchar\t *nspname = strVal(lfirst(cell));\n+\t\t\t\tOid\t\t\toid;\n+\n+\t\t\t\toid = get_publication_oid(nspname, false);\n+\t\t\t\tobjects = lappend_oid(objects, oid);\n+\t\t\t}\n+\t\t\tbreak;\n \t\tcase OBJECT_SCHEMA:\n \t\t\tforeach(cell, objnames)\n \t\t\t{\n@@ -3023,6 +3043,8 @@ pg_aclmask(ObjectType objtype, Oid object_oid, AttrNumber attnum, Oid roleid,\n \t\t\treturn object_aclmask(DatabaseRelationId, object_oid, roleid, mask, how);\n \t\tcase OBJECT_FUNCTION:\n \t\t\treturn object_aclmask(ProcedureRelationId, object_oid, roleid, mask, how);\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\treturn object_aclmask(PublicationRelationId, object_oid, roleid, mask, how);\n \t\tcase OBJECT_LANGUAGE:\n \t\t\treturn object_aclmask(LanguageRelationId, object_oid, roleid, mask, how);\n \t\tcase OBJECT_LARGEOBJECT:\ndiff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c\nindex 14e57adee2..d76b052059 100644\n--- a/src/backend/catalog/namespace.c\n+++ b/src/backend/catalog/namespace.c\n@@ -2936,7 +2936,6 @@ Oid\n LookupExplicitNamespace(const char *nspname, bool missing_ok)\n {\n \tOid\t\t\tnamespaceId;\n-\tAclResult\taclresult;\n \n \t/* check for pg_temp alias */\n \tif (strcmp(nspname, \"pg_temp\") == 0)\n@@ -2955,10 +2954,20 @@ LookupExplicitNamespace(const char *nspname, bool missing_ok)\n \tif (missing_ok && !OidIsValid(namespaceId))\n \t\treturn InvalidOid;\n \n-\taclresult = object_aclcheck(NamespaceRelationId, namespaceId, GetUserId(), ACL_USAGE);\n-\tif (aclresult != ACLCHECK_OK)\n-\t\taclcheck_error(aclresult, OBJECT_SCHEMA,\n-\t\t\t\t\t nspname);\n+\t/*\n+\t * If the publication security is active, bypass the standard security\n+\t * checks.\n+\t */\n+\tif (!publication_security)\n+\t{\n+\t\tAclResult\taclresult;\n+\n+\t\taclresult = object_aclcheck(NamespaceRelationId, namespaceId, 
GetUserId(),\n+\t\t\t\t\t\t\t\t\tACL_USAGE);\n+\t\tif (aclresult != ACLCHECK_OK)\n+\t\t\taclcheck_error(aclresult, OBJECT_SCHEMA,\n+\t\t\t\t\t\t nspname);\n+\t}\n \t/* Schema search hook for this lookup */\n \tInvokeNamespaceSearchHook(namespaceId, true);\n \n@@ -3835,10 +3844,16 @@ recomputeNamespacePath(void)\n \t\t\t\trname = NameStr(((Form_pg_authid) GETSTRUCT(tuple))->rolname);\n \t\t\t\tnamespaceId = get_namespace_oid(rname, true);\n \t\t\t\tReleaseSysCache(tuple);\n+\n+\t\t\t\t/*\n+\t\t\t\t * If the publication security is active, bypass the standard\n+\t\t\t\t * security checks.\n+\t\t\t\t */\n \t\t\t\tif (OidIsValid(namespaceId) &&\n \t\t\t\t\t!list_member_oid(oidlist, namespaceId) &&\n-\t\t\t\t\tobject_aclcheck(NamespaceRelationId, namespaceId, roleid,\n-\t\t\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK &&\n+\t\t\t\t\t(publication_security ||\n+\t\t\t\t\t object_aclcheck(NamespaceRelationId, namespaceId, roleid,\n+\t\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK) &&\n \t\t\t\t\tInvokeNamespaceSearchHook(namespaceId, false))\n \t\t\t\t\toidlist = lappend_oid(oidlist, namespaceId);\n \t\t\t}\n@@ -3865,8 +3880,9 @@ recomputeNamespacePath(void)\n \t\t\tnamespaceId = get_namespace_oid(curname, true);\n \t\t\tif (OidIsValid(namespaceId) &&\n \t\t\t\t!list_member_oid(oidlist, namespaceId) &&\n-\t\t\t\tobject_aclcheck(NamespaceRelationId, namespaceId, roleid,\n-\t\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK &&\n+\t\t\t\t(publication_security ||\n+\t\t\t\t object_aclcheck(NamespaceRelationId, namespaceId, roleid,\n+\t\t\t\t\t\t\t\t ACL_USAGE) == ACLCHECK_OK) &&\n \t\t\t\tInvokeNamespaceSearchHook(namespaceId, false))\n \t\t\t\toidlist = lappend_oid(oidlist, namespaceId);\n \t\t}\ndiff --git a/src/backend/catalog/objectaddress.c b/src/backend/catalog/objectaddress.c\nindex 2f688166e1..31e7599111 100644\n--- a/src/backend/catalog/objectaddress.c\n+++ b/src/backend/catalog/objectaddress.c\n@@ -587,7 +587,7 @@ static const ObjectPropertyType ObjectProperty[] =\n 
\t\tAnum_pg_publication_pubname,\n \t\tInvalidAttrNumber,\n \t\tAnum_pg_publication_pubowner,\n-\t\tInvalidAttrNumber,\n+\t\tAnum_pg_publication_pubacl,\n \t\tOBJECT_PUBLICATION,\n \t\ttrue\n \t},\ndiff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c\nindex 7f6024b7a5..93793b1fa4 100644\n--- a/src/backend/catalog/pg_publication.c\n+++ b/src/backend/catalog/pg_publication.c\n@@ -1069,9 +1069,11 @@ GetPublicationRelationMapping(Oid pubid, Oid relid,\n \t\t*qual_isnull = true;\n \t}\n }\n+\n /*\n * Pick those publications from a list which should actually be used to\n- * publish given relation and return them.\n+ * publish given relation, check their USAGE privilege is needed and return\n+ * them.\n *\n * If publish_as_relid_p is passed, the relation whose tuple descriptor should\n * be used to publish the data is stored in *publish_as_relid_p.\n@@ -1165,6 +1167,22 @@ GetEffectiveRelationPublications(Oid relid, List *publications,\n \t\t\t\tpublish = true;\n \t\t}\n \n+\t\t/*\n+\t\t * Check privileges before we use any information of the\n+\t\t * publication.\n+\t\t */\n+\t\tif (publication_security && publish)\n+\t\t{\n+\t\t\tOid\t\t\troleid = GetUserId();\n+\t\t\tAclResult\taclresult;\n+\n+\t\t\taclresult = object_aclcheck(PublicationRelationId, pub->oid,\n+\t\t\t\t\t\t\t\t\t\troleid, ACL_USAGE);\n+\t\t\tif (aclresult != ACLCHECK_OK)\n+\t\t\t\taclcheck_error(aclresult, OBJECT_PUBLICATION,\n+\t\t\t\t\t\t\t get_publication_name(pub->oid, false));\n+\t\t}\n+\n \t\t/*\n \t\t * If the relation is to be published, determine actions to publish,\n \t\t * and list of columns, if appropriate.\ndiff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\nindex 8edc2c19f6..6504a27771 100644\n--- a/src/backend/commands/copy.c\n+++ b/src/backend/commands/copy.c\n@@ -41,6 +41,8 @@\n #include \"utils/rel.h\"\n #include \"utils/rls.h\"\n \n+static bool isReplicationUser(void);\n+\n /*\n *\t DoCopy executes the SQL COPY statement\n *\n@@ 
-71,6 +73,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \tOid\t\t\trelid;\n \tRawStmt *query = NULL;\n \tNode\t *whereClause = NULL;\n+\tList\t\t*publication_names = NIL;\n \n \t/*\n \t * Disallow COPY to/from file or program except to users with the\n@@ -105,14 +108,23 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\t}\n \t}\n \n+\t/*\n+\t * It seems more useful to tell the user immediately that something is\n+\t * wrong about the use of the PUBLICATION_NAMES option than to complain\n+\t * about missing SELECT privilege below: whoever is authorized to use this\n+\t * option shouldn't need the SELECT privilege at all. Therefore check the\n+\t * PUBLICATION_NAMES option earlier than the other options. XXX Shouldn't\n+\t * we check all the options here anyway?\n+\t */\n+\tpublication_names = ProcessCopyToPublicationOptions(pstate,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tstmt->options,\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tstmt->is_from);\n+\n \tif (stmt->relation)\n \t{\n \t\tLOCKMODE\tlockmode = is_from ? RowExclusiveLock : AccessShareLock;\n \t\tParseNamespaceItem *nsitem;\n \t\tRTEPermissionInfo *perminfo;\n-\t\tTupleDesc\ttupDesc;\n-\t\tList\t *attnums;\n-\t\tListCell *cur;\n \n \t\tAssert(!stmt->query);\n \n@@ -127,6 +139,14 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\tperminfo = nsitem->p_perminfo;\n \t\tperminfo->requiredPerms = (is_from ? ACL_INSERT : ACL_SELECT);\n \n+\t\t/*\n+\t\t * The access by a replication user is controlled by the publication\n+\t\t * privileges, ACL_SELECT is not required. 
The actual checks of the\n+\t\t * publication privileges will take place later.\n+\t\t */\n+\t\tif (!is_from && publication_security)\n+\t\t\tperminfo->requiredPerms &= ~ACL_SELECT;\n+\n \t\tif (stmt->whereClause)\n \t\t{\n \t\t\t/* add nsitem to query namespace */\n@@ -147,19 +167,31 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\t\twhereClause = (Node *) make_ands_implicit((Expr *) whereClause);\n \t\t}\n \n-\t\ttupDesc = RelationGetDescr(rel);\n-\t\tattnums = CopyGetAttnums(tupDesc, rel, stmt->attlist);\n-\t\tforeach(cur, attnums)\n+\t\t/*\n+\t\t * If publication row filters need to be applied, the query form of\n+\t\t * COPY TO is used, so the permissions will be checked by the\n+\t\t * executor. Otherwise check the permissions now.\n+\t\t */\n+\t\tif (publication_names == NIL)\n \t\t{\n-\t\t\tint\t\t\tattno;\n-\t\t\tBitmapset **bms;\n+\t\t\tTupleDesc\ttupDesc;\n+\t\t\tList\t *attnums;\n+\t\t\tListCell *cur;\n \n-\t\t\tattno = lfirst_int(cur) - FirstLowInvalidHeapAttributeNumber;\n-\t\t\tbms = is_from ? &perminfo->insertedCols : &perminfo->selectedCols;\n+\t\t\ttupDesc = RelationGetDescr(rel);\n+\t\t\tattnums = CopyGetAttnums(tupDesc, rel, stmt->attlist);\n+\t\t\tforeach(cur, attnums)\n+\t\t\t{\n+\t\t\t\tint\t\t\tattno;\n+\t\t\t\tBitmapset **bms;\n \n-\t\t\t*bms = bms_add_member(*bms, attno);\n+\t\t\t\tattno = lfirst_int(cur) - FirstLowInvalidHeapAttributeNumber;\n+\t\t\t\tbms = is_from ? 
&perminfo->insertedCols : &perminfo->selectedCols;\n+\n+\t\t\t\t*bms = bms_add_member(*bms, attno);\n+\t\t\t}\n+\t\t\tExecCheckPermissions(pstate->p_rtable, list_make1(perminfo), true);\n \t\t}\n-\t\tExecCheckPermissions(pstate->p_rtable, list_make1(perminfo), true);\n \n \t\t/*\n \t\t * Permission check for row security policies.\n@@ -184,6 +216,7 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \t\t\t\t\t\t errhint(\"Use INSERT statements instead.\")));\n \n \t\t\tquery = CreateCopyToQuery(stmt, rel, stmt_location, stmt_len);\n+\n \t\t\t/*\n \t\t\t * Close the relation for now, but keep the lock on it to prevent\n \t\t\t * changes between now and when we start the query-based COPY.\n@@ -232,10 +265,24 @@ DoCopy(ParseState *pstate, const CopyStmt *stmt,\n \telse\n \t{\n \t\tCopyToState cstate;\n+\t\tRelation\trel_loc = rel;\n \n-\t\tcstate = BeginCopyTo(pstate, rel, query, relid,\n+\t\t/*\n+\t\t * If publication row filters need to be applied, use the \"COPY query\n+\t\t * TO ...\" form of the command.\n+\t\t */\n+\t\tif (rel && publication_names)\n+\t\t{\n+\t\t\tquery = CreateCopyToQuery(stmt, rel, stmt_location, stmt_len);\n+\n+\t\t\t/* BeginCopyTo() should only receive the query. 
*/\n+\t\t\trel_loc = NULL;\n+\t\t}\n+\n+\t\tcstate = BeginCopyTo(pstate, rel_loc, query, relid,\n \t\t\t\t\t\t\t stmt->filename, stmt->is_program,\n-\t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options);\n+\t\t\t\t\t\t\t NULL, stmt->attlist, stmt->options,\n+\t\t\t\t\t\t\t publication_names);\n \t\t*processed = DoCopyTo(cstate);\t/* copy from database to file */\n \t\tEndCopyTo(cstate);\n \t}\n@@ -482,6 +529,13 @@ ProcessCopyOptions(ParseState *pstate,\n \t\t\t\t\t\t\t\tdefel->defname),\n \t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n \t\t}\n+\t\telse if (strcmp(defel->defname, \"publication_names\") == 0)\n+\t\t{\n+\t\t\t/*\n+\t\t\t * ProcessCopyToPublicationOptions() should have been checked this\n+\t\t\t * already.\n+\t\t\t */\n+\t\t}\n \t\telse\n \t\t\tereport(ERROR,\n \t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n@@ -679,6 +733,78 @@ ProcessCopyOptions(ParseState *pstate,\n \t}\n }\n \n+/*\n+ * Check the PUBLICATION_NAMES option of the \"COPY TO\" command.\n+ *\n+ * This option is checked separate from others.\n+ */\n+List *\n+ProcessCopyToPublicationOptions(ParseState *pstate, List *options,\n+\t\t\t\t\t\t\t\tbool is_from)\n+{\n+\tListCell *option;\n+\tbool\tfound = false;\n+\tList\t*result = NIL;\n+\n+\t/* Extract options from the statement node tree */\n+\tforeach(option, options)\n+\t{\n+\t\tDefElem *defel = lfirst_node(DefElem, option);\n+\n+\t\tif (strcmp(defel->defname, \"publication_names\") == 0)\n+\t\t{\n+\t\t\tif (is_from)\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\terrmsg(\"PUBLICATION_NAMES option only available using COPY TO\"));\n+\n+\t\t\tif (result)\n+\t\t\t\terrorConflictingDefElem(defel, pstate);\n+\t\t\tfound = true;\n+\t\t\tif (defel->arg == NULL || IsA(defel->arg, List))\n+\t\t\t\tresult = castNode(List, defel->arg);\n+\t\t\telse\n+\t\t\t\tereport(ERROR,\n+\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t\t\t errmsg(\"argument to option \\\"%s\\\" must be a list of publication 
names\",\n+\t\t\t\t\t\t\t\tdefel->defname),\n+\t\t\t\t\t\t parser_errposition(pstate, defel->location)));\n+\t\t}\n+\t}\n+\n+\t/*\n+\t * If the publication security is enabled, subscriber must send the list\n+\t * of publication in order to tell which subset of the data it is\n+\t * authorized to receive.\n+\t *\n+\t * publication_security does not affect sessions of non-replication users.\n+\t */\n+\tif (!found && publication_security && isReplicationUser())\n+\t{\n+\t\t/*\n+\t\t * This probably means that an old version of subscriber tries to get\n+\t\t * data from a secured publisher.\n+\t\t */\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"publication security requires the PUBLICATION_NAMES option\")));\n+\t}\n+\n+\t/*\n+\t * The option does only make sense in the context of (logical)\n+\t * replication. We could allow it for non-replication users too, but then\n+\t * we'd have to require it publication_security is on like above and thus\n+\t * break existing client code.\n+\t */\n+\tif (found && !isReplicationUser())\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"PUBLICATION_NAMES may only be used by roles with the REPLICATION privilege\")));\n+\n+\tif (found && result == NIL)\n+\t\tereport(ERROR,\n+\t\t\t\t(errmsg(\"the value of the PUBLICATION_NAMES option must not be empty\")));\n+\n+\treturn result;\n+}\n+\n /*\n * CopyGetAttnums - build an integer list of attnums to be copied\n *\n@@ -769,3 +895,17 @@ CopyGetAttnums(TupleDesc tupDesc, Relation rel, List *attnamelist)\n \n \treturn attnums;\n }\n+\n+/*\n+ * Check whether the current session can use the USAGE privilege on\n+ * publications instead of the SELECT privileges on tables.\n+ *\n+ * Superuser makes the test pass too so that subscriptions which connect to\n+ * the publisher as superuser work fine.\n+ */\n+static bool\n+isReplicationUser(void)\n+{\n+\treturn has_rolreplication(GetUserId()) || superuser();\n+\n+}\ndiff --git a/src/backend/commands/copyto.c b/src/backend/commands/copyto.c\nindex 
af0cdef158..fd508b592f 100644\n--- a/src/backend/commands/copyto.c\n+++ b/src/backend/commands/copyto.c\n@@ -34,13 +34,18 @@\n #include \"miscadmin.h\"\n #include \"nodes/makefuncs.h\"\n #include \"optimizer/optimizer.h\"\n+#include \"parser/parsetree.h\"\n+#include \"parser/parse_relation.h\"\n #include \"pgstat.h\"\n #include \"rewrite/rewriteHandler.h\"\n+#include \"rewrite/rewriteManip.h\"\n #include \"storage/fd.h\"\n #include \"tcop/tcopprot.h\"\n+#include \"utils/builtins.h\"\n #include \"utils/lsyscache.h\"\n #include \"utils/memutils.h\"\n #include \"utils/partcache.h\"\n+#include \"utils/acl.h\"\n #include \"utils/rel.h\"\n #include \"utils/snapmgr.h\"\n \n@@ -132,6 +137,10 @@ static void CopySendEndOfRow(CopyToState cstate);\n static void CopySendInt32(CopyToState cstate, int32 val);\n static void CopySendInt16(CopyToState cstate, int16 val);\n \n+static void AddPublicationFiltersToQuery(CopyToState cstate, Query *query,\n+\t\t\t\t\t\t\t\t\t\t List *publication_names);\n+static Node *GetPublicationFilters(Relation rel, List *publications,\n+\t\t\t\t\t\t\t\t int varno);\n \n /*\n * Send copy start/stop messages for frontend copies. These have changed\n@@ -439,6 +448,7 @@ CreateCopyToQuery(const CopyStmt *stmt, Relation rel, int stmt_location,\n * 'data_dest_cb': Callback that processes the output data\n * 'attnamelist': List of char *, columns to include. NIL selects all cols.\n * 'options': List of DefElem. 
See copy_opt_item in gram.y for selections.\n+ * 'publication_names': PUBLICATION_NAMES option (also contained in 'options')\n *\n * Returns a CopyToState, to be passed to DoCopyTo() and related functions.\n */\n@@ -451,7 +461,8 @@ BeginCopyTo(ParseState *pstate,\n \t\t\tbool is_program,\n \t\t\tcopy_data_dest_cb data_dest_cb,\n \t\t\tList *attnamelist,\n-\t\t\tList *options)\n+\t\t\tList *options,\n+\t\t\tList *publication_names)\n {\n \tCopyToState cstate;\n \tbool\t\tpipe = (filename == NULL && data_dest_cb == NULL);\n@@ -606,6 +617,12 @@ BeginCopyTo(ParseState *pstate,\n \t\t\t\t\t errmsg(\"COPY query must have a RETURNING clause\")));\n \t\t}\n \n+\t\t/*\n+\t\t * If the subscriber passed the publication names, use them.\n+\t\t */\n+\t\tif (publication_names)\n+\t\t\tAddPublicationFiltersToQuery(cstate, query, publication_names);\n+\n \t\t/* plan the query */\n \t\tplan = pg_plan_query(query, pstate->p_sourcetext,\n \t\t\t\t\t\t\t CURSOR_OPT_PARALLEL_OK, NULL);\n@@ -1376,3 +1393,202 @@ CreateCopyDestReceiver(void)\n \n \treturn (DestReceiver *) self;\n }\n+\n+/*\n+ * For each table in the query add the row filters of the related publication\n+ * to the WHERE clause. While doing so, check if the current user has the\n+ * USAGE privilege on the publications.\n+ */\n+static void\n+AddPublicationFiltersToQuery(CopyToState cstate, Query *query,\n+\t\t\t\t\t\t\t List *publication_names)\n+{\n+\tList\t*publications = NIL;\n+\tIndex rtindex;\n+\tFromExpr *from_expr;\n+\tListCell\t*lc;\n+\n+\tAssert(publication_names);\n+\n+\t/* Convert the list of names to a list of OIDs. 
*/\n+\tforeach(lc, publication_names)\n+\t{\n+\t\tchar\t*pubname = strVal(lfirst(lc));\n+\t\tOid\t\tpubid;\n+\t\tPublication\t*pub;\n+\n+\t\tpubid = get_publication_oid(pubname, true);\n+\t\tif (pubid == InvalidOid)\n+\t\t{\n+\t\t\tereport(WARNING,\n+\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n+\t\t\t\t errmsg(\"publication \\\"%s\\\" does not exist\", pubname)));\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tpub = GetPublication(pubid);\n+\n+\t\tpublications = lappend(publications, pub);\n+\t}\n+\n+\tif (publications == NIL)\n+\t\tereport(ERROR, errmsg(\"no valid publication received\"));\n+\n+\t/*\n+\t * If the query references at least one table, construct or adjust the\n+\t * WHERE clause according to the publications.\n+\t */\n+\tfrom_expr = query->jointree;\n+\n+\trtindex = 1;\n+\tforeach(lc, query->rtable)\n+\t{\n+\t\tRangeTblEntry *rte;\n+\t\tRelation\tqrel;\n+\t\tList\t*pubs_matched;\n+\t\tNode\t*quals;\n+\n+\t\trte = lfirst_node(RangeTblEntry, lc);\n+\n+\t\t/*\n+\t\t * NoLock because the relation should already be locked due to the\n+\t\t * prior rewriting.\n+\t\t */\n+\t\tqrel = relation_open(rte->relid, NoLock);\n+\n+\t\t/*\n+\t\t * Clear ACL_SELECT on each RTE entry if the ACL_USAGE permission on\n+\t\t * publications should control the access, see below.\n+\t\t */\n+\t\tif (publication_security)\n+\t\t{\n+\t\t\tRTEPermissionInfo *perminfo;\n+\n+\t\t\tperminfo = getRTEPermissionInfo(query->rteperminfos, rte);\n+\t\t\tperminfo->requiredPerms &= ~ACL_SELECT;\n+\t\t}\n+\n+\t\t/*\n+\t\t * Retrieve the publications relevant to this relation, and if needed,\n+\t\t * check if the current user has the USAGE privilege on them.\n+\t\t */\n+\t\tpubs_matched = GetEffectiveRelationPublications(RelationGetRelid(qrel),\n+\t\t\t\t\t\t\t\t\t\t\t\t\t\tpublications, NULL, NULL);\n+\t\tif (pubs_matched == NIL)\n+\t\t\tereport(ERROR,\n+\t\t\t\t\t(errmsg(\"no publication for relation \\\"%s\\\"\",\n+\t\t\t\t\t\t\tget_rel_name(RelationGetRelid(qrel)))));\n+\n+\t\t/* Range table 
implies there should be a FROM list. */\n+\t\tAssert(from_expr && from_expr->fromlist);\n+\n+\t\t/*\n+\t\t * Use the publication filters to construct the (additional) filter\n+\t\t * expression for this relation.\n+\t\t */\n+\t\tquals = GetPublicationFilters(qrel, pubs_matched, rtindex);\n+\t\tif (quals)\n+\t\t{\n+\t\t\tif (from_expr->quals == NULL)\n+\t\t\t{\n+\t\t\t\t/* Assign a new WHERE clause to the query. */\n+\t\t\t\tfrom_expr->quals = quals;\n+\t\t\t}\n+\t\t\telse\n+\t\t\t{\n+\t\t\t\tList\t*new_quals;\n+\n+\t\t\t\t/*\n+\t\t\t\t * AND the filter for this relation to the existing WHERE\n+\t\t\t\t * clause.\n+\t\t\t\t */\n+\t\t\t\tnew_quals = list_make2(quals, from_expr->quals);\n+\t\t\t\tfrom_expr->quals = (Node *) make_andclause(new_quals);\n+\t\t\t}\n+\t\t}\n+\n+\t\tlist_free(pubs_matched);\n+\t\trelation_close(qrel, NoLock);\n+\t\trtindex++;\n+\t}\n+}\n+\n+/*\n+ * Construct WHERE clause for a relation according to the given list of\n+ * publications.\n+ *\n+ * Return NULL if at least one of the publications has no filter.\n+ */\n+static Node *\n+GetPublicationFilters(Relation rel, List *publications, int varno)\n+{\n+\tOid\t\trelid = RelationGetRelid(rel);\n+\tList\t *filters = NIL;\n+\tNode\t *result = NULL;\n+\tListCell *lc;\n+\tbool\t\tisvarlena;\n+\tFmgrInfo\tfmgrinfo;\n+\tOid\t\t\toutfunc;\n+\n+\tAssert(publications);\n+\n+\t/* Make sure we're ready to call the output function for the node values. */\n+\tgetTypeOutputInfo(PG_NODE_TREEOID, &outfunc, &isvarlena);\n+\tAssert(isvarlena);\n+\tfmgr_info(outfunc, &fmgrinfo);\n+\n+\t/* Retrieve the publication filters. */\n+\tforeach(lc, publications)\n+\t{\n+\t\tPublication\t\t*pub = (Publication *) lfirst(lc);\n+\t\tDatum\tattrs, qual;\n+\t\tbool\tattrs_isnull, qual_isnull;\n+\t\tchar\t *nodeStr;\n+\t\tNode\t *node;\n+\n+\t\t/* Get the filter expression. 
*/\n+\t\tGetPublicationRelationMapping(pub->oid, relid, &attrs, &attrs_isnull,\n+\t\t\t\t\t\t\t\t\t &qual, &qual_isnull);\n+\n+\t\t/*\n+\t\t * A single publication w/o expression means that the whole table\n+\t\t * should be published.\n+\t\t */\n+\t\tif (qual_isnull)\n+\t\t{\n+\t\t\tif (filters)\n+\t\t\t{\n+\t\t\t\tlist_free_deep(filters);\n+\t\t\t\tfilters = NIL;\n+\t\t\t}\n+\n+\t\t\tbreak;\n+\t\t}\n+\n+\t\t/* Get the filter expression and add it to the list. */\n+\t\tnodeStr = OutputFunctionCall(&fmgrinfo, qual);\n+\t\tnode = stringToNode(nodeStr);\n+\t\tpfree(nodeStr);\n+\n+\t\t/*\n+\t\t * Adjust varno so that the expression references the correct\n+\t\t * range table entry.\n+\t\t */\n+\t\tChangeVarNodes(node, 1, varno, 0);\n+\n+\t\t/*\n+\t\t * XXX Is it worth checking for duplicate expressions in the list?\n+\t\t */\n+\t\tfilters = lappend(filters, node);\n+\t}\n+\n+\tif (filters)\n+\t{\n+\t\tif (list_length(filters) > 1)\n+\t\t\tresult = (Node *) make_orclause(filters);\n+\t\telse\n+\t\t\tresult = (Node *) linitial(filters);\n+\t}\n+\n+\treturn result;\n+}\ndiff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c\nindex f4ba572697..d9652604c7 100644\n--- a/src/backend/commands/publicationcmds.c\n+++ b/src/backend/commands/publicationcmds.c\n@@ -800,6 +800,8 @@ CreatePublication(ParseState *pstate, CreatePublicationStmt *stmt)\n \t\tBoolGetDatum(pubactions.pubtruncate);\n \tvalues[Anum_pg_publication_pubviaroot - 1] =\n \t\tBoolGetDatum(publish_via_partition_root);\n+\tvalues[Anum_pg_publication_pubowner - 1] = ObjectIdGetDatum(GetUserId());\n+\tnulls[Anum_pg_publication_pubacl - 1] = true;\n \n \ttup = heap_form_tuple(RelationGetDescr(rel), values, nulls);\n \ndiff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c\nindex b32f419176..663bfc034c 100644\n--- a/src/backend/executor/execMain.c\n+++ b/src/backend/executor/execMain.c\n@@ -613,7 +613,9 @@ ExecCheckOneRelPerms(RTEPermissionInfo *perminfo)\n 
\tOid\t\t\trelOid = perminfo->relid;\n \n \trequiredPerms = perminfo->requiredPerms;\n-\tAssert(requiredPerms != 0);\n+\n+\tif (requiredPerms == 0)\n+\t\treturn true;\n \n \t/*\n \t * userid to check as: current user unless we have a setuid indication.\ndiff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\nindex a0138382a1..fc70aa2057 100644\n--- a/src/backend/parser/gram.y\n+++ b/src/backend/parser/gram.y\n@@ -7655,6 +7655,14 @@ privilege_target:\n \t\t\t\t\tn->objs = $2;\n \t\t\t\t\t$$ = n;\n \t\t\t\t}\n+\t\t\t| PUBLICATION name_list\n+\t\t\t\t{\n+\t\t\t\t\tPrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget));\n+\t\t\t\t\tn->targtype = ACL_TARGET_OBJECT;\n+\t\t\t\t\tn->objtype = OBJECT_PUBLICATION;\n+\t\t\t\t\tn->objs = $2;\n+\t\t\t\t\t$$ = n;\n+\t\t\t\t}\n \t\t\t| SCHEMA name_list\n \t\t\t\t{\n \t\t\t\t\tPrivTarget *n = (PrivTarget *) palloc(sizeof(PrivTarget));\ndiff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c\nindex 07eea504ba..2a8cfc5be0 100644\n--- a/src/backend/replication/logical/tablesync.c\n+++ b/src/backend/replication/logical/tablesync.c\n@@ -753,6 +753,27 @@ copy_read_data(void *outbuf, int minread, int maxread)\n }\n \n \n+/*\n+ * Return a comma-separated list of publications associated with the current\n+ * subscription.\n+ */\n+static char *\n+get_publication_names(void)\n+{\n+\tStringInfoData buf;\n+\tListCell *lc;\n+\n+\tinitStringInfo(&buf);\n+\tforeach(lc, MySubscription->publications)\n+\t{\n+\t\tif (foreach_current_index(lc) > 0)\n+\t\t\tappendStringInfoString(&buf, \", \");\n+\t\tappendStringInfoString(&buf, quote_literal_cstr(strVal(lfirst(lc))));\n+\t}\n+\n+\treturn buf.data;\n+}\n+\n /*\n * Get information about remote relation in similar fashion the RELATION\n * message provides during replication. 
This function also returns the relation\n@@ -770,7 +791,6 @@ fetch_remote_table_info(char *nspname, char *relname,\n \tOid\t\t\tqualRow[] = {TEXTOID};\n \tbool\t\tisnull;\n \tint\t\t\tnatt;\n-\tListCell *lc;\n \tBitmapset *included_cols = NULL;\n \n \tlrel->nspname = nspname;\n@@ -812,7 +832,6 @@ fetch_remote_table_info(char *nspname, char *relname,\n \tExecDropSingleTupleTableSlot(slot);\n \twalrcv_clear_result(res);\n \n-\n \t/*\n \t * Get column lists for each relation.\n \t *\n@@ -824,15 +843,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\tWalRcvExecResult *pubres;\n \t\tTupleTableSlot *tslot;\n \t\tOid\t\t\tattrsRow[] = {INT2VECTOROID};\n-\t\tStringInfoData pub_names;\n-\n-\t\tinitStringInfo(&pub_names);\n-\t\tforeach(lc, MySubscription->publications)\n-\t\t{\n-\t\t\tif (foreach_current_index(lc) > 0)\n-\t\t\t\tappendStringInfoString(&pub_names, \", \");\n-\t\t\tappendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));\n-\t\t}\n+\t\tchar\t *pub_names = get_publication_names();\n \n \t\t/*\n \t\t * Fetch info about column lists for the relation (from all the\n@@ -849,7 +860,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\t\t\t\t\t \" WHERE gpt.relid = %u AND c.oid = gpt.relid\"\n \t\t\t\t\t\t \" AND p.pubname IN ( %s )\",\n \t\t\t\t\t\t lrel->remoteid,\n-\t\t\t\t\t\t pub_names.data);\n+\t\t\t\t\t\t pub_names);\n \n \t\tpubres = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data,\n \t\t\t\t\t\t\t lengthof(attrsRow), attrsRow);\n@@ -904,8 +915,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\tExecDropSingleTupleTableSlot(tslot);\n \n \t\twalrcv_clear_result(pubres);\n-\n-\t\tpfree(pub_names.data);\n+\t\tpfree(pub_names);\n \t}\n \n \t/*\n@@ -986,6 +996,18 @@ fetch_remote_table_info(char *nspname, char *relname,\n \n \twalrcv_clear_result(res);\n \n+\tlrel->pubnames = NULL;\n+\tif (walrcv_server_version(LogRepWorkerWalRcvConn) >= 160000)\n+\t{\n+\t\t/*\n+\t\t * If the publication ACL is implemented, the 
publisher is responsible\n+\t\t * for checking. All we need to do is to pass the publication names.\n+\t\t * The publisher should only return the data matching these\n+\t\t * publications and only check the ACLs of these.\n+\t\t */\n+\t\tlrel->pubnames = get_publication_names();\n+\t}\n+\n \t/*\n \t * Get relation's row filter expressions. DISTINCT avoids the same\n \t * expression of a table in multiple publications from being included\n@@ -1005,21 +1027,9 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t * 3) one of the subscribed publications is declared as TABLES IN SCHEMA\n \t * that includes this relation\n \t */\n-\tif (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)\n+\telse if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)\n \t{\n-\t\tStringInfoData pub_names;\n-\n-\t\t/* Build the pubname list. */\n-\t\tinitStringInfo(&pub_names);\n-\t\tforeach(lc, MySubscription->publications)\n-\t\t{\n-\t\t\tchar\t *pubname = strVal(lfirst(lc));\n-\n-\t\t\tif (foreach_current_index(lc) > 0)\n-\t\t\t\tappendStringInfoString(&pub_names, \", \");\n-\n-\t\t\tappendStringInfoString(&pub_names, quote_literal_cstr(pubname));\n-\t\t}\n+\t\tchar\t *pub_names = get_publication_names();\n \n \t\t/* Check for row filters. */\n \t\tresetStringInfo(&cmd);\n@@ -1030,7 +1040,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\t\t\t\t\t \" WHERE gpt.relid = %u\"\n \t\t\t\t\t\t \" AND p.pubname IN ( %s )\",\n \t\t\t\t\t\t lrel->remoteid,\n-\t\t\t\t\t\t pub_names.data);\n+\t\t\t\t\t\t pub_names);\n \n \t\tres = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 1, qualRow);\n \n@@ -1069,6 +1079,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n \t\tExecDropSingleTupleTableSlot(slot);\n \n \t\twalrcv_clear_result(res);\n+\t\tpfree(pub_names);\n \t}\n \n \tpfree(cmd.data);\n@@ -1105,7 +1116,12 @@ copy_table(Relation rel)\n \t/* Start copy on the publisher. 
*/\n \tinitStringInfo(&cmd);\n \n-\t/* Regular table with no row filter */\n+\t/*\n+\t * Regular table with no row filter.\n+\t *\n+\t * Note that \"qual\" can also be NIL because the publisher is supposed\n+\t * to handle the row filters, so we do not check them here.\n+\t */\n \tif (lrel.relkind == RELKIND_RELATION && qual == NIL)\n \t{\n \t\tappendStringInfo(&cmd, \"COPY %s (\",\n@@ -1122,8 +1138,6 @@ copy_table(Relation rel)\n \n \t\t\tappendStringInfoString(&cmd, quote_identifier(lrel.attnames[i]));\n \t\t}\n-\n-\t\tappendStringInfoString(&cmd, \") TO STDOUT\");\n \t}\n \telse\n \t{\n@@ -1165,9 +1179,20 @@ copy_table(Relation rel)\n \t\t\t}\n \t\t\tlist_free_deep(qual);\n \t\t}\n+\t}\n+\n+\tappendStringInfoString(&cmd, \") TO STDOUT\");\n \n-\t\tappendStringInfoString(&cmd, \") TO STDOUT\");\n \t}\n+\n+\tif (lrel.pubnames)\n+\t{\n+\t\t/*\n+\t\t * Tell the publisher which publications we are interested in.\n+\t\t * Publishers of recent versions need this information to construct\n+\t\t * the query filter and to check publication privileges.\n+\t\t */\n+\t\tappendStringInfo(&cmd, \" (PUBLICATION_NAMES (%s)) \", lrel.pubnames);\n \t}\n+\n \tres = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 0, NULL);\n \tpfree(cmd.data);\n \tif (res->status != WALRCV_OK_COPY_OUT)\ndiff --git a/src/backend/replication/pgoutput/pgoutput.c b/src/backend/replication/pgoutput/pgoutput.c\nindex 21b8b2944e..6c2c5add1c 100644\n--- a/src/backend/replication/pgoutput/pgoutput.c\n+++ b/src/backend/replication/pgoutput/pgoutput.c\n@@ -14,6 +14,7 @@\n \n #include \"access/tupconvert.h\"\n #include \"catalog/partition.h\"\n+#include \"catalog/pg_authid.h\"\n #include \"catalog/pg_publication.h\"\n #include \"catalog/pg_publication_rel.h\"\n #include \"catalog/pg_subscription.h\"\n@@ -21,6 +22,7 @@\n #include \"commands/subscriptioncmds.h\"\n #include \"executor/executor.h\"\n #include \"fmgr.h\"\n+#include \"miscadmin.h\"\n #include \"nodes/makefuncs.h\"\n #include 
\"optimizer/optimizer.h\"\n #include \"parser/parse_relation.h\"\n@@ -28,6 +30,7 @@\n #include \"replication/logicalproto.h\"\n #include \"replication/origin.h\"\n #include \"replication/pgoutput.h\"\n+#include \"utils/acl.h\"\n #include \"utils/builtins.h\"\n #include \"utils/inval.h\"\n #include \"utils/lsyscache.h\"\ndiff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c\nindex 75e8363e24..694a9828bb 100644\n--- a/src/backend/replication/walsender.c\n+++ b/src/backend/replication/walsender.c\n@@ -125,6 +125,12 @@ int\t\t\twal_sender_timeout = 60 * 1000; /* maximum time to send one WAL\n \t\t\t\t\t\t\t\t\t\t\t * data message */\n bool\t\tlog_replication_commands = false;\n \n+/*\n+ * Should USAGE privilege on publications be checked? Defaults to false so\n+ * that server upgrade does not break existing logical replication.\n+ */\n+bool\t\tpublication_security = false;\n+\n /*\n * State for WalSndWakeupRequest\n */\ndiff --git a/src/backend/utils/adt/acl.c b/src/backend/utils/adt/acl.c\nindex 8f7522d103..8c318676e1 100644\n--- a/src/backend/utils/adt/acl.c\n+++ b/src/backend/utils/adt/acl.c\n@@ -29,6 +29,7 @@\n #include \"catalog/pg_namespace.h\"\n #include \"catalog/pg_parameter_acl.h\"\n #include \"catalog/pg_proc.h\"\n+#include \"catalog/pg_publication.h\"\n #include \"catalog/pg_tablespace.h\"\n #include \"catalog/pg_type.h\"\n #include \"commands/dbcommands.h\"\n@@ -118,6 +119,7 @@ static AclMode convert_tablespace_priv_string(text *priv_type_text);\n static Oid\tconvert_type_name(text *typename);\n static AclMode convert_type_priv_string(text *priv_type_text);\n static AclMode convert_parameter_priv_string(text *priv_text);\n+static AclMode convert_publication_priv_string(text *priv_type_text);\n static AclMode convert_role_priv_string(text *priv_type_text);\n static AclResult pg_role_aclcheck(Oid role_oid, Oid roleid, AclMode mode);\n \n@@ -844,6 +846,10 @@ acldefault(ObjectType objtype, Oid ownerId)\n \t\t\tworld_default = 
ACL_NO_RIGHTS;\n \t\t\towner_default = ACL_ALL_RIGHTS_PARAMETER_ACL;\n \t\t\tbreak;\n+\t\tcase OBJECT_PUBLICATION:\n+\t\t\tworld_default = ACL_USAGE;\n+\t\t\towner_default = ACL_ALL_RIGHTS_PUBLICATION;\n+\t\t\tbreak;\n \t\tdefault:\n \t\t\telog(ERROR, \"unrecognized object type: %d\", (int) objtype);\n \t\t\tworld_default = ACL_NO_RIGHTS;\t/* keep compiler quiet */\n@@ -929,6 +935,9 @@ acldefault_sql(PG_FUNCTION_ARGS)\n \t\tcase 'p':\n \t\t\tobjtype = OBJECT_PARAMETER_ACL;\n \t\t\tbreak;\n+\t\tcase 'P':\n+\t\t\tobjtype = OBJECT_PUBLICATION;\n+\t\t\tbreak;\n \t\tcase 't':\n \t\t\tobjtype = OBJECT_TABLESPACE;\n \t\t\tbreak;\n@@ -4558,6 +4567,48 @@ convert_parameter_priv_string(text *priv_text)\n \treturn convert_any_priv_string(priv_text, parameter_priv_map);\n }\n \n+/*\n+ * has_publication_privilege_id\n+ *\t\tCheck user privileges on a publication given\n+ *\t\tpublication oid and text priv name.\n+ *\t\tcurrent_user is assumed\n+ */\n+Datum\n+has_publication_privilege_id(PG_FUNCTION_ARGS)\n+{\n+\tOid\t\t\tpuboid = PG_GETARG_OID(0);\n+\ttext\t *priv_type_text = PG_GETARG_TEXT_PP(1);\n+\tOid\t\t\troleid;\n+\tAclMode\t\tmode;\n+\tAclResult\taclresult;\n+\n+\troleid = GetUserId();\n+\tmode = convert_publication_priv_string(priv_type_text);\n+\n+\tif (!SearchSysCacheExists1(PUBLICATIONOID, ObjectIdGetDatum(puboid)))\n+\t\tPG_RETURN_NULL();\n+\n+\taclresult = object_aclcheck(PublicationRelationId, puboid, roleid, mode);\n+\n+\tPG_RETURN_BOOL(aclresult == ACLCHECK_OK);\n+}\n+\n+/*\n+ * convert_publication_priv_string\n+ *\t\tConvert text string to AclMode value.\n+ */\n+static AclMode\n+convert_publication_priv_string(text *priv_type_text)\n+{\n+\tstatic const priv_map type_priv_map[] = {\n+\t\t{\"USAGE\", ACL_USAGE},\n+\t\t{\"USAGE WITH GRANT OPTION\", ACL_GRANT_OPTION_FOR(ACL_USAGE)},\n+\t\t{NULL, 0}\n+\t};\n+\n+\treturn convert_any_priv_string(priv_type_text, type_priv_map);\n+}\n+\n /*\n * pg_has_role variants\n *\t\tThese are all named \"pg_has_role\" at the SQL 
level.\ndiff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c\nindex 1c0583fe26..90ef11eef9 100644\n--- a/src/backend/utils/misc/guc_tables.c\n+++ b/src/backend/utils/misc/guc_tables.c\n@@ -686,6 +686,8 @@ const char *const config_group_names[] =\n \tgettext_noop(\"Replication / Primary Server\"),\n \t/* REPLICATION_STANDBY */\n \tgettext_noop(\"Replication / Standby Servers\"),\n+\t/* REPLICATION_PUBLISHERS */\n+\tgettext_noop(\"Replication / Publishers\"),\n \t/* REPLICATION_SUBSCRIBERS */\n \tgettext_noop(\"Replication / Subscribers\"),\n \t/* QUERY_TUNING_METHOD */\n@@ -1973,6 +1975,16 @@ struct config_bool ConfigureNamesBool[] =\n \t\tNULL, NULL, NULL\n \t},\n \n+\t{\n+\t\t{\"publication_security\", PGC_SUSET, REPLICATION_PUBLISHERS,\n+\t\t\tgettext_noop(\"Enable publication security.\"),\n+\t\t\tgettext_noop(\"When enabled, the USAGE privilege is needed to access publications.\")\n+\t\t},\n+\t\t&publication_security,\n+\t\tfalse,\n+\t\tNULL, NULL, NULL\n+\t},\n+\n \t/* End-of-list marker */\n \t{\n \t\t{NULL, 0, 0, NULL, NULL}, NULL, false, NULL, NULL, NULL\ndiff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\nindex d06074b86f..720b7157c2 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -353,6 +353,12 @@\n \t\t\t\t\t# retrieve WAL after a failed attempt\n #recovery_min_apply_delay = 0\t\t# minimum delay for applying changes during recovery\n \n+# - Publishers -\n+\n+# These settings are ignored on a subscriber.\n+\n+#publication_security = off\t\t# should publication privileges be checked?\n+\n # - Subscribers -\n \n # These settings are ignored on a publisher.\ndiff --git a/src/bin/pg_dump/dumputils.c b/src/bin/pg_dump/dumputils.c\nindex 079693585c..4b0d1b5d27 100644\n--- a/src/bin/pg_dump/dumputils.c\n+++ b/src/bin/pg_dump/dumputils.c\n@@ -511,6 +511,8 @@ do { \\\n \t\tCONVERT_PRIV('r', \"SELECT\");\n 
\t\tCONVERT_PRIV('w', \"UPDATE\");\n \t}\n+\telse if (strcmp(type, \"PUBLICATION\") == 0)\n+\t\tCONVERT_PRIV('U', \"USAGE\");\n \telse\n \t\tabort();\n \ndiff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c\nindex 2e068c6620..a14c8738ef 100644\n--- a/src/bin/pg_dump/pg_dump.c\n+++ b/src/bin/pg_dump/pg_dump.c\n@@ -4025,6 +4025,8 @@ getPublications(Archive *fout, int *numPublications)\n \tint\t\t\ti_pubdelete;\n \tint\t\t\ti_pubtruncate;\n \tint\t\t\ti_pubviaroot;\n+\tint\t\t\ti_pubacl;\n+\tint\t\t\ti_acldefault;\n \tint\t\t\ti,\n \t\t\t\tntups;\n \n@@ -4039,27 +4041,32 @@ getPublications(Archive *fout, int *numPublications)\n \tresetPQExpBuffer(query);\n \n \t/* Get the publications. */\n-\tif (fout->remoteVersion >= 130000)\n+\tif (fout->remoteVersion >= 150000)\n \t\tappendPQExpBufferStr(query,\n-\t\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n-\t\t\t\t\t\t\t \"p.pubowner, \"\n-\t\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot \"\n-\t\t\t\t\t\t\t \"FROM pg_publication p\");\n+\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n+\t\t\t\t\t\t \"p.pubowner, \"\n+\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot, p.pubacl, acldefault('P', p.pubowner) AS acldefault \"\n+\t\t\t\t\t\t \"FROM pg_publication p\");\n+\telse if (fout->remoteVersion >= 130000)\n+\t\tappendPQExpBuffer(query,\n+\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n+\t\t\t\t\t\t \"p.pubowner, \"\n+\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, p.pubviaroot, '{}' AS pubacl, '{}' AS acldefault \"\n+\t\t\t\t\t\t \"FROM pg_publication p\");\n \telse if (fout->remoteVersion >= 110000)\n \t\tappendPQExpBufferStr(query,\n-\t\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n-\t\t\t\t\t\t\t \"p.pubowner, \"\n-\t\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot \"\n-\t\t\t\t\t\t\t 
\"FROM pg_publication p\");\n+\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n+\t\t\t\t\t\t \"p.pubowner, \"\n+\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, p.pubtruncate, false AS pubviaroot, '{}' AS pubacl, '{}' AS acldefault \"\n+\t\t\t\t\t\t \"FROM pg_publication p\");\n \telse\n \t\tappendPQExpBufferStr(query,\n-\t\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n-\t\t\t\t\t\t\t \"p.pubowner, \"\n-\t\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot \"\n-\t\t\t\t\t\t\t \"FROM pg_publication p\");\n+\t\t\t\t\t\t \"SELECT p.tableoid, p.oid, p.pubname, \"\n+\t\t\t\t\t\t \"p.pubowner, \"\n+\t\t\t\t\t\t \"p.puballtables, p.pubinsert, p.pubupdate, p.pubdelete, false AS pubtruncate, false AS pubviaroot, '{}' AS pubacl, '{}' AS acldefault \"\n+\t\t\t\t\t\t \"FROM pg_publication p\");\n \n \tres = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n-\n \tntups = PQntuples(res);\n \n \ti_tableoid = PQfnumber(res, \"tableoid\");\n@@ -4072,6 +4079,8 @@ getPublications(Archive *fout, int *numPublications)\n \ti_pubdelete = PQfnumber(res, \"pubdelete\");\n \ti_pubtruncate = PQfnumber(res, \"pubtruncate\");\n \ti_pubviaroot = PQfnumber(res, \"pubviaroot\");\n+\ti_pubacl = PQfnumber(res, \"pubacl\");\n+\ti_acldefault = PQfnumber(res, \"acldefault\");\n \n \tpubinfo = pg_malloc(ntups * sizeof(PublicationInfo));\n \n@@ -4096,6 +4105,11 @@ getPublications(Archive *fout, int *numPublications)\n \t\t\t(strcmp(PQgetvalue(res, i, i_pubtruncate), \"t\") == 0);\n \t\tpubinfo[i].pubviaroot =\n \t\t\t(strcmp(PQgetvalue(res, i, i_pubviaroot), \"t\") == 0);\n+\t\tpubinfo[i].dacl.acl = pg_strdup(PQgetvalue(res, i, i_pubacl));\n+\t\tpubinfo[i].dacl.acldefault = pg_strdup(PQgetvalue(res, i, i_acldefault));\n+\t\tpubinfo[i].dacl.privtype = 0;\n+\t\tpubinfo[i].dacl.initprivs = NULL;\n+\t\tpubinfo[i].dobj.components |= DUMP_COMPONENT_ACL;\n \n \t\t/* Decide whether we want to dump it */\n 
\t\tselectDumpableObject(&(pubinfo[i].dobj), fout);\n@@ -4199,6 +4213,11 @@ dumpPublication(Archive *fout, const PublicationInfo *pubinfo)\n \t\t\t\t\t NULL, pubinfo->rolname,\n \t\t\t\t\t pubinfo->dobj.catId, 0, pubinfo->dobj.dumpId);\n \n+\tif (pubinfo->dobj.dump & DUMP_COMPONENT_ACL)\n+\t\tdumpACL(fout, pubinfo->dobj.dumpId, InvalidDumpId, \"PUBLICATION\",\n+\t\t\t\tpg_strdup(fmtId(pubinfo->dobj.name)), NULL, NULL,\n+\t\t\t\tpubinfo->rolname, &pubinfo->dacl);\n+\n \tdestroyPQExpBuffer(delq);\n \tdestroyPQExpBuffer(query);\n \tfree(qpubname);\ndiff --git a/src/bin/pg_dump/pg_dump.h b/src/bin/pg_dump/pg_dump.h\nindex cdca0b993d..36e9a00cbf 100644\n--- a/src/bin/pg_dump/pg_dump.h\n+++ b/src/bin/pg_dump/pg_dump.h\n@@ -612,6 +612,7 @@ typedef struct _policyInfo\n typedef struct _PublicationInfo\n {\n \tDumpableObject dobj;\n+\tDumpableAcl dacl;\n \tconst char *rolname;\n \tbool\t\tpuballtables;\n \tbool\t\tpubinsert;\ndiff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\nindex 99e28f607e..8746a6ed73 100644\n--- a/src/bin/psql/describe.c\n+++ b/src/bin/psql/describe.c\n@@ -6308,6 +6308,7 @@ describePublications(const char *pattern)\n \tPGresult *res;\n \tbool\t\thas_pubtruncate;\n \tbool\t\thas_pubviaroot;\n+\tbool\t\thas_pubacl;\n \n \tPQExpBufferData title;\n \tprintTableContent cont;\n@@ -6324,6 +6325,7 @@ describePublications(const char *pattern)\n \n \thas_pubtruncate = (pset.sversion >= 110000);\n \thas_pubviaroot = (pset.sversion >= 130000);\n+\thas_pubacl = (pset.sversion >= 160000);\n \n \tinitPQExpBuffer(&buf);\n \n@@ -6337,6 +6339,9 @@ describePublications(const char *pattern)\n \tif (has_pubviaroot)\n \t\tappendPQExpBufferStr(&buf,\n \t\t\t\t\t\t\t \", pubviaroot\");\n+\tif (has_pubacl)\n+\t\tappendPQExpBufferStr(&buf,\n+\t\t\t\t\t\t\t\", pubacl\");\n \tappendPQExpBufferStr(&buf,\n \t\t\t\t\t\t \"\\nFROM pg_catalog.pg_publication\\n\");\n \n@@ -6388,6 +6393,8 @@ describePublications(const char *pattern)\n \t\t\tncols++;\n \t\tif 
(has_pubviaroot)\n \t\t\tncols++;\n+\t\tif (has_pubacl)\n+\t\t\tncols++;\n \n \t\tinitPQExpBuffer(&title);\n \t\tprintfPQExpBuffer(&title, _(\"Publication %s\"), pubname);\n@@ -6402,6 +6409,8 @@ describePublications(const char *pattern)\n \t\t\tprintTableAddHeader(&cont, gettext_noop(\"Truncates\"), true, align);\n \t\tif (has_pubviaroot)\n \t\t\tprintTableAddHeader(&cont, gettext_noop(\"Via root\"), true, align);\n+\t\tif (has_pubacl)\n+\t\t\tprintTableAddHeader(&cont, gettext_noop(\"Access privileges\"), true, align);\n \n \t\tprintTableAddCell(&cont, PQgetvalue(res, i, 2), false, false);\n \t\tprintTableAddCell(&cont, PQgetvalue(res, i, 3), false, false);\n@@ -6412,6 +6421,8 @@ describePublications(const char *pattern)\n \t\t\tprintTableAddCell(&cont, PQgetvalue(res, i, 7), false, false);\n \t\tif (has_pubviaroot)\n \t\t\tprintTableAddCell(&cont, PQgetvalue(res, i, 8), false, false);\n+\t\tif (has_pubacl)\n+\t\t\tprintTableAddCell(&cont, PQgetvalue(res, i, 9), false, false);\n \n \t\tif (!puballtables)\n \t\t{\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\nindex 42e87b9e49..768db694d8 100644\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -3940,6 +3940,7 @@ psql_completion(const char *text, int start, int end)\n \t\t\t\t\t\t\t\t\t\t\t\"LARGE OBJECT\",\n \t\t\t\t\t\t\t\t\t\t\t\"PARAMETER\",\n \t\t\t\t\t\t\t\t\t\t\t\"PROCEDURE\",\n+\t\t\t\t\t\t\t\t\t\t\t\"PUBLICATION\",\n \t\t\t\t\t\t\t\t\t\t\t\"ROUTINE\",\n \t\t\t\t\t\t\t\t\t\t\t\"SCHEMA\",\n \t\t\t\t\t\t\t\t\t\t\t\"SEQUENCE\",\n@@ -3977,6 +3978,8 @@ psql_completion(const char *text, int start, int end)\n \t\t\tCOMPLETE_WITH_QUERY(Query_for_list_of_languages);\n \t\telse if (TailMatches(\"PROCEDURE\"))\n \t\t\tCOMPLETE_WITH_VERSIONED_SCHEMA_QUERY(Query_for_list_of_procedures);\n+\t\telse if (TailMatches(\"PUBLICATION\"))\n+\t\t\tCOMPLETE_WITH_VERSIONED_QUERY(Query_for_list_of_publications);\n \t\telse if (TailMatches(\"ROUTINE\"))\n 
			COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_routines);
 		else if (TailMatches("SCHEMA"))
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index fbc4aade49..17f358d419 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -7245,6 +7245,9 @@
 { oid => '2273', descr => 'current user privilege on schema by schema oid',
 proname => 'has_schema_privilege', provolatile => 's', prorettype => 'bool',
 proargtypes => 'oid text', prosrc => 'has_schema_privilege_id' },
+{ oid => '9800', descr => 'current user privilege on publication by publication oid',
+ proname => 'has_publication_privilege', provolatile => 's', prorettype => 'bool',
+ proargtypes => 'oid text', prosrc => 'has_publication_privilege_id' },
 
 { oid => '2390',
 descr => 'user privilege on tablespace by username, tablespace name',
diff --git a/src/include/catalog/pg_publication.h b/src/include/catalog/pg_publication.h
index dab5bc8444..87da458bdb 100644
--- a/src/include/catalog/pg_publication.h
+++ b/src/include/catalog/pg_publication.h
@@ -54,6 +54,12 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 
 	/* true if partition changes are published using root schema */
 	bool		pubviaroot;
+
+#ifdef CATALOG_VARLEN			/* variable-length fields start here */
+	/* NOTE: These fields are not present in a relcache entry's rd_rel field. */
+	/* access permissions */
+	aclitem		pubacl[1] BKI_DEFAULT(_null_);
+#endif
 } FormData_pg_publication;
 
 /* ----------------
@@ -63,6 +69,8 @@ CATALOG(pg_publication,6104,PublicationRelationId)
 */
 typedef FormData_pg_publication *Form_pg_publication;
 
+DECLARE_TOAST(pg_publication, 9801, 9802);
+
 DECLARE_UNIQUE_INDEX_PKEY(pg_publication_oid_index, 6110, PublicationObjectIndexId, on pg_publication using btree(oid oid_ops));
 DECLARE_UNIQUE_INDEX(pg_publication_pubname_index, 6111, PublicationNameIndexId, on pg_publication using btree(pubname name_ops));
 
@@ -136,6 +144,8 @@ typedef enum PublicationPartOpt
 	PUBLICATION_PART_ALL,
 } PublicationPartOpt;
 
+extern PGDLLIMPORT bool publication_security;
+
 extern List *GetPublicationRelations(Oid pubid, PublicationPartOpt pub_partopt);
 extern List *GetAllTablesPublications(void);
 extern List *GetAllTablesPublicationRelations(bool pubviaroot);
diff --git a/src/include/commands/copy.h b/src/include/commands/copy.h
index 774b835251..9953091370 100644
--- a/src/include/commands/copy.h
+++ b/src/include/commands/copy.h
@@ -75,6 +75,8 @@ extern void DoCopy(ParseState *pstate, const CopyStmt *stmt,
 				 uint64 *processed);
 
 extern void ProcessCopyOptions(ParseState *pstate, CopyFormatOptions *opts_out, bool is_from, List *options);
+extern List *ProcessCopyToPublicationOptions(ParseState *pstate,
+											 List *options, bool is_from);
 extern CopyFromState BeginCopyFrom(ParseState *pstate, Relation rel, Node *whereClause,
 								 const char *filename,
 								 bool is_program, copy_data_source_cb data_source_cb, List *attnamelist, List *options);
@@ -96,7 +98,8 @@ extern RawStmt *CreateCopyToQuery(const CopyStmt *stmt, Relation rel,
 								 int stmt_location, int stmt_len);
 extern CopyToState BeginCopyTo(ParseState *pstate, Relation rel, RawStmt *raw_query,
 							 Oid queryRelId, const char *filename, bool is_program,
-							 copy_data_dest_cb data_dest_cb, List *attnamelist, List *options);
+							 copy_data_dest_cb data_dest_cb, List *attnamelist, List *options,
+							 List *publication_names);
 extern void EndCopyTo(CopyToState cstate);
 extern uint64 DoCopyTo(CopyToState cstate);
 extern List *CopyGetAttnums(TupleDesc tupDesc, Relation rel,
diff --git a/src/include/replication/logicalproto.h b/src/include/replication/logicalproto.h
index 0ea2df5088..6d9b6fa250 100644
--- a/src/include/replication/logicalproto.h
+++ b/src/include/replication/logicalproto.h
@@ -113,6 +113,7 @@ typedef struct LogicalRepRelation
 	char		replident;		/* replica identity */
 	char		relkind;		/* remote relation kind */
 	Bitmapset *attkeys;		/* Bitmap of key columns */
+	char	 *pubnames;		/* publication names (comma-separated list) */
 } LogicalRepRelation;
 
 /* Type mapping info */
diff --git a/src/include/utils/acl.h b/src/include/utils/acl.h
index f8e1238fa2..eb4e5044e8 100644
--- a/src/include/utils/acl.h
+++ b/src/include/utils/acl.h
@@ -169,6 +169,7 @@ typedef struct ArrayType Acl;
 #define ACL_ALL_RIGHTS_SCHEMA		(ACL_USAGE|ACL_CREATE)
 #define ACL_ALL_RIGHTS_TABLESPACE	(ACL_CREATE)
 #define ACL_ALL_RIGHTS_TYPE			(ACL_USAGE)
+#define ACL_ALL_RIGHTS_PUBLICATION	(ACL_USAGE)
 
 /* operation codes for pg_*_aclmask */
 typedef enum
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index d5a0880678..87ddeecc6e 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -75,6 +75,7 @@ enum config_group
 	REPLICATION_SENDING,
 	REPLICATION_PRIMARY,
 	REPLICATION_STANDBY,
+	REPLICATION_PUBLISHERS,
 	REPLICATION_SUBSCRIBERS,
 	QUERY_TUNING_METHOD,
 	QUERY_TUNING_COST,
diff --git a/src/test/modules/test_copy_callbacks/test_copy_callbacks.c b/src/test/modules/test_copy_callbacks/test_copy_callbacks.c
index e65771067e..9178e102bb 100644
--- a/src/test/modules/test_copy_callbacks/test_copy_callbacks.c
+++ b/src/test/modules/test_copy_callbacks/test_copy_callbacks.c
@@ -38,7 +38,7 @@ test_copy_to_callback(PG_FUNCTION_ARGS)
 	int64		processed;
 
 	cstate = BeginCopyTo(NULL, rel, NULL, RelationGetRelid(rel), NULL, false,
-						 to_cb, NIL, NIL);
+						 to_cb, NIL, NIL, NIL);
 	processed = DoCopyTo(cstate);
 	EndCopyTo(cstate);
 
diff --git a/src/test/regress/expected/copy.out b/src/test/regress/expected/copy.out
index 8a8bf43fde..b6011bea0f 100644
--- a/src/test/regress/expected/copy.out
+++ b/src/test/regress/expected/copy.out
@@ -240,3 +240,55 @@ SELECT * FROM header_copytest ORDER BY a;
 (5 rows)
 
 drop table header_copytest;
+-- Filtering by publication
+-- Suppress the warning about insufficient wal_level when creating
+-- publications.
+set client_min_messages to error;
+create role regress_copy_repl_user login replication;
+create table published_copytest (i int);
+insert into published_copytest(i) select x from generate_series(1, 10) g(x);
+create publication pub1 for table published_copytest where (i >= 7);
+set publication_security to on;
+-- Test both table name and query forms of the COPY command.
+set role regress_copy_repl_user;
+copy published_copytest to stdout (publication_names (pub1));
+7
+8
+9
+10
+copy (select i from published_copytest) to stdout (publication_names (pub1));
+7
+8
+9
+10
+reset role;
+-- Publish some more data.
+create publication pub2 for table published_copytest where (i <= 2);
+set role regress_copy_repl_user;
+copy published_copytest to stdout (publication_names (pub1, pub2));
+1
+2
+7
+8
+9
+10
+reset role;
+-- If any publication has no filter, the other filters are ignored.
+create publication pub3 for table published_copytest;
+set role regress_copy_repl_user;
+copy published_copytest to stdout (publication_names (pub1, pub2, pub3));
+1
+2
+3
+4
+5
+6
+7
+8
+9
+10
+reset role;
+reset publication_security;
+reset client_min_messages;
+drop role regress_copy_repl_user;
+drop publication pub1, pub2, pub3;
diff --git a/src/test/regress/expected/publication.out b/src/test/regress/expected/publication.out
index 427f87ea07..76a70c80d4 100644
--- a/src/test/regress/expected/publication.out
+++ b/src/test/regress/expected/publication.out
@@ -87,10 +87,10 @@ RESET client_min_messages;
 -- should be able to add schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable ADD TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl1"
 Tables from schemas:
@@ -99,20 +99,20 @@ Tables from schemas:
 -- should be able to drop schema from 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable DROP TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl1"
 
 -- should be able to set schema to 'FOR TABLE' publication
 ALTER PUBLICATION testpub_fortable SET TABLES IN SCHEMA pub_test;
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test"
 
@@ -123,10 +123,10 @@ CREATE PUBLICATION testpub_forschema FOR TABLES IN SCHEMA pub_test;
 CREATE PUBLICATION testpub_for_tbl_schema FOR TABLES IN SCHEMA pub_test, TABLE pub_test.testpub_nopk;
 RESET client_min_messages;
 \dRp+ testpub_for_tbl_schema
- Publication testpub_for_tbl_schema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_for_tbl_schema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 Tables from schemas:
@@ -135,10 +135,10 @@ Tables from schemas:
 -- should be able to add a table of the same schema to the schema publication
 ALTER PUBLICATION testpub_forschema ADD TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 Tables from schemas:
@@ -147,10 +147,10 @@ Tables from schemas:
 -- should be able to drop the table
 ALTER PUBLICATION testpub_forschema DROP TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test"
 
@@ -161,10 +161,10 @@ ERROR: relation "testpub_nopk" is not part of the publication
 -- should be able to set table to schema publication
 ALTER PUBLICATION testpub_forschema SET TABLE pub_test.testpub_nopk;
 \dRp+ testpub_forschema
- Publication testpub_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 
@@ -186,10 +186,10 @@ Publications:
 "testpub_foralltables"
 
 \dRp+ testpub_foralltables
- Publication testpub_foralltables
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | t | t | t | f | f | f
+ Publication testpub_foralltables
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | t | t | t | f | f | f | 
 (1 row)
 
 DROP TABLE testpub_tbl2;
@@ -201,19 +201,19 @@ CREATE PUBLICATION testpub3 FOR TABLE testpub_tbl3;
 CREATE PUBLICATION testpub4 FOR TABLE ONLY testpub_tbl3;
 RESET client_min_messages;
 \dRp+ testpub3
- Publication testpub3
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl3"
 "public.testpub_tbl3a"
 
 \dRp+ testpub4
- Publication testpub4
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl3"
 
@@ -234,10 +234,10 @@ UPDATE testpub_parted1 SET a = 1;
 -- only parent is listed as being in publication, not the partition
 ALTER PUBLICATION testpub_forparted ADD TABLE testpub_parted;
 \dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_forparted
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_parted"
 
@@ -252,10 +252,10 @@ ALTER TABLE testpub_parted DETACH PARTITION testpub_parted1;
 UPDATE testpub_parted1 SET a = 1;
 ALTER PUBLICATION testpub_forparted SET (publish_via_partition_root = true);
 \dRp+ testpub_forparted
- Publication testpub_forparted
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | t
+ Publication testpub_forparted
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | t | 
 Tables:
 "public.testpub_parted"
 
@@ -284,10 +284,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub5 FOR TABLE testpub_rf_tbl1, testpub_rf_tbl2 WHERE (c <> 'test' AND d < 5) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -300,10 +300,10 @@ Tables:
 
 ALTER PUBLICATION testpub5 ADD TABLE testpub_rf_tbl3 WHERE (e > 1000 AND e < 2000);
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl2" WHERE ((c <> 'test'::text) AND (d < 5))
@@ -319,10 +319,10 @@ Publications:
 
 ALTER PUBLICATION testpub5 DROP TABLE testpub_rf_tbl2;
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl3" WHERE ((e > 1000) AND (e < 2000))
@@ -330,10 +330,10 @@ Tables:
 -- remove testpub_rf_tbl1 and add testpub_rf_tbl3 again (another WHERE expression)
 ALTER PUBLICATION testpub5 SET TABLE testpub_rf_tbl3 WHERE (e > 300 AND e < 500);
 \dRp+ testpub5
- Publication testpub5
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub5
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl3" WHERE ((e > 300) AND (e < 500))
 
@@ -366,10 +366,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax1 FOR TABLE testpub_rf_tbl1, ONLY testpub_rf_tbl3 WHERE (e < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax1
- Publication testpub_syntax1
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax1
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "public.testpub_rf_tbl3" WHERE (e < 999)
@@ -379,10 +379,10 @@ SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub_syntax2 FOR TABLE testpub_rf_tbl1, testpub_rf_schema1.testpub_rf_tbl5 WHERE (h < 999) WITH (publish = 'insert');
 RESET client_min_messages;
 \dRp+ testpub_syntax2
- Publication testpub_syntax2
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | f | f
+ Publication testpub_syntax2
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | f | f | 
 Tables:
 "public.testpub_rf_tbl1"
 "testpub_rf_schema1.testpub_rf_tbl5" WHERE (h < 999)
@@ -497,10 +497,10 @@ CREATE PUBLICATION testpub6 FOR TABLES IN SCHEMA testpub_rf_schema2;
 ALTER PUBLICATION testpub6 SET TABLES IN SCHEMA testpub_rf_schema2, TABLE testpub_rf_schema2.testpub_rf_tbl6 WHERE (i < 99);
 RESET client_min_messages;
 \dRp+ testpub6
- Publication testpub6
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "testpub_rf_schema2.testpub_rf_tbl6" WHERE (i < 99)
 Tables from schemas:
@@ -714,10 +714,10 @@ CREATE PUBLICATION testpub_table_ins WITH (publish = 'insert, truncate');
 RESET client_min_messages;
 ALTER PUBLICATION testpub_table_ins ADD TABLE testpub_tbl5 (a);		-- ok
 \dRp+ testpub_table_ins
- Publication testpub_table_ins
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | f | f | t | f
+ Publication testpub_table_ins
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | f | f | t | f | 
 Tables:
 "public.testpub_tbl5" (a)
 
@@ -891,10 +891,10 @@ CREATE TABLE testpub_tbl_both_filters (a int, b int, c int, PRIMARY KEY (a,c));
 ALTER TABLE testpub_tbl_both_filters REPLICA IDENTITY USING INDEX testpub_tbl_both_filters_pkey;
 ALTER PUBLICATION testpub_both_filters ADD TABLE testpub_tbl_both_filters (a,c) WHERE (c != 1);
 \dRp+ testpub_both_filters
- Publication testpub_both_filters
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_both_filters
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "public.testpub_tbl_both_filters" (a, c) WHERE (c <> 1)
 
@@ -1099,10 +1099,10 @@ ERROR: relation "testpub_tbl1" is already member of publication "testpub_fortbl
 CREATE PUBLICATION testpub_fortbl FOR TABLE testpub_tbl1;
 ERROR: publication "testpub_fortbl" already exists
 \dRp+ testpub_fortbl
- Publication testpub_fortbl
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortbl
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "pub_test.testpub_nopk"
 "public.testpub_tbl1"
@@ -1140,10 +1140,10 @@ Publications:
 "testpub_fortbl"
 
 \dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | f | f | 
 Tables:
 "pub_test.testpub_nopk"
 "public.testpub_tbl1"
@@ -1214,17 +1214,57 @@ ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail
 ERROR: permission denied to change owner of publication "testpub4"
 HINT: The owner of a FOR TABLES IN SCHEMA publication must be a superuser.
 ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok
+-- Test the USAGE privilege.
+SET ROLE regress_publication_user;
+CREATE ROLE regress_publication_user4;
+-- First, check that USAGE is granted to PUBLIC by default.
+SET ROLE regress_publication_user4;
+SELECT has_publication_privilege(p.oid, 'usage')
+FROM pg_catalog.pg_publication p
+WHERE p.pubname='testpub4';
+ has_publication_privilege 
+---------------------------
+ t
+(1 row)
+
+-- Revoke the USAGE privilege from PUBLIC.
+SET ROLE regress_publication_user;
+REVOKE USAGE ON PUBLICATION testpub4 FROM public;
+-- regress_publication_user4 does not have the privilege now.
+SET ROLE regress_publication_user4;
+SELECT has_publication_privilege(p.oid, 'usage')
+FROM pg_catalog.pg_publication p
+WHERE p.pubname='testpub4';
+ has_publication_privilege 
+---------------------------
+ f
+(1 row)
+
+-- Grant USAGE to regress_publication_user4 explicitly.
+SET ROLE regress_publication_user;
+GRANT USAGE ON PUBLICATION testpub4 TO regress_publication_user4;
+-- regress_publication_user4 does have the privilege now.
+SET ROLE regress_publication_user4;
+SELECT has_publication_privilege(p.oid, 'usage')
+FROM pg_catalog.pg_publication p
+WHERE p.pubname='testpub4';
+ has_publication_privilege 
+---------------------------
+ t
+(1 row)
+
 SET ROLE regress_publication_user;
 DROP PUBLICATION testpub4;
 DROP ROLE regress_publication_user3;
+DROP ROLE regress_publication_user4;
 REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;
 DROP TABLE testpub_parted;
 DROP TABLE testpub_tbl1;
 \dRp+ testpub_default
- Publication testpub_default
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | f | f
+ Publication testpub_default
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | f | f | 
 (1 row)
 
 -- fail - must be owner of publication
@@ -1263,19 +1303,19 @@ CREATE TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA"(id int);
 SET client_min_messages = 'ERROR';
 CREATE PUBLICATION testpub1_forschema FOR TABLES IN SCHEMA pub_test1;
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 
 CREATE PUBLICATION testpub2_forschema FOR TABLES IN SCHEMA pub_test1, pub_test2, pub_test3;
 \dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 "pub_test2"
@@ -1289,44 +1329,44 @@ CREATE PUBLICATION testpub6_forschema FOR TABLES IN SCHEMA "CURRENT_SCHEMA", CUR
 CREATE PUBLICATION testpub_fortable FOR TABLE "CURRENT_SCHEMA"."CURRENT_SCHEMA";
 RESET client_min_messages;
 \dRp+ testpub3_forschema
- Publication testpub3_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub3_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "public"
 
 \dRp+ testpub4_forschema
- Publication testpub4_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub4_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "CURRENT_SCHEMA"
 
 \dRp+ testpub5_forschema
- Publication testpub5_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub5_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "CURRENT_SCHEMA"
 "public"
 
 \dRp+ testpub6_forschema
- Publication testpub6_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub6_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "CURRENT_SCHEMA"
 "public"
 
 \dRp+ testpub_fortable
- Publication testpub_fortable
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub_fortable
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables:
 "CURRENT_SCHEMA.CURRENT_SCHEMA"
 
@@ -1360,10 +1400,10 @@ ERROR: schema "testpub_view" does not exist
 -- dropping the schema should reflect the change in publication
 DROP SCHEMA pub_test3;
 \dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 "pub_test2"
@@ -1371,20 +1411,20 @@ Tables from schemas:
 -- renaming the schema should reflect the change in publication
 ALTER SCHEMA pub_test1 RENAME to pub_test1_renamed;
 \dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1_renamed"
 "pub_test2"
 
 ALTER SCHEMA pub_test1_renamed RENAME to pub_test1;
 \dRp+ testpub2_forschema
- Publication testpub2_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub2_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 "pub_test2"
@@ -1392,10 +1432,10 @@ Tables from schemas:
 -- alter publication add schema
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 "pub_test2"
@@ -1404,10 +1444,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA non_existent_schema;
 ERROR: schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 "pub_test2"
@@ -1416,10 +1456,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema ADD TABLES IN SCHEMA pub_test1;
 ERROR: schema "pub_test1" is already member of publication "testpub1_forschema"
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 "pub_test2"
@@ -1427,10 +1467,10 @@ Tables from schemas:
 -- alter publication drop schema
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 
@@ -1438,10 +1478,10 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test2;
 ERROR: tables from schema "pub_test2" are not part of the publication
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------
+ regress_publication_user | f | t | t | t | t | f | 
 Tables from schemas:
 "pub_test1"
 
@@ -1449,29 +1489,29 @@ Tables from schemas:
 ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA non_existent_schema;
 ERROR: schema "non_existent_schema" does not exist
 \dRp+ testpub1_forschema
- Publication testpub1_forschema
- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
---------------------------+------------+---------+---------+---------+-----------+----------
- regress_publication_user | f | t | t | t | t | f
+ Publication testpub1_forschema
+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges 
\n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n -- drop all schemas\n ALTER PUBLICATION testpub1_forschema DROP TABLES IN SCHEMA pub_test1;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n (1 row)\n \n -- alter publication set multiple schema\n ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test2;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1480,10 +1520,10 @@ Tables from schemas:\n ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA non_existent_schema;\n ERROR: schema \"non_existent_schema\" does not exist\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root 
\n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \"pub_test2\"\n@@ -1492,10 +1532,10 @@ Tables from schemas:\n -- removing the duplicate schemas\n ALTER PUBLICATION testpub1_forschema SET TABLES IN SCHEMA pub_test1, pub_test1;\n \\dRp+ testpub1_forschema\n- Publication testpub1_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub1_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n@@ -1574,18 +1614,18 @@ SET client_min_messages = 'ERROR';\n CREATE PUBLICATION testpub3_forschema;\n RESET client_min_messages;\n \\dRp+ testpub3_forschema\n- Publication testpub3_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub3_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | 
t | f | \n (1 row)\n \n ALTER PUBLICATION testpub3_forschema SET TABLES IN SCHEMA pub_test1;\n \\dRp+ testpub3_forschema\n- Publication testpub3_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub3_forschema\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables from schemas:\n \"pub_test1\"\n \n@@ -1595,20 +1635,20 @@ CREATE PUBLICATION testpub_forschema_fortable FOR TABLES IN SCHEMA pub_test1, TA\n CREATE PUBLICATION testpub_fortable_forschema FOR TABLE pub_test2.tbl1, TABLES IN SCHEMA pub_test1;\n RESET client_min_messages;\n \\dRp+ testpub_forschema_fortable\n- Publication testpub_forschema_fortable\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub_forschema_fortable\n+ Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables:\n \"pub_test2.tbl1\"\n Tables from schemas:\n \"pub_test1\"\n \n \\dRp+ testpub_fortable_forschema\n- Publication testpub_fortable_forschema\n- Owner | All tables | Inserts | Updates | Deletes | Truncates | Via root \n---------------------------+------------+---------+---------+---------+-----------+----------\n- regress_publication_user | f | t | t | t | t | f\n+ Publication testpub_fortable_forschema\n+ Owner | All 
tables | Inserts | Updates | Deletes | Truncates | Via root | Access privileges \n+--------------------------+------------+---------+---------+---------+-----------+----------+-------------------\n+ regress_publication_user | f | t | t | t | t | f | \n Tables:\n \"pub_test2.tbl1\"\n Tables from schemas:\ndiff --git a/src/test/regress/sql/copy.sql b/src/test/regress/sql/copy.sql\nindex f9da7b1508..4174823cff 100644\n--- a/src/test/regress/sql/copy.sql\n+++ b/src/test/regress/sql/copy.sql\n@@ -268,3 +268,39 @@ a\tc\tb\n \n SELECT * FROM header_copytest ORDER BY a;\n drop table header_copytest;\n+\n+-- Filtering by publication\n+\n+-- Suppress the warning about insufficient wal_level when creating\n+-- publications.\n+set client_min_messages to error;\n+\n+create role regress_copy_repl_user login replication;\n+create table published_copytest (i int);\n+insert into published_copytest(i) select x from generate_series(1, 10) g(x);\n+create publication pub1 for table published_copytest where (i >= 7);\n+\n+set publication_security to on;\n+\n+-- Test both table name and query forms of the COPY command.\n+set role regress_copy_repl_user;\n+copy published_copytest to stdout (publication_names (pub1));\n+copy (select i from published_copytest) to stdout (publication_names (pub1));\n+reset role;\n+\n+-- Publish some more data.\n+create publication pub2 for table published_copytest where (i <= 2);\n+set role regress_copy_repl_user;\n+copy published_copytest to stdout (publication_names (pub1, pub2));\n+reset role;\n+\n+-- If any publication has no filter, the other filters are ignored.\n+create publication pub3 for table published_copytest;\n+set role regress_copy_repl_user;\n+copy published_copytest to stdout (publication_names (pub1, pub2, pub3));\n+reset role;\n+\n+reset publication_security;\n+reset client_min_messages;\n+drop role regress_copy_repl_user;\n+drop publication pub1, pub2, pub3;\ndiff --git a/src/test/regress/sql/publication.sql 
b/src/test/regress/sql/publication.sql\nindex a47c5939d5..303870a1e9 100644\n--- a/src/test/regress/sql/publication.sql\n+++ b/src/test/regress/sql/publication.sql\n@@ -808,9 +808,37 @@ SET ROLE regress_publication_user3;\n ALTER PUBLICATION testpub4 owner to regress_publication_user2; -- fail\n ALTER PUBLICATION testpub4 owner to regress_publication_user; -- ok\n \n+-- Test the USAGE privilege.\n+SET ROLE regress_publication_user;\n+CREATE ROLE regress_publication_user4;\n+-- First, check that USAGE is granted to PUBLIC by default.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+\n+-- Revoke the USAGE privilege from PUBLIC.\n+SET ROLE regress_publication_user;\n+REVOKE USAGE ON PUBLICATION testpub4 FROM public;\n+-- regress_publication_user4 does not have the privilege now.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+\n+-- Grant USAGE to regress_publication_user4 explicitly.\n+SET ROLE regress_publication_user;\n+GRANT USAGE ON PUBLICATION testpub4 TO regress_publication_user4;\n+-- regress_publication_user4 does have the privilege now.\n+SET ROLE regress_publication_user4;\n+SELECT has_publication_privilege(p.oid, 'usage')\n+FROM pg_catalog.pg_publication p\n+WHERE p.pubname='testpub4';\n+\n SET ROLE regress_publication_user;\n DROP PUBLICATION testpub4;\n DROP ROLE regress_publication_user3;\n+DROP ROLE regress_publication_user4;\n \n REVOKE CREATE ON DATABASE regression FROM regress_publication_user2;\n \ndiff --git a/src/test/subscription/t/027_nosuperuser.pl b/src/test/subscription/t/027_nosuperuser.pl\nindex 59192dbe2f..31e94514c1 100644\n--- a/src/test/subscription/t/027_nosuperuser.pl\n+++ b/src/test/subscription/t/027_nosuperuser.pl\n@@ -7,8 +7,10 @@ use warnings;\n use PostgreSQL::Test::Cluster;\n use Test::More;\n \n-my ($node_publisher, 
$node_subscriber, $publisher_connstr, $result, $offset);\n+my ($node_publisher, $node_subscriber, $publisher_connstr, $result, $offset,\n+\t$offset_pub);\n $offset = 0;\n+$offset_pub = 0;\n \n sub publish_insert\n {\n@@ -103,7 +105,8 @@ $node_publisher->init(allows_streaming => 'logical');\n $node_subscriber->init;\n $node_publisher->start;\n $node_subscriber->start;\n-$publisher_connstr = $node_publisher->connstr . ' dbname=postgres';\n+# Non-super user, so that we can test publication privileges.\n+$publisher_connstr = $node_publisher->connstr . ' dbname=postgres user=regress_alice';\n my %remainder_a = (\n \tpublisher => 0,\n \tsubscriber => 1);\n@@ -141,6 +144,8 @@ for my $node ($node_publisher, $node_subscriber)\n }\n $node_publisher->safe_psql(\n \t'postgres', qq(\n+ALTER ROLE regress_alice REPLICATION;\n+\n SET SESSION AUTHORIZATION regress_alice;\n \n CREATE PUBLICATION alice\n@@ -316,4 +321,53 @@ expect_replication(\"alice.unpartitioned\", 2, 23, 25,\n \t\"nosuperuser nobypassrls table owner can replicate delete into unpartitioned despite rls\"\n );\n \n+# Test publication permissions.\n+$node_publisher->append_conf(\n+\t'postgresql.conf',\n+\tqq[\n+publication_security = on\n+]);\n+$node_publisher->restart;\n+\n+# First, make sure that the user specified in the subscription is not able to\n+# access the data, then do some changes. (By deleting everything we make the\n+# following checks simpler.)\n+$node_publisher->safe_psql(\n+\t'postgres', qq(\n+REVOKE USAGE ON PUBLICATION alice FROM PUBLIC;\n+REVOKE USAGE ON PUBLICATION alice FROM regress_alice;\n+ALTER DATABASE postgres SET publication_security TO on;\n+\n+DELETE FROM alice.unpartitioned;\n+));\n+# Missing permission should cause error.\n+expect_failure(\"alice.unpartitioned\", 2, 23, 25,\n+\t\t\t qr/ERROR: ( [A-Z0-9]+:)? 
permission denied for publication alice/msi, 0);\n+# Check that the missing privilege makes table synchronization fail too.\n+$node_subscriber->safe_psql(\n+\t'postgres', qq(\n+SET SESSION AUTHORIZATION regress_admin;\n+DROP SUBSCRIPTION admin_sub;\n+TRUNCATE TABLE alice.unpartitioned;\n+CREATE SUBSCRIPTION admin_sub CONNECTION '$publisher_connstr' PUBLICATION alice;\n+));\n+# Note that expect_failure() does not wait for the end of the synchronization,\n+# so if there was any data on publisher side and if it found its way to the\n+# subscriber, the function might still see an empty table. So we only rely on\n+# the function to check the error message.\n+expect_failure(\"alice.unpartitioned\", 0, '', '',\n+\t\t\t qr/ERROR: ( [A-Z0-9]+:)? permission denied for publication alice/msi, 0);\n+# Restore the privilege on the publication.\n+$node_publisher->safe_psql(\n+\t'postgres', qq(\n+GRANT USAGE ON PUBLICATION alice TO regress_alice;\n+));\n+# Wait for synchronization to complete.\n+$node_subscriber->wait_for_subscription_sync;\n+# The replication should work again now.\n+publish_insert(\"alice.unpartitioned\", 1);\n+expect_replication(\"alice.unpartitioned\", 1, 1, 1,\n+ \"unpartitioned is replicated as soon as regress_alic has permissions on alice publication\"\n+);\n+\n done_testing();\n-- \n2.31.1",
"msg_date": "Wed, 15 Mar 2023 08:42:10 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Privileges on PUBLICATION"
},
{
"msg_contents": "On 14.03.23 19:30, Gregory Stark (as CFM) wrote:\n> FYI this looks like it needs a rebase due to a conflict in copy.c and\n> an offset in pgoutput.c.\n> \n> Is there anything specific that still needs review or do you think\n> you've handled all Peter's concerns? In particular, is there \"a\n> comprehensive description of what it is trying to do\"? :)\n\nThe latest versions of the patch have pretty much addressed my initial \ncomments. The patch is structured and explained better now. Most \nextraneous or incomplete changes have been addressed.\n\nThe problem now is that it's still a quite complicated patch that \nintroduces a security feature. It still touches a number of subsystems \non different levels of abstraction. This functionality is not of the \nkind, \"if you don't use it it won't affect you\", since it effectively \npokes holes into the existing privileges checking in order to allow \npublication privileges checking to override it in some cases. It will \ntake significant effort to do a complete analysis and testing on whether \nit is secure and robust. I don't think I will have time for that, and I \ndon't think anyone will want to commit something like this at the last \nmoment.\n\nWe have already taken a number of things from earlier patches and \ncommitted them separately as refactorings. I don't see anything in the \ncurrent patch anymore that we might want to take independently like that.\n\nSo in summary I think it would be best to keep this patch around for PG17.\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 07:17:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Privileges on PUBLICATION"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWhile researching PG15 features, I was trying to read through the \r\ndocs[1] for the \"parallel_commit\" (04e706d4) feature in postgres_fdw to \r\nbetter understand what it does. I found myself becoming lost with the \r\nreferences to (sub)transaction and a few other items that, while \r\naccurate, may be overly specific in this context.\r\n\r\nAttached is a patch to try to simplify the language for the description \r\nof the \"parallel_commit\" option. A few notes:\r\n\r\n* I stated that this feature applies to both transactions and \r\nsubtransactions.\r\n* I tried to condense some of the language around remote/local \r\ntransactions. If this makes the statement inaccurate, let's revise.\r\n* I removed the \"Be careful with this option\" and instead clarified an \r\nexplanation of the case that could cause performance impacts.\r\n\r\nThis feature seems like it will be impactful for distributed workloads \r\nusing \"postgres_fdw\" so I want to ensure that we both accurately and \r\nclearly capture what it can do.\r\n\r\nThanks!\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/docs/devel/postgres-fdw.html#id-1.11.7.47.11.7",
"msg_date": "Mon, 9 May 2022 11:37:35 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "postgres_fdw \"parallel_commit\" docs"
},
{
"msg_contents": "On Mon, May 09, 2022 at 11:37:35AM -0400, Jonathan S. Katz wrote:\n> @@ -473,27 +473,25 @@ OPTIONS (ADD password_required 'false');\n> <term><literal>parallel_commit</literal> (<type>boolean</type>)</term>\n> <listitem>\n> <para>\n> - This option controls whether <filename>postgres_fdw</filename> commits\n> - remote (sub)transactions opened on a foreign server in a local\n> - (sub)transaction in parallel when the local (sub)transaction commits.\n> - This option can only be specified for foreign servers, not per-table.\n> - The default is <literal>false</literal>.\n> + This option controls whether <filename>postgres_fdw</filename> commits in\n> + parallel remote transactions opened on a foreign server in a local\n> + transaction when the local transaction is committed. This setting\n> + applies to remote and local substransactions. This option can only be\n\ntypo: substransactions\n\n> - If multiple foreign servers with this option enabled are involved in\n> - a local (sub)transaction, multiple remote (sub)transactions opened on\n> - those foreign servers in the local (sub)transaction are committed in\n> - parallel across those foreign servers when the local (sub)transaction\n> - commits.\n> + If multiple foreign servers with this option enabled have a local\n> + transaction, multiple remote transactions on those foreign servers are\n> + committed in parallel across those foreign servers when the local\n> + transaction is committed.\n> </para>\n\nI think \"have a transaction\" doesn't sound good, and the old language \"involved\nin\" was better.\n\n\n",
"msg_date": "Mon, 9 May 2022 10:58:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw \"parallel_commit\" docs"
},
{
"msg_contents": "Hi Jonathan,\n\nOn Tue, May 10, 2022 at 12:37 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> While researching PG15 features, I was trying to read through the\n> docs[1] for the \"parallel_commit\" (04e706d4) feature in postgres_fdw to\n> better understand what it does. I found myself becoming lost with the\n> references to (sub)transaction and a few other items that, while\n> accurate, may be overly specific in this context.\n\nI have to admit that that is making the docs confusing.\n\n> Attached is a patch to try to simplify the language for the description\n> of the \"parallel_commit\" option. A few notes:\n\nThanks for the patch!\n\n> * I stated that this feature applies to both transactions and\n> subtransactions.\n> * I tried to condense some of the language around remote/local\n> transactions. If this makes the statement inaccurate, let's revise.\n\nOne thing I noticed is this bit:\n\n- When multiple remote (sub)transactions are involved in a local\n- (sub)transaction, by default <filename>postgres_fdw</filename> commits\n- those remote (sub)transactions one by one when the local (sub)transaction\n- commits.\n- Performance can be improved with the following option:\n+ When multiple remote transactions or subtransactions are involved in a\n+ local transaction (or subtransaction) on a foreign server,\n+ <filename>postgres_fdw</filename> by default commits those remote\n+ transactions serially when the local transaction commits.\nPerformance can be\n+ improved with the following option:\n\nI think this might still be a bit confusing. How about rewriting it\nto something like this?\n\nAs described in F.38.4. Transaction Management, in postgres_fdw\ntransactions are managed by creating corresponding remote\ntransactions, and subtransactions are managed by creating\ncorresponding remote subtransactions. 
When multiple remote\ntransactions are involved in the current local transaction,\npostgres_fdw by default commits those remote transactions serially\nwhen the local transaction is committed. When multiple remote\nsubtransactions are involved in the current local subtransaction, it\nby default commits those remote subtransactions serially when the\nlocal subtransaction is committed. Performance can be improved with\nthe following option:\n\nIt might be a bit redundant to explain the transaction/subtransaction\ncases differently, but I think it makes it clear and maybe\neasy-to-understand that how they are handled by postgres_fdw by\ndefault.\n\n> * I removed the \"Be careful with this option\" and instead clarified an\n> explanation of the case that could cause performance impacts.\n\nI like this change.\n\n> This feature seems like it will be impactful for distributed workloads\n> using \"postgres_fdw\" so I want to ensure that we both accurately and\n> clearly capture what it can do.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 11 May 2022 19:25:56 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw \"parallel_commit\" docs"
},
{
"msg_contents": "Hi Justin,\n\nOn Tue, May 10, 2022 at 12:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, May 09, 2022 at 11:37:35AM -0400, Jonathan S. Katz wrote:\n> > - If multiple foreign servers with this option enabled are involved in\n> > - a local (sub)transaction, multiple remote (sub)transactions opened on\n> > - those foreign servers in the local (sub)transaction are committed in\n> > - parallel across those foreign servers when the local (sub)transaction\n> > - commits.\n> > + If multiple foreign servers with this option enabled have a local\n> > + transaction, multiple remote transactions on those foreign servers are\n> > + committed in parallel across those foreign servers when the local\n> > + transaction is committed.\n> > </para>\n>\n> I think \"have a transaction\" doesn't sound good, and the old language \"involved\n> in\" was better.\n\nI think so too.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 11 May 2022 19:29:28 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw \"parallel_commit\" docs"
},
{
"msg_contents": "On Wed, May 11, 2022 at 7:25 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> One thing I noticed is this bit:\n>\n> - When multiple remote (sub)transactions are involved in a local\n> - (sub)transaction, by default <filename>postgres_fdw</filename> commits\n> - those remote (sub)transactions one by one when the local (sub)transaction\n> - commits.\n> - Performance can be improved with the following option:\n> + When multiple remote transactions or subtransactions are involved in a\n> + local transaction (or subtransaction) on a foreign server,\n> + <filename>postgres_fdw</filename> by default commits those remote\n> + transactions serially when the local transaction commits.\n> Performance can be\n> + improved with the following option:\n>\n> I think this might still be a bit confusing. How about rewriting it\n> to something like this?\n>\n> As described in F.38.4. Transaction Management, in postgres_fdw\n> transactions are managed by creating corresponding remote\n> transactions, and subtransactions are managed by creating\n> corresponding remote subtransactions. When multiple remote\n> transactions are involved in the current local transaction,\n> postgres_fdw by default commits those remote transactions serially\n> when the local transaction is committed. When multiple remote\n> subtransactions are involved in the current local subtransaction, it\n> by default commits those remote subtransactions serially when the\n> local subtransaction is committed. 
Performance can be improved with\n> the following option:\n>\n> It might be a bit redundant to explain the transaction/subtransaction\n> cases differently, but I think it makes it clear and maybe\n> easy-to-understand that how they are handled by postgres_fdw by\n> default.\n\nI modified the patch that way.\n\nOn Wed, May 11, 2022 at 7:29 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 10, 2022 at 12:58 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Mon, May 09, 2022 at 11:37:35AM -0400, Jonathan S. Katz wrote:\n> > > - If multiple foreign servers with this option enabled are involved in\n> > > - a local (sub)transaction, multiple remote (sub)transactions opened on\n> > > - those foreign servers in the local (sub)transaction are committed in\n> > > - parallel across those foreign servers when the local (sub)transaction\n> > > - commits.\n> > > + If multiple foreign servers with this option enabled have a local\n> > > + transaction, multiple remote transactions on those foreign servers are\n> > > + committed in parallel across those foreign servers when the local\n> > > + transaction is committed.\n> > > </para>\n> >\n> > I think \"have a transaction\" doesn't sound good, and the old language \"involved\n> > in\" was better.\n>\n> I think so too.\n\nI modified the patch to use the old language. Also, I fixed a typo\nreported by Justin.\n\nAttached is an updated patch. I'll commit the patch if no objections.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Thu, 12 May 2022 20:26:03 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw \"parallel_commit\" docs"
},
{
"msg_contents": "Hi Etsuro,\r\n\r\nOn 5/12/22 7:26 AM, Etsuro Fujita wrote:\r\n\r\n> I modified the patch to use the old language. Also, I fixed a typo\r\n> reported by Justin.\r\n> \r\n> Attached is an updated patch. I'll commit the patch if no objections.\r\n\r\nThanks for reviewing and revising! I think this is much easier to read.\r\n\r\nI made a few minor copy edits. Please see attached.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Thu, 12 May 2022 09:32:12 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw \"parallel_commit\" docs"
},
{
"msg_contents": "Hi Jonathan,\n\nOn Thu, May 12, 2022 at 10:32 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 5/12/22 7:26 AM, Etsuro Fujita wrote:\n> > Attached is an updated patch. I'll commit the patch if no objections.\n>\n> I think this is much easier to read.\n\nCool!\n\n> I made a few minor copy edits. Please see attached.\n\nLGTM, so I pushed the patch.\n\nThanks for the patch and taking the time to improve this!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 13 May 2022 18:41:31 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw \"parallel_commit\" docs"
}
] |
[
{
"msg_contents": "Example (you need up-to-the-minute HEAD for this particular test\ncase, but anything that runs a little while before failing will do):\n\nregression=# \\timing\nTiming is on.\nregression=# select * from generate_series('2022-01-01 00:00'::timestamptz,\n 'infinity'::timestamptz,\n '1 month'::interval) limit 10;\nERROR: timestamp out of range\nTime: 0.000 ms\n\nThat timing is wrong. It visibly takes more-or-less half a second\non my machine, and v14 psql reports that accurately:\n\nregression=# \\timing\nTiming is on.\nregression=# select * from generate_series('2022-01-01 00:00'::timestamptz,\n 'infinity'::timestamptz,\n '1 month'::interval) limit 10;\nERROR: timestamp out of range\nTime: 662.107 ms\n\nWhile I've not bisected, I think it's a dead cinch that 7844c9918\nis what broke this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 May 2022 11:56:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "psql now shows zero elapsed time after an error"
},
{
"msg_contents": "On Mon, May 9, 2022 at 11:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Example (you need up-to-the-minute HEAD for this particular test\n> case, but anything that runs a little while before failing will do):\n>\n> regression=# \\timing\n> Timing is on.\n> regression=# select * from generate_series('2022-01-01 00:00'::timestamptz,\n> 'infinity'::timestamptz,\n> '1 month'::interval) limit 10;\n> ERROR: timestamp out of range\n> Time: 0.000 ms\n>\n> That timing is wrong. It visibly takes more-or-less half a second\n> on my machine, and v14 psql reports that accurately:\n>\n> regression=# \\timing\n> Timing is on.\n> regression=# select * from generate_series('2022-01-01 00:00'::timestamptz,\n> 'infinity'::timestamptz,\n> '1 month'::interval) limit 10;\n> ERROR: timestamp out of range\n> Time: 662.107 ms\n>\n> While I've not bisected, I think it's a dead cinch that 7844c9918\n> is what broke this.\n>\n\nThat's true. It happens in ExecQueryAndProcessResults(), when we try to\nshow all query results. 
If some error occurred for a certain result, we\nfailed to check whether this is the last result and if so get timing\nmeasure before printing that result.\n\nMaybe something like below would do the fix.\n\n--- a/src/bin/psql/common.c\n+++ b/src/bin/psql/common.c\n@@ -1560,6 +1560,18 @@ ExecQueryAndProcessResults(const char *query, double\n*elapsed_msec, bool *svpt_g\n else\n result = PQgetResult(pset.db);\n\n+ last = (result == NULL);\n+\n+ /*\n+ * Get timing measure before printing the last\nresult.\n+ */\n+ if (last && timing)\n+ {\n+ INSTR_TIME_SET_CURRENT(after);\n+ INSTR_TIME_SUBTRACT(after, before);\n+ *elapsed_msec =\nINSTR_TIME_GET_MILLISEC(after);\n+ }\n+\n continue;\n }\n else if (svpt_gone_p && !*svpt_gone_p)\n\n\nThanks\nRichard",
"msg_date": "Tue, 10 May 2022 10:54:57 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql now shows zero elapsed time after an error"
},
{
"msg_contents": "Hello,\n\nThanks for the catch and the proposed fix! Indeed, on errors the timing is \nnot updated appropriately.\n\nISTM that the best course is to update the elapsed time whenever a result \nis obtained, so that a sensible value is always available.\n\nSee attached patch which is a variant of Richard's version.\n\n fabien=# SELECT 1 as one \\; SELECT 1/0 \\; SELECT 2 as two;\n ┌─────┐\n │ one │\n ├─────┤\n │ 1 │\n └─────┘\n (1 row)\n\n ERROR: division by zero\n Time: 0,352 ms\n\nProbably it would be appropriate to add a test case. I'll propose \nsomething later.\n\n-- \nFabien.",
"msg_date": "Tue, 10 May 2022 15:42:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: psql now shows zero elapsed time after an error"
},
{
"msg_contents": "On 10.05.22 15:42, Fabien COELHO wrote:\n> \n> Hello,\n> \n> Thanks for the catch and the proposed fix! Indeed, on errors the timing \n> is not updated appropriately.\n> \n> ISTM that the best course is to update the elapsed time whenever a \n> result is obtained, so that a sensible value is always available.\n> \n> See attached patch which is a variant of Richard's version.\n> \n> fabien=# SELECT 1 as one \\; SELECT 1/0 \\; SELECT 2 as two;\n> ┌─────┐\n> │ one │\n> ├─────┤\n> │ 1 │\n> └─────┘\n> (1 row)\n> \n> ERROR: division by zero\n> Time: 0,352 ms\n> \n> Probably it would be appropriate to add a test case. I'll propose \n> something later.\n\ncommitted with a test\n\n\n\n",
"msg_date": "Mon, 23 May 2022 10:13:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql now shows zero elapsed time after an error"
},
{
"msg_contents": "\n>> Probably it would be appropriate to add a test case. I'll propose something \n>> later.\n>\n> committed with a test\n\nThanks!\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 23 May 2022 10:49:22 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: psql now shows zero elapsed time after an error"
}
] |
[
{
"msg_contents": "Hi,\n\nDuring query tuning, users may want to check if GEQO is used or not\nto generate a plan. However, users can not know it by simply counting\nthe number of tables that appear in SQL. I know we can know it by\nenabling GEQO_DEBUG flag, but it needs recompiling, so I think it is\ninconvenient. \n\nSo, I would like to propose to add a debug level message that shows\nwhen PostgreSQL use GEQO. That enables users to easily see it by\njust changing log_min_messages.\n\nUse cases are as follows:\n- When investigating about the result of planning, user can determine\nwhether the plan is chosen by the standard planning or GEQO.\n\n- When tuning PostgreSQL, user can determine the suitable value of\ngeqo_threshold parameter.\n\nBest regards.\n\n-- \nKAWAMOTO Masaya <kawamoto@sraoss.co.jp>\nSRA OSS, Inc. Japan",
"msg_date": "Tue, 10 May 2022 10:05:48 +0900",
"msg_from": "KAWAMOTO Masaya <kawamoto@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Proposal: add a debug message about using geqo"
},
{
"msg_contents": "If we add that information to EXPLAIN output, the user won't need\naccess to server logs.\n\nMay be we need it in both the places.\n\nOn Tue, May 10, 2022 at 6:35 AM KAWAMOTO Masaya <kawamoto@sraoss.co.jp> wrote:\n>\n> Hi,\n>\n> During query tuning, users may want to check if GEQO is used or not\n> to generate a plan. However, users can not know it by simply counting\n> the number of tables that appear in SQL. I know we can know it by\n> enabling GEQO_DEBUG flag, but it needs recompiling, so I think it is\n> inconvenient.\n>\n> So, I would like to propose to add a debug level message that shows\n> when PostgreSQL use GEQO. That enables users to easily see it by\n> just changing log_min_messages.\n>\n> Use cases are as follows:\n> - When investigating about the result of planning, user can determine\n> whether the plan is chosen by the standard planning or GEQO.\n>\n> - When tuning PostgreSQL, user can determine the suitable value of\n> geqo_threshold parameter.\n>\n> Best regards.\n>\n> --\n> KAWAMOTO Masaya <kawamoto@sraoss.co.jp>\n> SRA OSS, Inc. Japan\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 10 May 2022 18:49:54 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: add a debug message about using geqo"
},
{
"msg_contents": "On Tue, 10 May 2022 18:49:54 +0530\nAshutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n\n> If we add that information to EXPLAIN output, the user won't need\n> access to server logs.\n> \n> May be we need it in both the places.\n\nThat sounds a nice idea. But I don't think that postgres shows in the\nEXPLAIN output why the plan is selected. Would it be appropriate to\nshow that GEQO is used in EXPLAIN output?\n\nAs a test, I created a patch that add information about GEQO to\nEXPLAIN output by the GEQO option. The output example is as follows.\nWhat do you think about the location and content of information about GEQO?\n\npostgres=# explain (geqo) select o.id, o.date, c.name as customer_name, bar.amount as total_amount\nfrom orders o join customer c on o.customer_id = c.id\njoin (select foo.id as id, sum(foo.amount) as amount\n from (select od.order_id as id, p.name as name, od.quantity as quantity, (p.price * od.quantity) as amount\n from order_detail od join product p on od.product_id = p.id\n ) as foo\n group by id) as bar on o.id = bar.id ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Hash Join (cost=118.75..155.04 rows=200 width=48)\n Hash Cond: (o.customer_id = c.id)\n -> Hash Join (cost=94.58..130.34 rows=200 width=20)\n Hash Cond: (o.id = bar.id)\n -> Seq Scan on orders o (cost=0.00..30.40 rows=2040 width=12)\n -> Hash (cost=92.08..92.08 rows=200 width=12)\n -> Subquery Scan on bar (cost=88.08..92.08 rows=200 width=12)\n -> HashAggregate (cost=88.08..90.08 rows=200 width=12)\n Group Key: od.order_id\n -> Hash Join (cost=37.00..72.78 rows=2040 width=12)\n Hash Cond: (od.product_id = p.id)\n -> Seq Scan on order_detail od (cost=0.00..30.40 rows=2040 width=12)\n -> Hash (cost=22.00..22.00 rows=1200 width=8)\n -> Seq Scan on product p (cost=0.00..22.00 rows=1200 width=8)\n -> Hash (cost=16.30..16.30 rows=630 width=36)\n -> Seq Scan on customer c (cost=0.00..16.30 
rows=630 width=36)\n GeqoDetails: GEQO: used, geqo_threshold: 3, Max join nodes: 3\n(17 rows)\n\npostgres=# set geqo_threshold to 16;\nSET\npostgres=# explain (geqo) select ... ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Hash Join (cost=118.75..155.04 rows=200 width=48)\n Hash Cond: (o.customer_id = c.id)\n -> Hash Join (cost=94.58..130.34 rows=200 width=20)\n Hash Cond: (o.id = bar.id)\n -> Seq Scan on orders o (cost=0.00..30.40 rows=2040 width=12)\n -> Hash (cost=92.08..92.08 rows=200 width=12)\n -> Subquery Scan on bar (cost=88.08..92.08 rows=200 width=12)\n -> HashAggregate (cost=88.08..90.08 rows=200 width=12)\n Group Key: od.order_id\n -> Hash Join (cost=37.00..72.78 rows=2040 width=12)\n Hash Cond: (od.product_id = p.id)\n -> Seq Scan on order_detail od (cost=0.00..30.40 rows=2040 width=12)\n -> Hash (cost=22.00..22.00 rows=1200 width=8)\n -> Seq Scan on product p (cost=0.00..22.00 rows=1200 width=8)\n -> Hash (cost=16.30..16.30 rows=630 width=36)\n -> Seq Scan on customer c (cost=0.00..16.30 rows=630 width=36)\n GeqoDetails: GEQO: not used, geqo_threshold: 16, Max join nodes: 3\n(17 rows)\n\npostgres=# explain (analyze, settings, geqo) select ...;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n--------------------------\n Hash Join (cost=118.75..155.04 rows=200 width=48) (actual time=0.104..0.113 rows=3 loops=1)\n Hash Cond: (o.customer_id = c.id)\n -> Hash Join (cost=94.58..130.34 rows=200 width=20) (actual time=0.042..0.048 rows=3 loops=1)\n Hash Cond: (o.id = bar.id)\n -> Seq Scan on orders o (cost=0.00..30.40 rows=2040 width=12) (actual time=0.003..0.005 rows=3 loops=1)\n -> Hash (cost=92.08..92.08 rows=200 width=12) (actual time=0.034..0.037 rows=3 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Subquery Scan on bar (cost=88.08..92.08 rows=200 width=12) (actual 
time=0.031..0.035 rows=3 loops=1)\n -> HashAggregate (cost=88.08..90.08 rows=200 width=12) (actual time=0.030..0.033 rows=3 loops=1)\n Group Key: od.order_id\n Batches: 1 Memory Usage: 56kB\n -> Hash Join (cost=37.00..72.78 rows=2040 width=12) (actual time=0.016..0.023 rows=7 loops=\n1)\n Hash Cond: (od.product_id = p.id)\n -> Seq Scan on order_detail od (cost=0.00..30.40 rows=2040 width=12) (actual time=0.0\n03..0.004 rows=7 loops=1)\n -> Hash (cost=22.00..22.00 rows=1200 width=8) (actual time=0.007..0.008 rows=4 loops=\n1)\n Buckets: 2048 Batches: 1 Memory Usage: 17kB\n -> Seq Scan on product p (cost=0.00..22.00 rows=1200 width=8) (actual time=0.00\n4..0.006 rows=4 loops=1)\n -> Hash (cost=16.30..16.30 rows=630 width=36) (actual time=0.019..0.020 rows=3 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on customer c (cost=0.00..16.30 rows=630 width=36) (actual time=0.014..0.016 rows=3 loops=1)\n Settings: geqo_threshold = '16'\n GeqoDetails: GEQO: not used, geqo_threshold: 16, Max join nodes: 3\n Planning Time: 0.516 ms\n Execution Time: 0.190 ms\n(24 rows)\n\n\n\n\n> On Tue, May 10, 2022 at 6:35 AM KAWAMOTO Masaya <kawamoto@sraoss.co.jp> wrote:\n> >\n> > Hi,\n> >\n> > During query tuning, users may want to check if GEQO is used or not\n> > to generate a plan. However, users can not know it by simply counting\n> > the number of tables that appear in SQL. I know we can know it by\n> > enabling GEQO_DEBUG flag, but it needs recompiling, so I think it is\n> > inconvenient.\n> >\n> > So, I would like to propose to add a debug level message that shows\n> > when PostgreSQL use GEQO. 
That enables users to easily see it by\n> > just changing log_min_messages.\n> >\n> > Use cases are as follows:\n> > - When investigating about the result of planning, user can determine\n> > whether the plan is chosen by the standard planning or GEQO.\n> >\n> > - When tuning PostgreSQL, user can determine the suitable value of\n> > geqo_threshold parameter.\n> >\n> > Best regards.\n> >\n> > --\n> > KAWAMOTO Masaya <kawamoto@sraoss.co.jp>\n> > SRA OSS, Inc. Japan\n> \n> \n> \n> -- \n> Best Wishes,\n> Ashutosh Bapat\n\n\n-- \nKAWAMOTO Masaya <kawamoto@sraoss.co.jp>\nSRA OSS, Inc. Japan",
"msg_date": "Thu, 2 Jun 2022 15:09:39 +0900",
"msg_from": "KAWAMOTO Masaya <kawamoto@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: add a debug message about using geqo"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 11:09 PM KAWAMOTO Masaya <kawamoto@sraoss.co.jp> wrote:\n> That sounds a nice idea. But I don't think that postgres shows in the\n> EXPLAIN output why the plan is selected. Would it be appropriate to\n> show that GEQO is used in EXPLAIN output?\n\nI'm reminded of Greenplum's \"Optimizer\" line in its EXPLAIN output\n[1], so from that perspective I think it's intuitive.\n\n> As a test, I created a patch that add information about GEQO to\n> EXPLAIN output by the GEQO option. The output example is as follows.\n> What do you think about the location and content of information about GEQO?\n\nI am a little surprised to see GeqoDetails being printed for a plan\nthat didn't use GEQO, but again that's probably because I'm used to\nGPDB's Optimizer output. And I don't have a lot of personal experience\nusing alternative optimizers.\n\nOne way to think about it might be, if we had ten alternatives, would\nwe want a line for each showing why it wasn't selected, or just one\nline showing the optimizer that was selected? The latter is more\ncompact but doesn't help you debug why something else wasn't chosen, I\nsuppose...\n\n--Jacob\n\n[1] https://docs.vmware.com/en/VMware-Tanzu-Greenplum/6/greenplum-database/GUID-ref_guide-sql_commands-EXPLAIN.html#examples-4\n\n\n",
"msg_date": "Fri, 22 Jul 2022 13:19:54 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: add a debug message about using geqo"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 1:20 PM Jacob Champion <jchampion@timescale.com>\nwrote:\n\n> On Wed, Jun 1, 2022 at 11:09 PM KAWAMOTO Masaya <kawamoto@sraoss.co.jp>\n> wrote:\n> > That sounds a nice idea. But I don't think that postgres shows in the\n> > EXPLAIN output why the plan is selected. Would it be appropriate to\n> > show that GEQO is used in EXPLAIN output?\n>\n> I'm reminded of Greenplum's \"Optimizer\" line in its EXPLAIN output\n> [1], so from that perspective I think it's intuitive.\n>\n> > As a test, I created a patch that add information about GEQO to\n> > EXPLAIN output by the GEQO option. The output example is as follows.\n> > What do you think about the location and content of information about\n> GEQO?\n\n\n> I am a little surprised to see GeqoDetails being printed for a plan\n> that didn't use GEQO, but again that's probably because I'm used to\n> GPDB's Optimizer output. And I don't have a lot of personal experience\n> using alternative optimizers.\n>\n\nI agree this should be part of explain output.\n\nI would not print the current value of geqo_threshold and leave setting\ndisplay the exclusive purview of the settings option.\n\nThe presentation of only a single geqo result seems incorrect given that\nmultiple trees can exist. 
In the first example below the full outer join\ncauses 3 relations to be seen as a single relation at the top level (hence\nmax join nodes = 4) while in the inner join case we see all 6 join nodes.\nThere should be two outputs of GEQO in the first explain, one with join\nnodes of 3 and the existing one with 4.\n\nI also don't see the point of labelling them \"max\"; \"join nodes\" seems\nsufficient.\n\nWhile it can probably be figured out from the rest of the plan, listing the\nnames of the join nodes may be useful (and give join nodes some company).\n\nDavid J.\n\npostgres=# explain (verbose, geqo) with gs2 (v2) as materialized ( select *\nfrom generate_series(1,1) ) select * from gs2 as gs4 full outer join\n(select gs2a.v2 from gs2 as gs2a, gs2 as gs2b) as gs5 using (v2),\ngenerate_series(1, 1) as gs (v1) cross join gs2 as gs3 where v1 IN (select\nv2 from gs2);\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.07..0.21 rows=1 width=12)\n Output: COALESCE(gs4.v2, gs2a.v2), gs.v1, gs3.v2\n CTE gs2\n -> Function Scan on pg_catalog.generate_series (cost=0.00..0.01\nrows=1 width=4)\n Output: generate_series.generate_series\n Function Call: generate_series(1, 1)\n -> Nested Loop (cost=0.06..0.16 rows=1 width=12)\n Output: gs.v1, gs4.v2, gs2a.v2\n -> Nested Loop (cost=0.02..0.06 rows=1 width=4)\n Output: gs.v1\n Join Filter: (gs.v1 = gs2.v2)\n -> Function Scan on pg_catalog.generate_series gs\n (cost=0.00..0.01 rows=1 width=4)\n Output: gs.v1\n Function Call: generate_series(1, 1)\n -> HashAggregate (cost=0.02..0.03 rows=1 width=4)\n Output: gs2.v2\n Group Key: gs2.v2\n -> CTE Scan on gs2 (cost=0.00..0.02 rows=1 width=4)\n Output: gs2.v2\n -> Hash Full Join (cost=0.03..0.10 rows=1 width=8)\n Output: gs4.v2, gs2a.v2\n Hash Cond: (gs2a.v2 = gs4.v2)\n -> Nested Loop (cost=0.00..0.05 rows=1 width=4)\n Output: gs2a.v2\n -> CTE Scan on gs2 gs2b (cost=0.00..0.02 rows=1\nwidth=0)\n Output: 
gs2b.v2\n -> CTE Scan on gs2 gs2a (cost=0.00..0.02 rows=1\nwidth=4)\n Output: gs2a.v2\n -> Hash (cost=0.02..0.02 rows=1 width=4)\n Output: gs4.v2\n -> CTE Scan on gs2 gs4 (cost=0.00..0.02 rows=1\nwidth=4)\n Output: gs4.v2\n -> CTE Scan on gs2 gs3 (cost=0.00..0.02 rows=1 width=4)\n Output: gs3.v2\n GeqoDetails: GEQO: used, geqo_threshold: 2, Max join nodes: 4\n(35 rows)\n\npostgres=# explain (verbose, geqo) with gs2 (v2) as materialized ( select *\nfrom generate_series(1,1) ) select * from gs2 as gs4 join (select gs2a.v2\nfrom gs2 as gs2a, gs2 as gs2b) as gs5 using (v2), generate_series(1, 1) as\ngs (v1) cross join gs2 as gs3 where v1 IN (select v2 from gs2);\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.02..0.18 rows=1 width=12)\n Output: gs4.v2, gs.v1, gs3.v2\n CTE gs2\n -> Function Scan on pg_catalog.generate_series (cost=0.00..0.01\nrows=1 width=4)\n Output: generate_series.generate_series\n Function Call: generate_series(1, 1)\n -> Nested Loop (cost=0.00..0.14 rows=1 width=12)\n Output: gs.v1, gs4.v2, gs3.v2\n -> Nested Loop (cost=0.00..0.11 rows=1 width=8)\n Output: gs.v1, gs4.v2\n -> Nested Loop Semi Join (cost=0.00..0.04 rows=1 width=4)\n Output: gs.v1\n Join Filter: (gs.v1 = gs2.v2)\n -> Function Scan on pg_catalog.generate_series gs\n (cost=0.00..0.01 rows=1 width=4)\n Output: gs.v1\n Function Call: generate_series(1, 1)\n -> CTE Scan on gs2 (cost=0.00..0.02 rows=1 width=4)\n Output: gs2.v2\n -> Nested Loop (cost=0.00..0.05 rows=1 width=4)\n Output: gs4.v2\n Join Filter: (gs2a.v2 = gs4.v2)\n -> CTE Scan on gs2 gs2a (cost=0.00..0.02 rows=1\nwidth=4)\n Output: gs2a.v2\n -> CTE Scan on gs2 gs4 (cost=0.00..0.02 rows=1\nwidth=4)\n Output: gs4.v2\n -> CTE Scan on gs2 gs3 (cost=0.00..0.02 rows=1 width=4)\n Output: gs3.v2\n -> CTE Scan on gs2 gs2b (cost=0.00..0.02 rows=1 width=0)\n Output: gs2b.v2\n GeqoDetails: GEQO: used, geqo_threshold: 2, Max join nodes: 6\n(30 
rows)",
"msg_date": "Wed, 27 Jul 2022 16:43:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: add a debug message about using geqo"
}
] |
[
{
"msg_contents": "Hi,\n\nIn a couple tests I (IIRC others as well) had the problem that a config reload\nisn't actually synchronous. I.e. a sequence like\n\n$node_primary->reload;\n$node_primary->safe_psql('postgres',...)\n\nisn't actually guaranteed to observe the config as reloaded in the\nsafe_psql(). It *typically* will see the new config results, but if the system is\nbusy and/or slow, the sighup might not yet have been propagated by postmaster\nand/or not yet received by the relevant process.\n\nI don't really see a way to guarantee this with reasonable effort in the\nback-branches. In HEAD we could (with some difficulties around postmaster and\nUI) use a global barrier to wait for the reload to complete. For the\nbackbranches I guess we could hack something using retries and setting a\npseudo-guc to check whether the reload has been processed - but that's not\nbulletproof at all, some process(es) could take longer to receive the signal.\n\nAnybody got a better idea?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 May 2022 18:26:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "waiting for reload in tests"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> In a couple tests I (IIRC others as well) had the problem that a config reload\n> isn't actually synchronous. I.e. a sequence like\n\n> $node_primary->reload;\n> $node_primary->safe_psql('postgres',...)\n\n> isn't actually guaranteed to observe the config as reloaded in the the\n> safe_psql().\n\nBrute force way: s/reload/restart/\n\nLess brute force: wait for \"SHOW variable-you-changed\" to report the\nvalue you expect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 May 2022 21:29:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: waiting for reload in tests"
},
{
"msg_contents": "On Mon, May 09, 2022 at 09:29:32PM -0400, Tom Lane wrote:\n> Brute force way: s/reload/restart/\n\nThat was my first thought, as it can be tricky to make sure that all\nthe processes got the update because we don't publish such a state.\n\nOne thing I was also thinking about would be to update\npg_stat_activity.state_change when a reload is processed on top of its\ncurrent updates, then wait for it to be effective in all the processes\nreported. The field remains NULL for most non-backend processes,\nwhich would be a compatibility change.\n\n> Less brute force: wait for \"SHOW variable-you-changed\" to report the\n> value you expect.\n\nThis method may still be unreliable in some processes like a logirep\nlauncher/receiver or just autovacuum, no?\n--\nMichael",
"msg_date": "Tue, 10 May 2022 10:37:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: waiting for reload in tests"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 09, 2022 at 09:29:32PM -0400, Tom Lane wrote:\n>> Less brute force: wait for \"SHOW variable-you-changed\" to report the\n>> value you expect.\n\n> This method may still be unreliable in some processes like a logirep\n> launcher/receiver or just autovacuum, no?\n\nYeah, if your test case requires knowing that some background process\nhas gotten the word, it's a *lot* harder. I think we'd have to add a\nlast-config-update-time column in pg_stat_activity or something like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 May 2022 21:42:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: waiting for reload in tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-09 21:42:20 -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, May 09, 2022 at 09:29:32PM -0400, Tom Lane wrote:\n> >> Less brute force: wait for \"SHOW variable-you-changed\" to report the\n> >> value you expect.\n> \n> > This method may still be unreliable in some processes like a logirep\n> > launcher/receiver or just autovacuum, no?\n\nYept, that's the problem. In my case it's the startup process...\n\n\n> Yeah, if your test case requires knowing that some background process\n> has gotten the word, it's a *lot* harder. I think we'd have to add a\n> last-config-update-time column in pg_stat_activity or something like that.\n\nThat's basically what I was referencing with global barriers...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 May 2022 18:52:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: waiting for reload in tests"
}
] |
[
{
"msg_contents": "Hi Team,\n\nI have cascade level streaming replication & WAL Apply setup .\n\n*Primary -> Secondary 1 -> Secondary 2,Secondary 3*\n*WAL Archive -> Secondary 4*\n\nI am expanding setup by adding another Replica from *Primary -> Secondary 5*\nfor logical streaming.\n\nDo I need to install and configure pglogical on all the cluster nodes ?\n\nOr just need between *Primary and Secondary 5 ?*\n\nWhat will happen if I miss pglogical configuration in any one of the\nSecondary node?\n\nVersion : >= 9.6\n\nPlease share your experience.\n\nThanks",
"msg_date": "Mon, 9 May 2022 21:35:28 -0700",
"msg_from": "Perumal Raj <perucinci@gmail.com>",
"msg_from_op": true,
"msg_subject": "pglogical setup in cascade replication architecture"
}
] |
[
{
"msg_contents": "A minor issue, and patch.\n\nREINDEX DATABASE currently requires you to write REINDEX DATABASE\ndbname, which makes this a little less usable than we might like.\n\nREINDEX on the catalog can cause deadlocks, which also makes REINDEX\nDATABASE not much use in practice, and is the reason there is no test\nfor REINDEX DATABASE. Another reason why it is a little less usable\nthan we might like.\n\nSeems we should do something about these historic issues in the name\nof product usability.\n\nAttached patch allows new syntax for REINDEX DATABASE, without needing\nto specify dbname. That version of the command skips catalog tables,\nas a way of avoiding the known deadlocks. Patch also adds a test.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 10 May 2022 10:13:13 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Hi,\n\nAm Dienstag, dem 10.05.2022 um 10:13 +0100 schrieb Simon Riggs:\n> A minor issue, and patch.\n> \n> REINDEX DATABASE currently requires you to write REINDEX DATABASE\n> dbname, which makes this a little less usable than we might like.\n> \n> REINDEX on the catalog can cause deadlocks, which also makes REINDEX\n> DATABASE not much use in practice, and is the reason there is no test\n> for REINDEX DATABASE. Another reason why it is a little less usable\n> than we might like.\n> \n> Seems we should do something about these historic issues in the name\n> of product usability.\n> \n\nWow, i just recently had a look into that code and talked with my\ncolleagues on how the current behavior annoyed me last week....and\nthere you are! This community rocks ;)\n\n> Attached patch allows new syntax for REINDEX DATABASE, without\n> needing\n> to specify dbname. That version of the command skips catalog tables,\n> as a way of avoiding the known deadlocks. Patch also adds a test.\n> \n\n+\t\t/* Unqualified REINDEX DATABASE will skip catalog\ntables */\n+\t\tif (objectKind == REINDEX_OBJECT_DATABASE &&\n+\t\t\tobjectName == NULL &&\n+\t\t\tIsSystemClass(relid, classtuple))\n+\t\t\tcontinue;\n\nHmm, shouldn't we just print a NOTICE or something like this in\naddition to this check to tell the user that we are *not* really\nreindexing all things (and probably give him a hint to use REINDEX\nSYSTEM to cover them)?\n\n\tThanks,\n\t\tBernd\n\n\n\n",
"msg_date": "Tue, 10 May 2022 14:29:31 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, May 10, 2022 at 2:43 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> A minor issue, and patch.\n>\n> REINDEX DATABASE currently requires you to write REINDEX DATABASE\n> dbname, which makes this a little less usable than we might like.\n>\n> REINDEX on the catalog can cause deadlocks, which also makes REINDEX\n> DATABASE not much use in practice, and is the reason there is no test\n> for REINDEX DATABASE. Another reason why it is a little less usable\n> than we might like.\n>\n> Seems we should do something about these historic issues in the name\n> of product usability.\n>\n> Attached patch allows new syntax for REINDEX DATABASE, without needing\n> to specify dbname. That version of the command skips catalog tables,\n> as a way of avoiding the known deadlocks. Patch also adds a test.\n>\n\n From the patch it looks like with the patch applied running REINDEX\nDATABASE is equivalent to running REINDEX DATABASE <current database>\nexcept reindexing the shared catalogs. Is that correct?\n\nThough the patch adds following change\n+ Indexes on shared system catalogs are also processed, unless the\n+ database name is omitted, in which case system catalog indexes\nare skipped.\n\nthe syntax looks unintuitive.\n\nI think REINDEX DATABASE reindexing the current database is a good\nusability improvement in itself. But skipping the shared catalogs\nneeds an explicit syntax. Not sure how feasible it is but something\nlike REINDEX DATABASE skip SHARED/SYSTEM.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 10 May 2022 19:17:35 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, 10 May 2022 at 14:47, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, May 10, 2022 at 2:43 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> >\n> > A minor issue, and patch.\n> >\n> > REINDEX DATABASE currently requires you to write REINDEX DATABASE\n> > dbname, which makes this a little less usable than we might like.\n> >\n> > REINDEX on the catalog can cause deadlocks, which also makes REINDEX\n> > DATABASE not much use in practice, and is the reason there is no test\n> > for REINDEX DATABASE. Another reason why it is a little less usable\n> > than we might like.\n> >\n> > Seems we should do something about these historic issues in the name\n> > of product usability.\n> >\n> > Attached patch allows new syntax for REINDEX DATABASE, without needing\n> > to specify dbname. That version of the command skips catalog tables,\n> > as a way of avoiding the known deadlocks. Patch also adds a test.\n> >\n>\n> From the patch it looks like with the patch applied running REINDEX\n> DATABASE is equivalent to running REINDEX DATABASE <current database>\n> except reindexing the shared catalogs. Is that correct?\n\nYes\n\n> Though the patch adds following change\n> + Indexes on shared system catalogs are also processed, unless the\n> + database name is omitted, in which case system catalog indexes\n> are skipped.\n>\n> the syntax looks unintuitive.\n>\n> I think REINDEX DATABASE reindexing the current database is a good\n> usability improvement in itself. But skipping the shared catalogs\n> needs an explicit syntax. 
Not sure how feasible it is but something\n> like REINDEX DATABASE skip SHARED/SYSTEM.\n\nThere are two commands:\n\nREINDEX DATABASE does everything except system catalogs\nREINDEX SYSTEM does system catalogs only\n\nSo taken together, the two commands seem intuitive to me.\n\nIt is designed like this because it is dangerous to REINDEX the system\ncatalogs because of potential deadlocks, so we want a way to avoid\nthat problem.\n\nPerhaps I can improve the docs more, will look.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 10 May 2022 15:00:11 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, May 10, 2022 at 7:30 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> On Tue, 10 May 2022 at 14:47, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, May 10, 2022 at 2:43 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > A minor issue, and patch.\n> > >\n> > > REINDEX DATABASE currently requires you to write REINDEX DATABASE\n> > > dbname, which makes this a little less usable than we might like.\n> > >\n> > > REINDEX on the catalog can cause deadlocks, which also makes REINDEX\n> > > DATABASE not much use in practice, and is the reason there is no test\n> > > for REINDEX DATABASE. Another reason why it is a little less usable\n> > > than we might like.\n> > >\n> > > Seems we should do something about these historic issues in the name\n> > > of product usability.\n> > >\n> > > Attached patch allows new syntax for REINDEX DATABASE, without needing\n> > > to specify dbname. That version of the command skips catalog tables,\n> > > as a way of avoiding the known deadlocks. Patch also adds a test.\n> > >\n> >\n> > From the patch it looks like with the patch applied running REINDEX\n> > DATABASE is equivalent to running REINDEX DATABASE <current database>\n> > except reindexing the shared catalogs. Is that correct?\n>\n> Yes\n>\n> > Though the patch adds following change\n> > + Indexes on shared system catalogs are also processed, unless the\n> > + database name is omitted, in which case system catalog indexes\n> > are skipped.\n> >\n> > the syntax looks unintuitive.\n> >\n> > I think REINDEX DATABASE reindexing the current database is a good\n> > usability improvement in itself. But skipping the shared catalogs\n> > needs an explicit syntax. 
Not sure how feasible it is but something\n> > like REINDEX DATABASE skip SHARED/SYSTEM.\n>\n> There are two commands:\n>\n> REINDEX DATABASE does every except system catalogs\n> REINDEX SYSTEM does system catalogs only\n>\n\nIIUC\n\nREINDEX DATABASE <database name> does system catalogs as well\nREINDEX DATABASE does everything except system catalogs\n\nThat's confusing and unintuitive.\n\nNot providing the database name leads to ignoring system catalogs. I won't\nexpect that from this syntax.\n\n\n> So taken together, the two commands seem intuitive to me.\n>\n> It is designed like this because it is dangerous to REINDEX the system\n> catalogs because of potential deadlocks, so we want a way to avoid\n> that problem.\n>\n\nIt's more clear if we add SKIP SYSTEM CATALOGS or some such explicit syntax.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Wed, 11 May 2022 09:54:17 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Wed, May 11, 2022 at 09:54:17AM +0530, Ashutosh Bapat wrote:\n> REINDEX DATABASE <database name> does system catalogs as well\n> REINDEX DATABASE does everything except system catalogs\n> \n> That's confusing and unintuitive.\n\nAgreed. Nobody is going to remember the difference. REINDEX's\nparsing grammar is designed to be extensible because we have the\nparenthesized flavor. Why don't you add an option there to skip the\ncatalogs, like a SKIP_CATALOG?\n\n> Not providing the database name leads to ignoring system catalogs. I won't\n> expect that from this syntax.\n\nI don't disagree with having a shortened grammar where the database\nname is not required because one cannot reindex a database different\nthan the one connected to, but changing a behavior based on such a\ngrammar difference is not a good user experience.\n--\nMichael",
"msg_date": "Wed, 11 May 2022 14:42:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Am Mittwoch, dem 11.05.2022 um 14:42 +0900 schrieb Michael Paquier:\n> Agreed. Nobody is going to remember the difference. REINDEX's\n> parsing grammar is designed to be extensible because we have the\n> parenthesized flavor. Why don't you add an option there to skip the\n> catalogs, like a SKIP_CATALOG?\n\n+1\n\nHaving an option is probably the best idea. Though we have REINDEX\nSYSTEM, so i throw SKIP_SYSTEM into the ring as an alternative. This\nwould be consistent with the meaning of both commands/options.\n\n\tThanks,\n\t\tBernd\n\n\n\n",
"msg_date": "Wed, 11 May 2022 10:00:56 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Wed, 11 May 2022 at 05:24, Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n\n>> It is designed like this because it is dangerous to REINDEX the system\n>> catalogs because of potential deadlocks, so we want a way to avoid\n>> that problem.\n>\n> It's more clear if we add SKIP SYSTEM CATALOGS or some such explicit syntax.\n\nClarity is not the issue. I am opposed to a default mode that does\nsomething bad and non-useful.\n\nIf you want to reindex the system catalogs then we already have REINDEX SYSTEM.\nSo REINDEX (SKIP_SYSTEM_CATALOGS OFF) DATABASE would do the same thing.\nBut you don't want to run either of them because of deadlocking.\n\nThe only action that makes sense is to reindex the database, skipping\nthe catalog tables.\n\nSo I'm proposing a command that has useful default behavior.\ni.e. REINDEX DATABASE is the same as REINDEX (SKIP_SYSTEM_CATALOGS ON) DATABASE.\n\nIf you make REINDEX DATABASE the same as REINDEX (SKIP_SYSTEM_CATALOGS\nOFF) DATABASE then it is just dangerous and annoying, i.e. a POLA\nviolation.\n\nThe point of this was a usability improvement, not just new syntax.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 11 May 2022 15:33:52 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHello\r\n\r\nThe patch applies and tests fine and I think this patch has good intentions to prevent the default behavior of REINDEX DATABASE to cause a deadlock. However, I am not in favor of simply omitting the database name after DATABASE clause because of consistency. Almost all other queries involving the DATABASE clause require database name to be given following after. For example, ALTER DATABASE [dbname]. \r\n\r\nBeing able to omit database name for REINDEX DATABASE seems inconsistent to me.\r\n\r\nThe documentation states that REINDEX DATABASE only works on the current database, but it still requires the user to provide a database name and require that it must match the current database. Not very useful option, isn’t it? But it is still required from the user to stay consistent with other DATABASE clauses.\r\n\r\nMaybe the best way is to keep the query clause as is (with the database name still required) and simply don’t let it reindex system catalog to prevent deadlock. At the end, give user a notification that system catalogs have not been reindexed, and tell them to use REINDEX SYSTEM instead.\r\n\r\nCary Huang\r\n-----------------\r\nHighGo Software Canada\r\nwww.highgo.ca",
"msg_date": "Fri, 27 May 2022 19:08:51 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Am Freitag, dem 27.05.2022 um 19:08 +0000 schrieb Cary Huang:\n\n\n[...]\n\n> The patch applies and tests fine and I think this patch has good\n> intentions to prevent the default behavior of REINDEX DATABASE to\n> cause a deadlock. However, I am not in favor of simply omitting the\n> database name after DATABASE clause because of consistency. Almost\n> all other queries involving the DATABASE clause require database name\n> to be given following after. For example, ALTER DATABASE [dbname]. \n> \n> Being able to omit database name for REINDEX DATABASE seems\n> inconsistent to me.\n> \n> The documentation states that REINDEX DATABASE only works on the\n> current database, but it still requires the user to provide a\n> database name and require that it must match the current database.\n> Not very useful option, isn’t it? But it is still required from the\n> user to stay consistent with other DATABASE clauses.\n> \n\nHmm, right, but you can see this from another perspective, too: For\nexample, ALTER DATABASE works by adjusting properties of other\ndatabases very well, SET TABLESPACE can be used when not connected to\nthe target database only, so you are required to specify its name in\nthat case.\nREINDEX DATABASE cannot reindex other databases than the one we're\nconnected to. Seen from that point, i currently can't see the logical\njustification to have the database name there, besides of \"yes, i\nreally meant that database i am connected to\" or consistency.\n\n> Maybe the best way is to keep the query clause as is (with the\n> database name still required) and simply don’t let it reindex system\n> catalog to prevent deadlock. At the end, give user a notification\n> that system catalogs have not been reindexed, and tell them to use\n> REINDEX SYSTEM instead.\n\n+1\n\n\n\n",
"msg_date": "Tue, 31 May 2022 10:51:39 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Am Dienstag, dem 10.05.2022 um 15:00 +0100 schrieb Simon Riggs:\n\n[...]\n\n> \n> > I think REINDEX DATABASE reindexing the current database is a good\n> > usability improvement in itself. But skipping the shared catalogs\n> > needs an explicity syntax. Not sure how feasible it is but\n> > something\n> > like REINDEX DATABASE skip SHARED/SYSTEM.\n> \n> There are two commands:\n> \n> REINDEX DATABASE does every except system catalogs\n> REINDEX SYSTEM does system catalogs only\n> \n> So taken together, the two commands seem intuitive to me.\n> \n> It is designed like this because it is dangerous to REINDEX the\n> system\n> catalogs because of potential deadlocks, so we want a way to avoid\n> that problem.\n> \n> Perhaps I can improve the docs more, will look.\n> \n\nAnd we already have a situation where this already happens with REINDEX\nDATABASE: if you use CONCURRENTLY, it skips system catalogs already and\nprints a warning. In both cases there are good technical reasons to\nskip catalog indexes and to change the workflow to use separate\ncommands.\n\n\tBernd\n\n\n\n",
"msg_date": "Tue, 31 May 2022 11:04:58 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, May 31, 2022 at 11:04:58AM +0200, Bernd Helmle wrote:\n> And we already have a situation where this already happens with REINDEX\n> DATABASE: if you use CONCURRENTLY, it skips system catalogs already and\n> prints a warning. In both cases there are good technical reasons to\n> skip catalog indexes and to change the workflow to use separate\n> commands.\n\nThe case with CONCURRENTLY is different though: the option will never\nwork on system catalogs so we have to skip them. Echoing with others\non this thread, I don't think that we should introduce a different\nbehavior on what's basically the same grammar. That's just going to\nlead to more confusion. So REINDEX DATABASE with or without a\ndatabase name appended to it should always mean to reindex the\ncatalogs on top of the existing relations.\n--\nMichael",
"msg_date": "Tue, 31 May 2022 21:09:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On 2022-May-31, Michael Paquier wrote:\n\n> The case with CONCURRENTLY is different though: the option will never\n> work on system catalogs so we have to skip them. Echoing with others\n> on this thread, I don't think that we should introduce a different\n> behavior on what's basically the same grammar. That's just going to\n> lead to more confusion. So REINDEX DATABASE with or without a\n> database name appended to it should always mean to reindex the\n> catalogs on top of the existing relations.\n\nI was thinking the opposite: REINDEX DATABASE with or without a database\nname should always process the user relations and skip system catalogs.\nIf the user wants to do both, then they can use REINDEX SYSTEM in\naddition.\n\nThe reason for doing it like this is that there is no way to process\nonly user tables and skip catalogs. So this is better for\ncomposability.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n",
"msg_date": "Tue, 31 May 2022 14:30:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, May 31, 2022 at 02:30:32PM +0200, Alvaro Herrera wrote:\n> I was thinking the opposite: REINDEX DATABASE with or without a database\n> name should always process the user relations and skip system catalogs.\n> If the user wants to do both, then they can use REINDEX SYSTEM in\n> addition.\n> \n> The reason for doing it like this is that there is no way to process\n> only user tables and skip catalogs. So this is better for\n> composability.\n\nNo objections from me to keep this distinction at the end, as long as\nthe database name in the command has no impact on the chosen\nbehavior. Could there be a point in having a REINDEX ALL though that\nwould process both the user relations and the catalogs, doing the same\nthing as REINDEX DATABASE today?\n\nBy the way, the patch had better avoid putting a global REINDEX\ncommand that would process everything. As far as I recall, we've\navoided such things on purpose because they are expensive, keeping\naround only cases that generate errors or skip all the relations.\nSo having that in a TAP test would be better, I assume, for\nisolation.\n--\nMichael",
"msg_date": "Thu, 2 Jun 2022 09:02:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Maybe the answer is to 1) add a parenthesized option REINDEX(SYSTEM) (to allow\nthe current behavior); and 2) make REINDEX DATABASE an alias which implies\n\"SYSTEM false\"; 3) prohibit REINDEX (SYSTEM true) SYSTEM, or consider removing\n\"REINDEX SYSTEM;\".\n\nThat avoids the opaque and surprising behavior that \"REINDEX DATABASE\" skips\nsystem stuff but \"REINDEX DATABASE foo\" doesn't, while allowing the old\nbehavior (disabled by default).\n\n\n",
"msg_date": "Mon, 27 Jun 2022 19:18:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Thu, 2 Jun 2022 at 01:02, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 31, 2022 at 02:30:32PM +0200, Alvaro Herrera wrote:\n> > I was thinking the opposite: REINDEX DATABASE with or without a database\n> > name should always process the user relations and skip system catalogs.\n> > If the user wants to do both, then they can use REINDEX SYSTEM in\n> > addition.\n> >\n> > The reason for doing it like this is that there is no way to process\n> > only user tables and skip catalogs. So this is better for\n> > composability.\n>\n> No objections from me to keep this distinction at the end, as long as\n> the database name in the command has no impact on the chosen\n> behavior.\n\nOK, that's clear. Will progress.\n\n> Could there be a point in having a REINDEX ALL though that\n> would process both the user relations and the catalogs, doing the same\n> thing as REINDEX DATABASE today?\n\nA key point is that REINDEX SYSTEM has problems, so should be avoided.\nHence, including both database and system together in a new command\nwould not be a great idea, at this time.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 28 Jun 2022 08:29:07 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, 28 Jun 2022 at 08:29, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 2 Jun 2022 at 01:02, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, May 31, 2022 at 02:30:32PM +0200, Alvaro Herrera wrote:\n> > > I was thinking the opposite: REINDEX DATABASE with or without a database\n> > > name should always process the user relations and skip system catalogs.\n> > > If the user wants to do both, then they can use REINDEX SYSTEM in\n> > > addition.\n> > >\n> > > The reason for doing it like this is that there is no way to process\n> > > only user tables and skip catalogs. So this is better for\n> > > composability.\n> >\n> > No objections from me to keep this distinction at the end, as long as\n> > the the database name in the command has no impact on the chosen\n> > behavior.\n>\n> OK, that's clear. Will progress.\n\nAttached patch is tested, documented and imho ready to be committed,\nso I will mark it so in CFapp.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 28 Jun 2022 11:02:25 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 11:02:25AM +0100, Simon Riggs wrote:\n> Attached patch is tested, documented and imho ready to be committed,\n> so I will mark it so in CFapp.\n\nThe behavior introduced by this patch should be reflected in\nreindexdb. See in particular reindex_one_database(), where a\nREINDEX_SYSTEM is enforced first on the catalogs for the\nnon-concurrent mode when running the reindex on a database.\n\n+-- unqualified reindex database\n+-- if you want to test REINDEX DATABASE, uncomment the following line,\n+-- but note that this adds about 0.5s to the regression tests and the\n+-- results are volatile when run in parallel to other tasks. Note also\n+-- that REINDEX SYSTEM is specifically not tested because it can deadlock.\n+-- REINDEX (VERBOSE) DATABASE;\n\nNo need to add that IMHO.\n\n REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [,\n ...] ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } [\n CONCURRENTLY ] <replaceable class=\"parameter\">name</replaceable>\n+REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [,\n ...] ) ] { DATABASE | SYSTEM } [ CONCURRENTLY ] [ <replaceable\n class=\"parameter\">name</replaceable> ]\n\nShouldn't you remove DATABASE and SYSTEM from the first line, keeping\nonly INDEX, TABLE and SCHEMA? 
The second line, with its optional\n\"name\" would cover the DATABASE and SYSTEM cases at 100%.\n\n- if (strcmp(objectName, get_database_name(objectOid)) != 0)\n+ if (objectName && strcmp(objectName, get_database_name(objectOid)) != 0)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"can only reindex the currently open database\")));\n if (!pg_database_ownercheck(objectOid, GetUserId()))\n aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,\n- objectName);\n+ get_database_name(objectOid));\n\nThis could call get_database_name() just once.\n\n+ * You might think it would be good to include catalogs,\n+ * but doing that can deadlock, so isn't much use in real world,\n+ * nor can we safely test that it even works.\n\nNot sure what you mean here exactly.\n--\nMichael",
"msg_date": "Wed, 29 Jun 2022 13:35:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Wed, 29 Jun 2022 at 05:35, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jun 28, 2022 at 11:02:25AM +0100, Simon Riggs wrote:\n> > Attached patch is tested, documented and imho ready to be committed,\n> > so I will mark it so in CFapp.\n\nThanks for the review Michael.\n\n> The behavior introduced by this patch should be reflected in\n> reindexdb. See in particular reindex_one_database(), where a\n> REINDEX_SYSTEM is enforced first on the catalogs for the\n> non-concurrent mode when running the reindex on a database.\n\nOriginally, I was trying to avoid changing prior behavior, but now\nthat we have agreed to do so, this makes sense.\n\nThat section of code has been removed, tests updated. No changes to\ndocs seem to be required.\n\n> +-- unqualified reindex database\n> +-- if you want to test REINDEX DATABASE, uncomment the following line,\n> +-- but note that this adds about 0.5s to the regression tests and the\n> +-- results are volatile when run in parallel to other tasks. Note also\n> +-- that REINDEX SYSTEM is specifically not tested because it can deadlock.\n> +-- REINDEX (VERBOSE) DATABASE;\n>\n> No need to add that IMHO.\n\nThat was more a comment to reviewer, but I think something should be\nsaid for later developers.\n\n> REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [,\n> ...] ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } [\n> CONCURRENTLY ] <replaceable class=\"parameter\">name</replaceable>\n> +REINDEX [ ( <replaceable class=\"parameter\">option</replaceable> [,\n> ...] ) ] { DATABASE | SYSTEM } [ CONCURRENTLY ] [ <replaceable\n> class=\"parameter\">name</replaceable> ]\n>\n> Shouldn't you remove DATABASE and SYSTEM from the first line, keeping\n> only INDEX. TABLE and SCHEMA? The second line, with its optional\n> \"name\" would cover the DATABASE and SYSTEM cases at 100%.\n\nI agree that your proposal is clearer. 
Done.\n\n> - if (strcmp(objectName, get_database_name(objectOid)) != 0)\n> + if (objectName && strcmp(objectName, get_database_name(objectOid)) != 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"can only reindex the currently open database\")));\n> if (!pg_database_ownercheck(objectOid, GetUserId()))\n> aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,\n> - objectName);\n> + get_database_name(objectOid));\n>\n> This could call get_database_name() just once.\n\nIt could, but I couldn't see any benefit in changing that for the code\nunder discussion.\n\nIf calling get_database_name() multiple times is an issue, I've added\na cache for that - another patch attached, if you think its worth it.\n\n> + * You might think it would be good to include catalogs,\n> + * but doing that can deadlock, so isn't much use in real world,\n> + * nor can we safely test that it even works.\n>\n> Not sure what you mean here exactly.\n\nREINDEX SYSTEM can deadlock, which is why we are avoiding it.\n\nThis was a comment to later developers as to why things are done that\nway. Feel free to update the wording or location, but something should\nbe mentioned to avoid later discussion.\n\nThanks for the review, new version attached.\n\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 29 Jun 2022 15:02:11 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> Thanks for the review, new version attached.\n\nThis is marked as Ready for Committer, but that seems unduly\noptimistic. The cfbot shows that it's failing on all platforms ---\nand not in the same way on each, suggesting there are multiple\nproblems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Jul 2022 17:41:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Sun, Jul 03, 2022 at 05:41:31PM -0400, Tom Lane wrote:\n> This is marked as Ready for Committer, but that seems unduly\n> optimistic.\n\nPlease note that patch authors should not switch a patch as RfC by\nthemselves. This is something that a reviewer should do.\n\n> The cfbot shows that it's failing on all platforms ---\n> and not in the same way on each, suggesting there are multiple\n> problems.\n\nA wild guess is that this comes from the patch that manipulates\nget_database_name(), something that there is no need for as long as\nthe routine is called once in ReindexMultipleTables().\n--\nMichael",
"msg_date": "Mon, 4 Jul 2022 15:59:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On 2022-Jun-29, Simon Riggs wrote:\n\n> > - if (strcmp(objectName, get_database_name(objectOid)) != 0)\n> > + if (objectName && strcmp(objectName, get_database_name(objectOid)) != 0)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"can only reindex the currently open database\")));\n> > if (!pg_database_ownercheck(objectOid, GetUserId()))\n> > aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,\n> > - objectName);\n> > + get_database_name(objectOid));\n> >\n> > This could call get_database_name() just once.\n> \n> It could, but I couldn't see any benefit in changing that for the code\n> under discussion.\n> \n> If calling get_database_name() multiple times is an issue, I've added\n> a cache for that - another patch attached, if you think its worth it.\n\nTBH I doubt that this is an issue: since we're throwing an error anyway,\nthe memory would be released, and error cases are not considered worth\nof performance optimization anyway.\n\nPutting that thought aside, if we were to think that this is an issue, I\ndon't think the cache as implemented here is a good idea, because then\ncaller is responsible for tracking whether to free or not the return\nvalue.\n\nI think that Michaël's idea could be implemented more easily by having a\nlocal variable that receives the return value from get_database_name.\nBut I think the coding as Simon had it was all right.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)\n\n\n",
"msg_date": "Mon, 4 Jul 2022 10:26:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Mon, Jul 04, 2022 at 03:59:55PM +0900, Michael Paquier wrote:\n> Please note that patch authors should not switch a patch as RfC by\n> themselves. This is something that a reviewer should do.\n\nThis patch has been marked as waiting for a review, however the CF bot\nis completely red:\nhttp://commitfest.cputube.org/simon-riggs.html\n\nCould you take care of those issues first?\n--\nMichael",
"msg_date": "Fri, 15 Jul 2022 12:44:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 04:44, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 04, 2022 at 03:59:55PM +0900, Michael Paquier wrote:\n> > Please note that patch authors should not switch a patch as RfC by\n> > themselves. This is something that a reviewer should do.\n>\n> http://commitfest.cputube.org/simon-riggs.html\n\nThanks for showing me that, it is very helpful.\n\n> This patch has been marked as waiting for a review, however the CF bot\n> is completely red:\n\nYes, it is failing, but so is current HEAD, with some kind of libpq\npipelining error.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 08:47:58 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On 2022-Jul-15, Simon Riggs wrote:\n\n> On Fri, 15 Jul 2022 at 04:44, Michael Paquier <michael@paquier.xyz> wrote:\n\n> > This patch has been marked as waiting for a review, however the CF bot\n> > is completely red:\n> \n> Yes, it is failing, but so is current HEAD, with some kind of libpq\n> pipelining error.\n\nHmm, is it? Where can I see that? The buildfarm looks OK ...\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers\n\n\n",
"msg_date": "Fri, 15 Jul 2022 10:12:12 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 09:12, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jul-15, Simon Riggs wrote:\n>\n> > On Fri, 15 Jul 2022 at 04:44, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > > This patch has been marked as waiting for a review, however the CF bot\n> > > is completely red:\n> >\n> > Yes, it is failing, but so is current HEAD, with some kind of libpq\n> > pipelining error.\n>\n> Hmm, is it? Where can I see that? The buildfarm looks OK ...\n\nI got this... so cleaning up and retesting.\n\n# Looks like you failed 2 tests of 20.\nt/001_libpq_pipeline.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n\nFailed 2/20 subtests\n\nTest Summary Report\n-------------------\nt/001_libpq_pipeline.pl (Wstat: 512 Tests: 20 Failed: 2)\n Failed tests: 9-10\n Non-zero exit status: 2\nFiles=1, Tests=20, 4 wallclock secs ( 0.02 usr 0.00 sys + 0.66 cusr\n 0.64 csys = 1.32 CPU)\nResult: FAIL\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 12:15:32 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 12:15, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, 15 Jul 2022 at 09:12, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2022-Jul-15, Simon Riggs wrote:\n> >\n> > > On Fri, 15 Jul 2022 at 04:44, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > > > This patch has been marked as waiting for a review, however the CF bot\n> > > > is completely red:\n> > >\n> > > Yes, it is failing, but so is current HEAD, with some kind of libpq\n> > > pipelining error.\n> >\n> > Hmm, is it? Where can I see that? The buildfarm looks OK ...\n>\n> I got this... so cleaning up and retesting.\n>\n> # Looks like you failed 2 tests of 20.\n> t/001_libpq_pipeline.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n>\n> Failed 2/20 subtests\n>\n> Test Summary Report\n> -------------------\n> t/001_libpq_pipeline.pl (Wstat: 512 Tests: 20 Failed: 2)\n> Failed tests: 9-10\n> Non-zero exit status: 2\n> Files=1, Tests=20, 4 wallclock secs ( 0.02 usr 0.00 sys + 0.66 cusr\n> 0.64 csys = 1.32 CPU)\n> Result: FAIL\n\nYeh, repeated failures on MacOS Catalina with HEAD.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 12:19:37 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Mon, 4 Jul 2022 at 08:00, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Jul 03, 2022 at 05:41:31PM -0400, Tom Lane wrote:\n> > This is marked as Ready for Committer, but that seems unduly\n> > optimistic.\n>\n> Please note that patch authors should not switch a patch as RfC by\n> themselves. This is something that a reviewer should do.\n>\n> > The cfbot shows that it's failing on all platforms ---\n> > and not in the same way on each, suggesting there are multiple\n> > problems.\n>\n> A wild guess is that this comes from the patch that manipulates\n> get_database_name(), something that there is no need for as long as\n> the routine is called once in ReindexMultipleTables().\n\nOK, let me repost the new patch and see if CFbot likes that better.\n\nThis includes Michael's other requested changes for reindexdb.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 15 Jul 2022 12:25:33 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On 2022-Jul-15, Simon Riggs wrote:\n\n> > Test Summary Report\n> > -------------------\n> > t/001_libpq_pipeline.pl (Wstat: 512 Tests: 20 Failed: 2)\n> > Failed tests: 9-10\n> > Non-zero exit status: 2\n> > Files=1, Tests=20, 4 wallclock secs ( 0.02 usr 0.00 sys + 0.66 cusr\n> > 0.64 csys = 1.32 CPU)\n> > Result: FAIL\n> \n> Yeh, repeated failures on MacOS Catalina with HEAD.\n\nCan you share the contents of src/test/modules/libpq_pipeline/tmp_check?\nSince these failures are not visible in the buildfarm (where we do have\n11.0 as well as some 10.X versions of macOS), this is surprising.\n\nAnyway, the errors shown for your patch in the cfbot are in the\npg_upgrade test, so I suggest to have a look at the logs for those\nanyway.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nY una voz del caos me habló y me dijo\n\"Sonríe y sé feliz, podría ser peor\".\nY sonreí. Y fui feliz.\nY fue peor.\n\n\n",
"msg_date": "Fri, 15 Jul 2022 13:29:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On 2022-Jul-15, Alvaro Herrera wrote:\n\n> > Yeh, repeated failures on MacOS Catalina with HEAD.\n> \n> Can you share the contents of src/test/modules/libpq_pipeline/tmp_check?\n> Since these failures are not visible in the buildfarm (where we do have\n> 11.0 as well as some 10.X versions of macOS), this is surprising.\n\nSo I got the log files, and the error is clear:\n\n: # Running: libpq_pipeline -r 700 -t /Users/sriggs/pg/pg-git/postgresql/src/test/modules/libpq_pipeline/tmp_check/traces/pipeline_idle.trace pipeline_idle port=57444 host=/var/folders/rd/kxv86w7567z9jk5qt7lhxfwr0000gn/T/Od6QFSH7TB dbname='postgres'\n: \n: pipeline idle...\n: NOTICE 1: message type 0x33 arrived from server while idle\n: \n: libpq_pipeline:1037: got 1 notice(s)\n: [12:17:00.181](0.016s) not ok 9 - libpq_pipeline pipeline_idle\n\nthen the trace test also fails, but only because it is truncated at the\npoint where the notice is reported; up to that point, the trace matches\ncorrectly.\n\nNot sure what to make of this. Maybe the fix in 054325c5eeb3 is not\nright, but I don't understand why it doesn't fail in the macOS machines\nin the buildfarm.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Escucha y olvidarás; ve y recordarás; haz y entenderás\" (Confucio)\n\n\n",
"msg_date": "Fri, 15 Jul 2022 13:58:56 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On 2022-Jul-15, Alvaro Herrera wrote:\n\n> Not sure what to make of this. Maybe the fix in 054325c5eeb3 is not\n> right, but I don't understand why it doesn't fail in the macOS machines\n> in the buildfarm.\n\nAh, one theory is that the libpq_pipeline program is getting linked to\nan installed version of libpq that doesn't contain the fixes. Maybe you\ncan do `ldd /path/to/libpq_pipeline` and see which copy of libpq.so it\nis picking up?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n",
"msg_date": "Fri, 15 Jul 2022 15:03:34 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Ah, one theory is that the libpq_pipeline program is getting linked to\n> an installed version of libpq that doesn't contain the fixes. Maybe you\n> can do `ldd /path/to/libpq_pipeline` and see which copy of libpq.so it\n> is picking up?\n\nThat's pronounced \"otool -L\" on macOS. But in any case, it's going\nto point at the installation directory. One of the moving parts here\nis that \"make check\" will try to override the rpath that otool tells\nyou about to make test programs use the libpq.dylib from the build tree.\nI say \"try\" because if you've got SIP enabled (see what \"csrutil status\"\ntells you), it will fail to do so and the installed libpq will be used.\nMaybe that's old.\n\nStandard recommendation on macOS with SIP on is to always do \"make\ninstall\" before \"make check\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jul 2022 10:03:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 15:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Ah, one theory is that the libpq_pipeline program is getting linked to\n> > an installed version of libpq that doesn't contain the fixes. Maybe you\n> > can do `ldd /path/to/libpq_pipeline` and see which copy of libpq.so it\n> > is picking up?\n>\n> That's pronounced \"otool -L\" on macOS. But in any case, it's going\n> to point at the installation directory. One of the moving parts here\n> is that \"make check\" will try to override the rpath that otool tells\n> you about to make test programs use the libpq.dylib from the build tree.\n> I say \"try\" because if you've got SIP enabled (see what \"csrutil status\"\n> tells you), it will fail to do so and the installed libpq will be used.\n> Maybe that's old.\n>\n> Standard recommendation on macOS with SIP on is to always do \"make\n> install\" before \"make check\".\n\nThanks, will investigate.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 18:20:16 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Fri, 15 Jul 2022 at 12:25, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Mon, 4 Jul 2022 at 08:00, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sun, Jul 03, 2022 at 05:41:31PM -0400, Tom Lane wrote:\n> > > This is marked as Ready for Committer, but that seems unduly\n> > > optimistic.\n> >\n> > Please note that patch authors should not switch a patch as RfC by\n> > themselves. This is something that a reviewer should do.\n> >\n> > > The cfbot shows that it's failing on all platforms ---\n> > > and not in the same way on each, suggesting there are multiple\n> > > problems.\n> >\n> > A wild guess is that this comes from the patch that manipulates\n> > get_database_name(), something that there is no need for as long as\n> > the routine is called once in ReindexMultipleTables().\n>\n> OK, let me repost the new patch and see if CFbot likes that better.\n>\n> This includes Michael's other requested changes for reindexdb.\n\nThat's fixed it on the CFbot. Over to you, Michael. Thanks.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 15 Jul 2022 18:21:22 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 06:21:22PM +0100, Simon Riggs wrote:\n> That's fixed it on the CFbot. Over to you, Michael. Thanks.\n\nSure. I have looked over that, and this looks fine overall. I have\nmade two changes though.\n\n if (objectKind == REINDEX_OBJECT_SYSTEM &&\n- !IsSystemClass(relid, classtuple))\n+ !IsCatalogRelationOid(relid))\n+ continue;\n+ else if (objectKind == REINDEX_OBJECT_DATABASE &&\n+ IsCatalogRelationOid(relid))\n\nThe patch originally relied on IsSystemClass() to decide if a relation\nis a catalog table or not. This is not wrong in itself because\nReindexMultipleTables() discards RELKIND_TOAST a couple of lines\nabove, but I think that we should switch to IsCatalogRelationOid() as\nthat's the line drawn to check for the catalog-ness of a relation.\n\nThe second thing is test coverage. Using a REINDEX DATABASE/SYSTEM\nwithin the main regression test suite is not a good idea, but we\nalready have those commands running in the reindexdb suite so I could\nnot resist expanding the test section to track and check relfilenode\nchanges through four relations for these cases:\n- Catalog index.\n- Catalog toast index.\n- User table index.\n- User toast index.\nThe relfilenodes of those relations are saved in a table and\ncross-checked with the contents of pg_class after each REINDEX, on\nSYSTEM or DATABASE. There are no new heavy commands, so it does not\nmake the test longer.\n\nWith all that, I finish with the attached. Does that look fine to\nyou?\n--\nMichael",
"msg_date": "Sun, 17 Jul 2022 15:19:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Sun, 17 Jul 2022 at 07:19, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jul 15, 2022 at 06:21:22PM +0100, Simon Riggs wrote:\n> > That's fixed it on the CFbot. Over to you, Michael. Thanks.\n>\n> Sure. I have looked over that, and this looks fine overall. I have\n> made two changes though.\n>\n> if (objectKind == REINDEX_OBJECT_SYSTEM &&\n> - !IsSystemClass(relid, classtuple))\n> + !IsCatalogRelationOid(relid))\n> + continue;\n> + else if (objectKind == REINDEX_OBJECT_DATABASE &&\n> + IsCatalogRelationOid(relid))\n>\n> The patch originally relied on IsSystemClass() to decide if a relation\n> is a catalog table or not. This is not wrong in itself because\n> ReindexMultipleTables() discards RELKIND_TOAST a couple of lines\n> above, but I think that we should switch to IsCatalogRelationOid() as\n> that's the line drawn to check for the catalog-ness of a relation.\n>\n> The second thing is test coverage. Using a REINDEX DATABASE/SYSTEM\n> within the main regression test suite is not a good idea, but we\n> already have those commands running in the reindexdb suite so I could\n> not resist expanding the test section to track and check relfilenode\n> changes through four relations for these cases:\n> - Catalog index.\n> - Catalog toast index.\n> - User table index.\n> - User toast index.\n> The relfilenodes of those relations are saved in a table and\n> cross-checked with the contents of pg_class after each REINDEX, on\n> SYSTEM or DATABASE. There are no new heavy commands, so it does not\n> make the test longer.\n>\n> With all that, I finish with the attached. Does that look fine to\n> you?\n\nSounds great, looks fine. Thanks for your review.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sun, 17 Jul 2022 10:58:26 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "Sorry, I meant to send this earlier..\n\nOn Sun, Jul 17, 2022 at 03:19:47PM +0900, Michael Paquier wrote:\n> The second thing is test coverage. Using a REINDEX DATABASE/SYSTEM\n\n> +my $catalog_toast_index = $node->safe_psql('postgres',\n> +\t\"SELECT indexrelid::regclass FROM pg_index WHERE indrelid = '$toast_table'::regclass;\"\n> +);\n> +\n> +# Set of SQL queries to cross-check the state of relfilenodes across\n> +# REINDEX operations. A set of relfilenodes is saved from the catalogs\n> +# and then compared with pg_class.\n> +$node->safe_psql('postgres',\n> +\t'CREATE TABLE toast_relfilenodes (relname regclass, relfilenode oid);');\n\nIt looks like you named the table \"toast_relfilenodes\", but then also store\nto it data for non-toast tables.\n\nIt's also a bit weird to call the column \"relname\" but use it to store the\n::regclass. You later need to cast the column to text, so you may as well\nstore it as text, either relname or oid::regclass.\n\nIt seems like cluster.sql does this more succinctly.\n\n-- Check that clustering sets new relfilenodes:\nCREATE TEMP TABLE old_cluster_info AS SELECT relname, level, relfilenode, relkind FROM pg_partition_tree('clstrpart'::regclass) AS tree JOIN pg_class c ON c.oid=tree.relid ;\nCLUSTER clstrpart USING clstrpart_idx;\nCREATE TEMP TABLE new_cluster_info AS SELECT relname, level, relfilenode, relkind FROM pg_partition_tree('clstrpart'::regclass) AS tree JOIN pg_class c ON c.oid=tree.relid ;\nSELECT relname, old.level, old.relkind, old.relfilenode = new.relfilenode FROM old_cluster_info AS old JOIN new_cluster_info AS new USING (relname) ORDER BY relname COLLATE \"C\";\n\n> +# Save the relfilenode of a set of toast indexes, one from the catalog\n> +# pg_constraint and one from the test table. 
This data is used for checks\n> +# after some of the REINDEX operations done below, checking if they are\n> +# changed.\n> +my $fetch_toast_relfilenodes = qq{SELECT c.oid::regclass, c.relfilenode\n> + FROM pg_class a\n> + JOIN pg_class b ON (a.oid = b.reltoastrelid)\n> + JOIN pg_index i on (a.oid = i.indrelid)\n> + JOIN pg_class c on (i.indexrelid = c.oid)\n> + WHERE b.oid IN ('pg_constraint'::regclass, 'test1'::regclass)};\n> +# Same for relfilenodes of normal indexes. This saves the relfilenode\n> +# from a catalog of pg_constraint, and the one from the test table.\n> +my $fetch_index_relfilenodes = qq{SELECT oid, relfilenode\n> + FROM pg_class\n> + WHERE relname IN ('pg_constraint_oid_index', 'test1x')};\n> +my $save_relfilenodes =\n> +\t\"INSERT INTO toast_relfilenodes $fetch_toast_relfilenodes;\"\n> + . \"INSERT INTO toast_relfilenodes $fetch_index_relfilenodes;\";\n> +\n> +# Query to compare a set of relfilenodes saved with the contents of pg_class.\n> +# Note that this does not join using OIDs, as CONCURRENTLY would change them\n> +# when reindexing. 
A filter is applied on the toast index names, even if this\n> +# does not make a difference between the catalog and normal ones, the ordering\n> +# based on the name is enough to ensure a fixed output.\n> +my $compare_relfilenodes =\n> + qq(SELECT regexp_replace(b.relname::text, '(pg_toast.pg_toast_)\\\\d{4,5}(_index)', '\\\\1<oid>\\\\2'),\n\nWhy {4,5} ?\n\n> + CASE WHEN a.relfilenode = b.relfilenode THEN 'relfilenode is unchanged'\n> + ELSE 'relfilenode has changed' END\n> + FROM toast_relfilenodes b\n> + JOIN pg_class a ON b.relname::text = a.oid::regclass::text\n> + ORDER BY b.relname::text);\n> +\n> +# Save the set of relfilenodes and compare them.\n> +$node->safe_psql('postgres', $save_relfilenodes);\n> +$node->issues_sql_like(\n> +\t[ 'reindexdb', 'postgres' ],\n> +\tqr/statement: REINDEX DATABASE postgres;/,\n> +\t'SQL REINDEX run');\n> +my $relnode_info = $node->safe_psql('postgres', $compare_relfilenodes);\n> +is( $relnode_info, qq(pg_constraint_oid_index|relfilenode is unchanged\n> +pg_toast.pg_toast_<oid>_index|relfilenode has changed\n> +pg_toast.pg_toast_<oid>_index|relfilenode is unchanged\n> +test1x|relfilenode has changed), 'relfilenode change after REINDEX DATABASE');\n> +\n> +# Re-save and run the second one.\n> +$node->safe_psql('postgres',\n> +\t\"TRUNCATE toast_relfilenodes; $save_relfilenodes\");\n> +$node->issues_sql_like(\n> +\t[ 'reindexdb', '-s', 'postgres' ],\n> +\tqr/statement: REINDEX SYSTEM postgres;/,\n> +\t'reindex system tables');\n> +$relnode_info = $node->safe_psql('postgres', $compare_relfilenodes);\n> +is( $relnode_info, qq(pg_constraint_oid_index|relfilenode has changed\n> +pg_toast.pg_toast_<oid>_index|relfilenode is unchanged\n> +pg_toast.pg_toast_<oid>_index|relfilenode has changed\n> +test1x|relfilenode is unchanged), 'relfilenode change after REINDEX SYSTEM');\n\n\n",
"msg_date": "Mon, 18 Jul 2022 21:26:53 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Sun, Jul 17, 2022 at 10:58:26AM +0100, Simon Riggs wrote:\n> Sounds great, looks fine. Thanks for your review.\n\nOk, cool. At the end, I have decided to split the tests and the main\npatch into two different commits, as each is useful on its own. Doing\nso also helps in seeing the difference of behavior when issuing a\nREINDEX DATABASE. Another thing that was itching me with the test is\nthat it was not possible to make the difference between the toast\nindex of the catalog and of the user table, so I have added the parent\ntable name as an extra thing stored in the table storing the\nrelfilenodes.\n--\nMichael",
"msg_date": "Tue, 19 Jul 2022 12:43:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 09:26:53PM -0500, Justin Pryzby wrote:\n> Sorry, I meant to send this earlier..\n\nNo problem.\n\n> It looks like you named the table \"toast_relfilenodes\", but then also store\n> to it data for non-toast tables.\n\nHow about naming that index_relfilenodes? One difference with what I\nposted previously and 5fb5b6 is the addition of an extra regclass that\nstores the parent table, for reference in the output.\n\n> It's also a bit weird to call the column \"relname\" but use it to store the\n> ::regclass. You later need to cast the column to text, so you may as well\n> store it as text, either relname or oid::regclass.\n\nI have used \"indname\" at the end.\n\n> It seems like cluster.sql does this more succinctly.\n\nExcept that this does not include the relfilenodes from the toast\nindexes, which is something I wanted to add a check for when it comes\nto both user tables and catalogs.\n\n> Why {4,5} ?\n\nLooks like a brain fade from here, while looking the relation names\nthis generated. This could just match with an integer.\n--\nMichael",
"msg_date": "Tue, 19 Jul 2022 13:13:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 01:13:34PM +0900, Michael Paquier wrote:\n> > It looks like you named the table \"toast_relfilenodes\", but then also store\n> > to it data for non-toast tables.\n> \n> How about naming that index_relfilenodes? One difference with what I\n> posted previously and 5fb5b6 is the addition of an extra regclass that\n> stores the parent table, for reference in the output.\n\nLooks fine\n\n> -\t'CREATE TABLE toast_relfilenodes (parent regclass, indname regclass, relfilenode oid);'\n> +\t'CREATE TABLE index_relfilenodes (parent regclass, indname regclass, relfilenode oid);'\n\n-- \nJusti\n\n\n",
"msg_date": "Wed, 20 Jul 2022 07:27:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 07:27:07AM -0500, Justin Pryzby wrote:\n> Looks fine\n\nThanks for checking, applied.\n--\nMichael",
"msg_date": "Thu, 21 Jul 2022 11:02:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Allowing REINDEX to have an optional name"
}
]
[
{
"msg_contents": "Hi All,\n\nCurrently, in postgres we have two different functions that are\nspecially used to open the WAL files for reading and writing purposes.\nThe first one is XLogFileOpen() that is just used to open the WAL file\nso that we can write WAL data in it. And then we have another function\nnamed XLogFileRead that does the same thing but is used when reading\nthe WAL files during recovery time. How about renaming the function\nXLogFileRead to XLogFileOpenForRead and the other one can be renamed\nto XLogFileOpenForWrite. I think it will make the function name more\nclear and increase the readability. At least XlogFileRead doesn't look\ngood to me, from the function name it actually appears like we are\ntrying to read a WAL file here but actually we are opening it so that\nit can be read by some other routine.\n\nAlso I see that we are passing emode to the XLogFileRead function\nwhich is not being used anywhere in the function, so can we remove it?\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Tue, 10 May 2022 18:15:55 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "How about renaming XLogFileRead() to XLogFileOpenForRead() and\n XLogFileOpen() to XLogFileOpenForWrite()?"
},
{
"msg_contents": "On Tue, May 10, 2022 at 6:16 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> Currently, in postgres we have two different functions that are\n> specially used to open the WAL files for reading and writing purposes.\n> The first one is XLogFileOpen() that is just used to open the WAL file\n> so that we can write WAL data in it. And then we have another function\n> named XLogFileRead that does the same thing but is used when reading\n> the WAL files during recovery time. How about renaming the function\n> XLogFileRead to XLogFileOpenForRead and the other one can be renamed\n> to XLogFileOpenForWrite. I think it will make the function name more\n> clear and increase the readability. At least XlogFileRead doesn't look\n> good to me, from the function name it actually appears like we are\n> trying to read a WAL file here but actually we are opening it so that\n> it can be read by some other routine.\n>\n> Also I see that we are passing emode to the XLogFileRead function\n> which is not being used anywhere in the function, so can we remove it?\n\nRenaming XLogFileOpen to XLogFileOpenForWrite while it uses O_RDWR,\nnot O_RDWR is sort of conflicting. Also, I'm concerned that\nXLogFileOpen is an extern function, the external modules using it\nmight break. XLogFileRead uses O_RDONLY and is a static function, so\nit might be okay to change the name, the only concern is that it\ncreates diff with the older versions as we usually don't backport\nrenaming functions or variables/code improvements/not-so-critical\nchanges.\n\nHaving said that, IMHO, the existing functions and their names look\nfine to me (developers can read the function/function comments to\nunderstand their usage though).\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 10 May 2022 18:46:36 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How about renaming XLogFileRead() to XLogFileOpenForRead() and\n XLogFileOpen() to XLogFileOpenForWrite()?"
},
{
"msg_contents": "On Tue, May 10, 2022 at 6:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 10, 2022 at 6:16 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi All,\n> >\n> > Currently, in postgres we have two different functions that are\n> > specially used to open the WAL files for reading and writing purposes.\n> > The first one is XLogFileOpen() that is just used to open the WAL file\n> > so that we can write WAL data in it. And then we have another function\n> > named XLogFileRead that does the same thing but is used when reading\n> > the WAL files during recovery time. How about renaming the function\n> > XLogFileRead to XLogFileOpenForRead and the other one can be renamed\n> > to XLogFileOpenForWrite. I think it will make the function name more\n> > clear and increase the readability. At least XlogFileRead doesn't look\n> > good to me, from the function name it actually appears like we are\n> > trying to read a WAL file here but actually we are opening it so that\n> > it can be read by some other routine.\n> >\n> > Also I see that we are passing emode to the XLogFileRead function\n> > which is not being used anywhere in the function, so can we remove it?\n>\n> Renaming XLogFileOpen to XLogFileOpenForWrite while it uses O_RDWR,\n> not O_RDWR is sort of conflicting. Also, I'm concerned that\n> XLogFileOpen is an extern function, the external modules using it\n> might break.\n\nWhy would the external modules open WAL files to perform write\noperations? AFAIU from the code, this function is specifically written\nto open WAL files during WAL write operation. So I don't see any\nproblem in renaming it to XLogFileOpenForWrite(). Infact as I said, it\ndoes increase the readability. And likewise, XLogFileRead can be\nrenamed to XLogFileOpenForRead which seems you are okay with.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n",
"msg_date": "Tue, 10 May 2022 19:49:45 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How about renaming XLogFileRead() to XLogFileOpenForRead() and\n XLogFileOpen() to XLogFileOpenForWrite()?"
}
]
[
{
"msg_contents": "I have completed the first draft of the PG 15 release notes and you can\nsee the results here:\n\n https://momjian.us/pgsql_docs/release-15.html\n\nThe feature count is similar to recent major releases:\n\n release-10 195\n release-11 185\n release-12 198\n release-13 183\n release-14 229\n--> release-15 186\n\nI assume there will be major adjustments in the next few weeks based on\nfeedback.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 11:44:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "First draft of the PG 15 release notes"
},
{
"msg_contents": "I'm guessing this should be \"trailing\", not training?\n\n> Prevent numeric literals from having non-numeric training characters (Peter Eisentraut)\n\n\n",
"msg_date": "Tue, 10 May 2022 16:58:50 +0100",
"msg_from": "Geoff Winkless <pgsqladmin@geoff.dj>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 04:58:50PM +0100, Geoff Winkless wrote:\n> I'm guessing this should be \"trailing\", not training?\n> \n> > Prevent numeric literals from having non-numeric training characters (Peter Eisentraut)\n\nThanks, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 12:51:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "Op 10-05-2022 om 17:44 schreef Bruce Momjian:\n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n> \n> https://momjian.us/pgsql_docs/release-15.html\n\ntypos:\n\n'accept empty array' should be\n'to accept empty array'\n\n'super users' should be\n'superusers'\n (several times)\n\n'The new options causes the column names'\n'The new options cause the column names'\n\n'were always effected.'\n'were always affected.'\n (I think...)\n\n'Previous the actual schema'\n'Previously the actual schema'\n\n'inforcement' should be\n'enforcement'\n (surely?)\n\n'server slide' should be\n'server side'\n\n'Add extensions to define their own' should be\n'Allow extensions to define their own'\n\n\nAnd one strangely unfinished sentence:\n'They also can only be'\n\n\nErik Rijkers\n\n\n",
"msg_date": "Tue, 10 May 2022 20:07:45 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "Thanks for writting this.\n\nComments from my first read:\n\n| This mode could cause server startup failure if the database server stopped abruptly while in this mode.\n\nThis sentence both begins and ends with \"this mode\".\n\n| This allows query hash operations to use double the amount of work_mem memory as other operations. \n\nShould it say \"node\" ?\n\n| Allow tsvector_delete_arr() and tsvector_setweight_by_filter() accept empty array elements (Jean-Christophe Arnu) \n\nTO accept\nI don't think this should be in the \"compatibility\" section?\n\n| This accepts numeric formats like \".1\" and \"1.\", and disallow trailing junk after numeric literals, like \"1.type()\". \n\ndisallows with an ess\n\n| This will cause setseed() followed by random() to return a different value on older servers. \n\n*than* older servers\n\n| The Postgres default has always been to treat NULL indexed values as distinct, but this can now be changed by creating constraints and indexes using UNIQUE NULLS NOT DISTINCT. \n\nshould not say default, since it wasn't previously configurable.\n\"The previous behavior was ...\"\n\n| Have extended statistics track statistics for a table's children separately (Tomas Vondra, Justin Pryzby)\n| Regular statistics already tracked child and non-child statistics separately. \n\n\"separately\" seems vague.  Currently in v10-v13, extended stats are collected\nfor partitioned tables, and collected for the \"SELECT FROM ONLY\"/parent case\nfor inheritance parents.  This changes to also collect stats for the \"SELECT\nFROM tbl*\" case. See also: 20220204214103.GT23027@telsasoft.com.\n\n| Allow members of the pg_checkpointer predefined role to run the CHECKPOINT command (Jeff Davis)\n| Previously these views could only be run by super users.\n| Allow members of the pg_read_all_stats predefined role to access the views pg_backend_memory_contexts and pg_shmem_allocations (Bharath Rupireddy)\n| Previously these views could only be run by super users. \n\ncheckpoint is not a view (and views cannot be run)\n\n| Previously runtime-computed values data_checksums, wal_segment_size, and data_directory_mode would report values that would not be accurate on the running server. They also can only be \n\nbe what ?\n\n| Add server variable allow_in_place_tablespaces for tablespace testing (Thomas Munro) \n\nThis is a developer setting, so doesn't need to be mentioned ?\n\n| Add function pg_settings_get_flags() to get the flags of server-side variables (Justin Pryzby) \n\nIMO this is the same, but I think maybe Michael things about it differently...\n\n| Allow WAL full page writes to use LZ4 and ZSTD compression (Andrey Borodin, Justin Pryzby) \n| Add support for LZ4 and ZSTD compression of server-side base backups (Jeevan Ladhe, Robert Haas) \n| Allow pg_basebackup to decompress LZ4 and ZSTD compressed server-side base backups, and LZ4 and ZSTD compress output files (Dipesh Pandit, Jeevan Ladhe) \n| Add configure option --with-zstd to enable ZSTD build (Jeevan Ladhe, Robert Haas, Michael Paquier) \n\nMaybe these should say \"Zstandard\" ?  See 586955dddecc95e0003262a3954ae83b68ce0372.\n\n| The new options causes the column names to be output, and optionally verified on input. \n\noption\n\n| Previous the actual schema name was used. \n\nPreviously\n\n| When EXPLAIN references the temporary object schema, refer to it as \"pg_temp\" (Amul Sul) \n| When specifying fractional interval values in units greater than months, round to the nearest month (Bruce Momjian)\n| Limit support of psql to servers running PostgreSQL 9.2 and later (Tom Lane) \n| Limit support of pg_dump and pg_dumpall to servers running PostgreSQL 9.2 and later (Tom Lane) \n| Limit support of pg_upgrade to old servers running PostgreSQL 9.2 and later (Tom Lane) \n| Remove server support for old BASE_BACKUP command syntax and base backup protocol (Robert Haas) \n\nDo these need to be in the \"compatibility\" section ?\n\n| Fix inforcement of PL/pgSQL variable CONSTANT markings (Tom Lane) \n\nenforcement\n\n| Allow IP address matching against a server's certificate Subject Alternative Name (Jacob Champion) \n\nShould say \"server certificate's\" ?\n\n| Allow libpq's SSL private to be owned by the root user (David Steele) \n\nprivate *key*\n\n| Have psql output all output if multiple queries are passed to the server at once (Fabien Coelho) \n\nall *results* ?\n\n| This can be disabled setting SHOW_ALL_RESULTS. \n\ndisabled *by* setting\n\n| Allow pg_basebackup's --compress option to control the compression method (Michael Paquier, Robert Haas) \n\nShould probably say \"compression method and options\"\n\n| Allow pg_basebackup to decompress LZ4 and ZSTD compressed server-side base backups, and LZ4 and ZSTD compress output files (Dipesh Pandit, Jeevan Ladhe)\n| Allow pg_basebackup to compress on the server slide and decompress on the client side before storage (Dipesh Pandit) \n\nMaybe these should be combined into one entry ?\n\n| Add the LZ4 compression method to pg_receivewal (Georgios Kokolatos)\n| This is enabled via --compression-method=lz4 and requires binaries to be built using --with-lz4. \n| Redesign pg_receivewal's compression options (Georgios Kokolatos)\n| The new --compression-method option controls the type of compression, rather than just relying on --compress. \n\nIt's --compress since 042a923ad.\n\n| Previously, pg_receivewal would start based on the WAL file stored in the local archive directory, or at the sending server's current WAL flush location. With this change, if the sending server is running Postgres 15 or later, the local archive directory is empty, and a replication slot is specified, the replication slots restart point will be used. \n\nslot's restart point (with a >>'<<)\n\n| Add dump/restore option --no-table-access-method to force restore to use only the default table access method (Justin Pryzby) \n\nremove \"only\" ?\n\n| This is for portability in restoring from systems using non-default table access methods. \n\nI would remove part about \"portability\".  The use-case I see for this is\nrestoring something to a different table AM (not just heapam), in the same way\nas is possible for tablespaces:\nPGOPTIONS='-c default-table-access-method=foo' pg_restore --no-table-am ./dump\n\n| Previously only the first invalid connection setting database was reported. \n\n\"only the first database with an invalid connection setting...\"\n\n| Add new protocol message TARGET to specific a new COPY method to be for base backups (Robert Haas) \n\nspecify\n\n| Automatically export server variables using PGDLLIMPORT on Windows (Robert Haas) \n\nI don't think it's \"automatic\" ?\n\n| Allow informational escape sequences to be used in postgres_fdw's application name (Hayato Kuroda, Fujii Masao) \n\nI don't think this should be a separate entry\n\n| This is enabled with the \"parallel_commit\" postgres_fdw option. \n\nIt's an option to the SQL \"SERVER\" command.\n\n\n",
"msg_date": "Tue, 10 May 2022 13:09:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 08:07:45PM +0200, Erik Rijkers wrote:\n> And one strangely unfinished sentence:\n> 'They also can only be'\n\nI suspect this meant to highlight that \"postgres -C\" with runtime-computed\nGUCs doesn't work if the server is running.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 10 May 2022 12:44:56 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "\nAgreed on all of these, and URL contents updated. Thanks.\n\n---------------------------------------------------------------------------\n\nOn Tue, May 10, 2022 at 08:07:45PM +0200, Erik Rijkers wrote:\n> Op 10-05-2022 om 17:44 schreef Bruce Momjian:\n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here:\n> > \n> > https://momjian.us/pgsql_docs/release-15.html\n> \n> typos:\n> \n> 'accept empty array' should be\n> 'to accept empty array'\n> \n> 'super users' should be\n> 'superusers'\n> (several times)\n> \n> 'The new options causes the column names'\n> 'The new options cause the column names'\n> \n> 'were always effected.'\n> 'were always affected.'\n> (I think...)\n> \n> 'Previous the actual schema'\n> 'Previously the actual schema'\n> \n> 'inforcement' should be\n> 'enforcement'\n> (surely?)\n> \n> 'server slide' should be\n> 'server side'\n> \n> 'Add extensions to define their own' should be\n> 'Allow extensions to define their own'\n> \n> \n> And one strangely unfinished sentence:\n> 'They also can only be'\n> \n> \n> Erik Rijkers\n> \n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 16:10:09 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 5/10/22 11:44 AM, Bruce Momjian wrote:\r\n> I have completed the first draft of the PG 15 release notes and you can\r\n> see the results here:\r\n> \r\n> https://momjian.us/pgsql_docs/release-15.html\r\n\r\nThanks for pulling this together.\r\n\r\n+ Allow logical replication to transfer sequence changes\r\n\r\nI believe this was reverted in 2c7ea57e5, unless some other parts of \r\nthis work made it in.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 10 May 2022 16:17:59 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 01:09:35PM -0500, Justin Pryzby wrote:\n> Thanks for writting this.\n> \n> Comments from my first read:\n> \n> | This mode could cause server startup failure if the database server stopped abruptly while in this mode.\n> \n> This sentence both begins and ends with \"this mode\".\n\nAgreed, I rewrote this.\n\n> | This allows query hash operations to use double the amount of work_mem memory as other operations. \n> \n> Should it say \"node\" ?\n\nUh, I think users think of things like operations, e.g. sort operation\nvs sort node.\n\n> | Allow tsvector_delete_arr() and tsvector_setweight_by_filter() accept empty array elements (Jean-Christophe Arnu) \n> \n> TO accept\n> I don't think this should be in the \"compatibility\" section?\n\nYes, moved to Function.\n\n> | This accepts numeric formats like \".1\" and \"1.\", and disallow trailing junk after numeric literals, like \"1.type()\". \n> \n> disallows with an ess\n\nFixed.\n\n> | This will cause setseed() followed by random() to return a different value on older servers. \n> \n> *than* older servers\n\nFixed.\n\n> | The Postgres default has always been to treat NULL indexed values as distinct, but this can now be changed by creating constraints and indexes using UNIQUE NULLS NOT DISTINCT. \n> \n> should not say default, since it wasn't previously configurable.\n> \"The previous behavior was ...\"\n\nAgreed, reworded.\n\n> | Have extended statistics track statistics for a table's children separately (Tomas Vondra, Justin Pryzby)\n> | Regular statistics already tracked child and non-child statistics separately. \n> \n> \"separately\" seems vague.  Currently in v10-v13, extended stats are collected\n> for partitioned tables, and collected for the \"SELECT FROM ONLY\"/parent case\n> for inheritance parents.  This changes to also collect stats for the \"SELECT\n> FROM tbl*\" case. See also: 20220204214103.GT23027@telsasoft.com.\n\nAgreed, reworded. Can you check you like my new wording?\n\n> | Allow members of the pg_checkpointer predefined role to run the CHECKPOINT command (Jeff Davis)\n> | Previously these views could only be run by super users.\n> | Allow members of the pg_read_all_stats predefined role to access the views pg_backend_memory_contexts and pg_shmem_allocations (Bharath Rupireddy)\n> | Previously these views could only be run by super users. \n> \n> checkpoint is not a view (and views cannot be run)\n\nFixed, was copy/paste error.\n\n> | Previously runtime-computed values data_checksums, wal_segment_size, and data_directory_mode would report values that would not be accurate on the running server. They also can only be \n> \n> be what ?\n\nRemoved.\n\n> | Add server variable allow_in_place_tablespaces for tablespace testing (Thomas Munro) \n> \n> This is a developer setting, so doesn't need to be mentioned ?\n\nMoved to Source Code.\n\n> | Add function pg_settings_get_flags() to get the flags of server-side variables (Justin Pryzby) \n> \n> IMO this is the same, but I think maybe Michael things about it differently...\n\nUh, I thought it might hvae user value as well as developer.\n\n> | Allow WAL full page writes to use LZ4 and ZSTD compression (Andrey Borodin, Justin Pryzby) \n> | Add support for LZ4 and ZSTD compression of server-side base backups (Jeevan Ladhe, Robert Haas) \n> | Allow pg_basebackup to decompress LZ4 and ZSTD compressed server-side base backups, and LZ4 and ZSTD compress output files (Dipesh Pandit, Jeevan Ladhe) \n> | Add configure option --with-zstd to enable ZSTD build (Jeevan Ladhe, Robert Haas, Michael Paquier) \n> \n> Maybe these should say \"Zstandard\" ?  See 586955dddecc95e0003262a3954ae83b68ce0372.\n\nI wasn't aware that ZSTD stood for that, so updated.\n\n> | The new options causes the column names to be output, and optionally verified on input. \n> \n> option\n\nFixed.\n> \n> | Previous the actual schema name was used. \n> \n> Previously\n\nFixed.\n\n> | When EXPLAIN references the temporary object schema, refer to it as \"pg_temp\" (Amul Sul) \n> | When specifying fractional interval values in units greater than months, round to the nearest month (Bruce Momjian)\n> | Limit support of psql to servers running PostgreSQL 9.2 and later (Tom Lane) \n> | Limit support of pg_dump and pg_dumpall to servers running PostgreSQL 9.2 and later (Tom Lane) \n> | Limit support of pg_upgrade to old servers running PostgreSQL 9.2 and later (Tom Lane) \n> | Remove server support for old BASE_BACKUP command syntax and base backup protocol (Robert Haas) \n> \n> Do these need to be in the \"compatibility\" section ?\n\nUh, I think of compatibility as breakage, while removing support for\nsomething doesn't seem like breakage.  The protocol removal of\nBASE_BACKUP only relates to people writing tools, I thought, so no\nbreakage for non-internals users.  I didn't think the fractional\ninterval change would be a breakage, though maybe it is.  I didn't think\nEXPLAIN changes were user-parsed, so no breakage?\n\n> | Fix inforcement of PL/pgSQL variable CONSTANT markings (Tom Lane) \n> \n> enforcement\n\nFixed.\n\n> | Allow IP address matching against a server's certificate Subject Alternative Name (Jacob Champion) \n> \n> Should say \"server certificate's\" ?\n\nAgreed.\n\n> | Allow libpq's SSL private to be owned by the root user (David Steele) \n> \n> private *key*\n\nI changed it to \"private key file\".\n\n> | Have psql output all output if multiple queries are passed to the server at once (Fabien Coelho) \n> \n> all *results* ?\n\nYes, fixed.\n\n> | This can be disabled setting SHOW_ALL_RESULTS. \n> \n> disabled *by* setting\n\nAgreed, fixed.\n\n> | Allow pg_basebackup's --compress option to control the compression method (Michael Paquier, Robert Haas) \n> \n> Should probably say \"compression method and options\"\n\nGood point, that feature moved around during the development cycle.\n\n> | Allow pg_basebackup to decompress LZ4 and ZSTD compressed server-side base backups, and LZ4 and ZSTD compress output files (Dipesh Pandit, Jeevan Ladhe)\n> | Allow pg_basebackup to compress on the server slide and decompress on the client side before storage (Dipesh Pandit) \n> \n> Maybe these should be combined into one entry ?\n\nUh, I think it applies to gzip as well so they can't be combined, and\nthey seem to do different things.\n\n> | Add the LZ4 compression method to pg_receivewal (Georgios Kokolatos)\n> | This is enabled via --compression-method=lz4 and requires binaries to be built using --with-lz4. \n| Redesign pg_receivewal's compression options (Georgios Kokolatos)\n| The new --compression-method option controls the type of compression, rather than just relying on --compress. \n> \n> It's --compress since 042a923ad.\n\nYep, fixed.\n\n> | Previously, pg_receivewal would start based on the WAL file stored in the local archive directory, or at the sending server's current WAL flush location. With this change, if the sending server is running Postgres 15 or later, the local archive directory is empty, and a replication slot is specified, the replication slots restart point will be used. \n> \n> slot's restart point (with a >>'<<)\n\nFixed.\n\n> | Add dump/restore option --no-table-access-method to force restore to use only the default table access method (Justin Pryzby) \n> \n> remove \"only\" ?\n\nI changed it to \"to only use the default\" since I think that is the\npoint --- it doesn't use anything but the default.\n\n> | This is for portability in restoring from systems using non-default table access methods. \n> \n> I would remove part about \"portability\".  The use-case I see for this is\n> restoring something to a different table AM (not just heapam), in the same way\n> as is possible for tablespaces:\n> PGOPTIONS='-c default-table-access-method=foo' pg_restore --no-table-am ./dump\n\nI removed the portability sentence.\n\n> | Previously only the first invalid connection setting database was reported. \n> \n> \"only the first database with an invalid connection setting...\"\n\nYes, reworded.\n\n> | Add new protocol message TARGET to specific a new COPY method to be for base backups (Robert Haas) \n> \n> specify\n\nFixed.\n\n> | Automatically export server variables using PGDLLIMPORT on Windows (Robert Haas) \n> \n> I don't think it's \"automatic\" ?\n\nYes, reworded.\n\n> | Allow informational escape sequences to be used in postgres_fdw's application name (Hayato Kuroda, Fujii Masao) \n> \n> I don't think this should be a separate entry\n\nUh, the entry above is about per-connection application name, while this\nis about escapes --- seems different to me, and hard to combine.\n\n> | This is enabled with the \"parallel_commit\" postgres_fdw option. \n> \n> It's an option to the SQL \"SERVER\" command.\n\nYes, reworded. URL contents updated:\n\n\thttps://momjian.us/pgsql_docs/release-15.html\n\nCan you verify you like the new contents please? Thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 16:18:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 12:44:56PM -0700, Nathan Bossart wrote:\n> On Tue, May 10, 2022 at 08:07:45PM +0200, Erik Rijkers wrote:\n> > And one strangely unfinished sentence:\n> > 'They also can only be'\n> \n> I suspect this meant to highlight that \"postgres -C\" with runtime-computed\n> GUCs doesn't work if the server is running.\n\nYes, you are correct --- wording updated.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 16:31:40 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 04:17:59PM -0400, Jonathan Katz wrote:\n> On 5/10/22 11:44 AM, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here:\n> > \n> > https://momjian.us/pgsql_docs/release-15.html\n> \n> Thanks for pulling this together.\n> \n> + Allow logical replication to transfer sequence changes\n> \n> I believe this was reverted in 2c7ea57e5, unless some other parts of this\n> work made it in.\n\nYes, sorry, I missed that. Oddly, the unlogged sequence patch was\nretained, even though there is no value for it on the primary. I\nremoved the sentence that mentioned that benefit from the release notes\nsince it doesn't apply to PG 15 anymore.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 16:32:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 04:18:10PM -0400, Bruce Momjian wrote:\n> > \"separately\" seems vague.  Currently in v10-v13, extended stats are collected\n> > for partitioned tables, and collected for the \"SELECT FROM ONLY\"/parent case\n> > for inheritance parents.  This changes to also collect stats for the \"SELECT\n> > FROM tbl*\" case. See also: 20220204214103.GT23027@telsasoft.com.\n> \n> Agreed, reworded. Can you check you like my new wording?\n\nNow it says:\n| Allow extended statistics to record statistics for a parent with all it children (Tomas Vondra, Justin Pryzby) \n\nit should say \"its\" children.\n\n> > | Add function pg_settings_get_flags() to get the flags of server-side variables (Justin Pryzby) \n> > \n> > IMO this is the same, but I think maybe Michael things about it differently...\n> \n> Uh, I thought it might hvae user value as well as developer.\n\nThe list of flags it includes is defined as \"the flags we needed to deprecate\n./check_guc\", but it could conceivably be useful to someone else... But it\nseems more like allow_in_place_tablespaces; it could go into the \"source code\"\nsection, or be removed.\n\n> > | When EXPLAIN references the temporary object schema, refer to it as \"pg_temp\" (Amul Sul) \n> > | When specifying fractional interval values in units greater than months, round to the nearest month (Bruce Momjian)\n> > | Limit support of psql to servers running PostgreSQL 9.2 and later (Tom Lane) \n> > | Limit support of pg_dump and pg_dumpall to servers running PostgreSQL 9.2 and later (Tom Lane) \n> > | Limit support of pg_upgrade to old servers running PostgreSQL 9.2 and later (Tom Lane) \n> > | Remove server support for old BASE_BACKUP command syntax and base backup protocol (Robert Haas) \n> > \n> > Do these need to be in the \"compatibility\" section ?\n> \n> Uh, I think of compatibility as breakage, while removing support for\n> something doesn't seem like breakage.\n\nI think removing support which breaks a user-facing behavior is presumptively a\ncompatibility issue.\n\n> I didn't think EXPLAIN changes were user-parsed, so no breakage?\n\nWhy would we have explain(format json/xml) if it wasn't meant to be parsed ?\nAt one point I was parsing its xml.\n\nI'll let other's comment about the rest of the list.\n\n> > | Automatically export server variables using PGDLLIMPORT on Windows (Robert Haas) \n> > \n> > I don't think it's \"automatic\" ?\n> \n> Yes, reworded.\n\nMaybe it's a tiny bit better to say:\n| Export all server variables on Windows using PGDLLIMPORT (Robert Haas) \n\n(Otherwise, \"all server variables using PGDLLIMPORT\" could sound like only\nthose \"server variables [which were] using PGDLLIMPORT\" were exported).\n\n> > | Allow informational escape sequences to be used in postgres_fdw's application name (Hayato Kuroda, Fujii Masao) \n> > \n> > I don't think this should be a separate entry\n> \n> Uh, the entry above is about per-connection application name, while this\n> is about escapes --- seems different to me, and hard to combine.\n\n449ab635052 postgres_fdw: Allow application_name of remote connection to be set via GUC.\n6e0cb3dec10 postgres_fdw: Allow postgres_fdw.application_name to include escape sequences.\n94c49d53402 postgres_fdw: Make postgres_fdw.application_name support more escape sequences.\n\nYou have one entry for 449a, and one entry where you've combined 6e0c and 94c4.\n\nMy point is that the 2nd two commits changed the behavior of the first commit,\nand I don't see why an end-user would want to know about the intermediate\nbehavior from the middle of the development cycle when escape sequences weren't\nexpanded. So I don't know why they'd be listed separately.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 10 May 2022 16:02:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n> \n> https://momjian.us/pgsql_docs/release-15.html\n\n> Allow pgbench to retry after serialization and deadlock failures (Yugo Nagata, Marina Polyakova)\n\nThis is in the \"Additional Modules\" section. I think this should be in\nthe \"Client Applications\" section because pgbench lives in bin\ndirectory, not in contrib directory. Actually, pgbench was in the\n\"Client Applications\" section in the PG 14 release note.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 11 May 2022 06:36:12 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 04:02:35PM -0500, Justin Pryzby wrote:\n> On Tue, May 10, 2022 at 04:18:10PM -0400, Bruce Momjian wrote:\n> > > \"separately\" seems vague.  Currently in v10-v13, extended stats are collected\n> > > for partitioned tables, and collected for the \"SELECT FROM ONLY\"/parent case\n> > > for inheritance parents.  This changes to also collect stats for the \"SELECT\n> > > FROM tbl*\" case. See also: 20220204214103.GT23027@telsasoft.com.\n> > \n> > Agreed, reworded. Can you check you like my new wording?\n> \n> Now it says:\n> | Allow extended statistics to record statistics for a parent with all it children (Tomas Vondra, Justin Pryzby) \n> \n> it should say \"its\" children.\n\nFixed.\n\n> > > | Add function pg_settings_get_flags() to get the flags of server-side variables (Justin Pryzby) \n> > > \n> > > IMO this is the same, but I think maybe Michael things about it differently...\n> > \n> > Uh, I thought it might hvae user value as well as developer.\n> \n> The list of flags it includes is defined as \"the flags we needed to deprecate\n> ./check_guc\", but it could conceivably be useful to someone else... But it\n> seems more like allow_in_place_tablespaces; it could go into the \"source code\"\n> section, or be removed.\n\nOkay, I moved into source code.\n\n> > > | When EXPLAIN references the temporary object schema, refer to it as \"pg_temp\" (Amul Sul) \n> > > | When specifying fractional interval values in units greater than months, round to the nearest month (Bruce Momjian)\n> > > | Limit support of psql to servers running PostgreSQL 9.2 and later (Tom Lane) \n> > > | Limit support of pg_dump and pg_dumpall to servers running PostgreSQL 9.2 and later (Tom Lane) \n> > > | Limit support of pg_upgrade to old servers running PostgreSQL 9.2 and later (Tom Lane) \n> > > | Remove server support for old BASE_BACKUP command syntax and base backup protocol (Robert Haas) \n> > > \n> > > Do these need to be in the \"compatibility\" section ?\n> > \n> > Uh, I think of compatibility as breakage, while removing support for\n> > something doesn't seem like breakage.\n> \n> I think removing support which breaks a user-facing behavior is presumptively a\n> compatibility issue.\n\nI moved the EXPLAIN and fractional interval items to the compatibility\nsection.  I didn't change the psql and pg_dump items since they are\nalready in their own sections and are clearly support removal rather\nthan direct changes in behavior that would be expected. However, if\nsomeone else feels they should be moved, I will move them, so someone\nplease reply if you feel that way.\n\n\n> > I didn't think EXPLAIN changes were user-parsed, so no breakage?\n> \n> Why would we have explain(format json/xml) if it wasn't meant to be parsed ?\n> At one point I was parsing its xml.\n> \n> I'll let other's comment about the rest of the list.\n\nGood point on the formatted EXPLAIN output being affected, which is why\nmoving it does make sense.\n\n> > > | Automatically export server variables using PGDLLIMPORT on Windows (Robert Haas) \n> > > \n> > > I don't think it's \"automatic\" ?\n> > \n> > Yes, reworded.\n> \n> Maybe it's a tiny bit better to say:\n> | Export all server variables on Windows using PGDLLIMPORT (Robert Haas) \n> \n> (Otherwise, \"all server variables using PGDLLIMPORT\" could sound like only\n> those \"server variables [which were] using PGDLLIMPORT\" were exported).\n\nAh, yes, I see the improvement, done.\n\n> > > | Allow informational escape sequences to be used in postgres_fdw's application name (Hayato Kuroda, Fujii Masao) \n> > > \n> > > I don't think this should be a separate entry\n> > \n> > Uh, the entry above is about per-connection application name, while this\n> > is about escapes --- seems different to me, and hard to combine.\n> \n> 449ab635052 postgres_fdw: Allow application_name of remote connection to be set via GUC.\n> 6e0cb3dec10 postgres_fdw: Allow postgres_fdw.application_name to include escape sequences.\n> 94c49d53402 postgres_fdw: Make postgres_fdw.application_name support more escape sequences.\n> \n> You have one entry for 449a, and one entry where you've combined 6e0c and 94c4.\n> \n> My point is that the 2nd two commits changed the behavior of the first commit,\n> and I don't see why an end-user would want to know about the intermediate\n> behavior from the middle of the development cycle when escape sequences weren't\n> expanded. So I don't know why they'd be listed separately.\n\nI see your point --- postgres_fdw.application_name supports escapes that\nthe normal application_name does not.  I combined all three items now. \nThanks.  The new entry is:\n\n\t<!--\n\tAuthor: Fujii Masao <fujii@postgresql.org>\n\t2021-09-07 [449ab6350] postgres_fdw: Allow application_name of remote connectio\n\tAuthor: Fujii Masao <fujii@postgresql.org>\n\t2021-12-24 [6e0cb3dec] postgres_fdw: Allow postgres_fdw.application_name to inc\n\tAuthor: Fujii Masao <fujii@postgresql.org>\n\t2022-02-18 [94c49d534] postgres_fdw: Make postgres_fdw.application_name support\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAdd server variable postgres_fdw.application_name to control the\n\tapplication name of postgres_fdw connections (Hayato Kuroda)\n\t</para>\n\t\n\t<para>\n\tPreviously the remote application_name could only be set on the\n\tremote server or via postgres_fdw connection specification.\n\tpostgres_fdw.application_name also supports escape sequences\n\tfor customization.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 17:45:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": ">> I have completed the first draft of the PG 15 release notes and you can\n>> see the results here:\n>> \n>> https://momjian.us/pgsql_docs/release-15.html\n> \n>> Allow pgbench to retry after serialization and deadlock failures (Yugo Nagata, Marina Polyakova)\n> \n> This is in the \"Additional Modules\" section. I think this should be in\n> the \"Client Applications\" section because pgbench lives in bin\n> directory, not in contrib directory. Actually, pgbench was in the\n> \"Client Applications\" section in the PG 14 release note.\n\nI think you missed this:\n\ncommit 06ba4a63b85e5aa47b325c3235c16c05a0b58b96\nUse COPY FREEZE in pgbench for faster benchmark table population.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 11 May 2022 06:47:13 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 06:47:13AM +0900, Tatsuo Ishii wrote:\n> >> I have completed the first draft of the PG 15 release notes and you can\n> >> see the results here:\n> >> \n> >> https://momjian.us/pgsql_docs/release-15.html\n> > \n> >> Allow pgbench to retry after serialization and deadlock failures (Yugo Nagata, Marina Polyakova)\n> > \n> > This is in the \"Additional Modules\" section. I think this should be in\n> > the \"Client Applications\" section because pgbench lives in bin\n> > directory, not in contrib directory. Actually, pgbench was in the\n> > \"Client Applications\" section in the PG 14 release note.\n> \n> I think you missed this:\n> \n> commit 06ba4a63b85e5aa47b325c3235c16c05a0b58b96\n> Use COPY FREEZE in pgbench for faster benchmark table population.\n\nI didn't mention it since it is automatic and I didn't think pgbench\nload time was significant enough to mention.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 17:49:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 05:45:01PM -0400, Bruce Momjian wrote:\n> I see your point --- postgres_fdw.application_name supports escapes that\n> the normal application_name does not. I combined all three items now. \n> Thanks. The new entry is:\n\nThe URL is updated with the current commits:\n\n\thttps://momjian.us/pgsql_docs/release-15.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 17:57:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "\n\n> On May 10, 2022, at 8:44 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here\n\n\nThanks, Bruce! This release note:\n\n\t• Prevent logical replication into tables where the subscription owner is subject to the table's row-level security policies (Mark Dilger)\n\n... should mention, independent of any RLS considerations, subscriptions are now applied under the privilege of the subscription owner. I don't think we can fit it in the release note, but the basic idea is that:\n\n\tCREATE SUBSCRIPTION ... CONNECTION '...' PUBLICATION ... WITH (enabled = false);\n\tALTER SUBSCRIPTION ... OWNER TO nonsuperuser_whoever;\n\tALTER SUBSCRIPTION ... ENABLE;\n\ncan be used to replicate a subscription without sync or apply workers operating as superuser. That's the main advantage. Previously, subscriptions always ran with superuser privilege, which creates security concerns if the publisher is malicious (or foolish). Avoiding any unintentional bypassing of RLS was just a necessary detail to close the security loophole, not the main point of the security enhancement.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 10 May 2022 15:12:18 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 5/10/22 4:18 PM, Bruce Momjian wrote:\r\n> On Tue, May 10, 2022 at 01:09:35PM -0500, Justin Pryzby wrote:\r\n\r\n> \r\n>> | Allow libpq's SSL private to be owned by the root user (David Steele)\r\n>>\r\n>> private *key*\r\n> \r\n> I changed it to \"private key file\".\r\n\r\nThis was backpatched to all supported versions[1]. While I'm a huge fan \r\nof this behavior change for a plethora of reasons, I'm not sure if this \r\nshould be included as part of the PG15 release notes, given it's in the \r\n14.3 et al.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=2a1f84636dc335a3edf53a8361ae44bb2ae00093",
"msg_date": "Tue, 10 May 2022 20:14:28 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, 11 May 2022 at 03:44, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-15.html\n\nThanks for doing that tedious work.\n\nI think the sort improvements done in v15 are worth a mention under\nGeneral Performance. The commits for this were 91e9e89dc, 40af10b57\nand 697492434. I've been running a few benchmarks between v14 and v15\nover the past few days and a fairly average case speedup is about 25%.\nbut there are cases where I've seen up to 400%. I think the increase\nis to an extent that we maybe should have considered making tweaks in\ncost_tuplesort(). I saw some plans that ran in about 60% of the time\nby disabling Hash Agg and allowing Sort / Group Agg to do the work.\n\nDavid\n\n\n",
"msg_date": "Wed, 11 May 2022 12:39:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 5/10/22 4:18 PM, Bruce Momjian wrote:\n> | Allow libpq's SSL private to be owned by the root user (David Steele)\n\n> This was backpatched to all supported versions[1]. While I'm a huge fan \n> of this behavior change for a plethora of reasons, I'm not sure if this \n> should be included as part of the PG15 release notes, given it's in the \n> 14.3 et al.\n\nIt should not. However, the backpatch happened later than the commit\nto HEAD, and our git_changelog tool is not smart enough to match them\nup, so Bruce didn't see that there were followup commits. That's a\ngeneric hazard for major-release notes; if you spot any other cases\nplease do mention them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 May 2022 21:16:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 04:18:10PM -0400, Bruce Momjian wrote:\n> > | Add the LZ4 compression method to pg_receivewal (Georgios Kokolatos)\n> > | This is enabled via --compression-method=lz4 and requires binaries to be built using --with-lz4. \n> > | Redesign pg_receivewal's compression options (Georgios Kokolatos)\n> > | The new --compression-method option controls the type of compression, rather than just relying on --compress. \n> > \n> > It's --compress since 042a923ad.\n> \n> Yep, fixed.\n\nIt now says:\n\n| The new --compression option controls the type of compression, rather than just relying on --compress.\n\nBut the option is --compress, and not --compression.\n\n\n",
"msg_date": "Tue, 10 May 2022 20:28:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "| Remove incorrect duplicate partition tables in system view pg_publication_tables (Hou Zhijie)\n\nshould say \"partitions\" ?\n\"Do not show partitions whose parents are also published\" (is that accurate?)\n\n| Allow system and TOAST B-tree indexes to efficiently store duplicates (Peter Geoghegan)\n| Previously de-duplication was disabled for these types of indexes. \n\nI think the user-facing change here is that (in addition to being \"allowed\"),\nit's now enabled by default for catalog indexes. \"Enable de-duplication of\nsystem indexes by default\".\n\n| Prevent changes to columns only indexed by BRIN indexes from preventing HOT updates (Josef Simanek)\n\nsays \"prevent\" twice.\n\"Allow HOT updates when changed columns are only indexed by BRIN indexes\"\n(or \"avoid precluding...\")\n\n| Improve the performance of window functions that use row_number(), rank(), and count() (David Rowley)\n\nThe essential feature is a new kind of \"prosupport\", which is implemented for\nthose core functions. I suggest to add another sentence about how prosupport\ncan also be added to user-defined/non-core functions.\n\n| Store server-level statistics in shared memory (Kyotaro Horiguchi, Andres Freund, Melanie Plageman)\n\nShould this be called \"cumulative\" statistics? As in b3abca68106d518ce5d3c0d9a1e0ec02a647ceda.\n\n| Allows view access to be controlled by privileges of the view user (Christoph Heiss)\n\nAllow\n\n| New function\n\n\"The new function ..\" (a few times)\n\n| Improve the parallel pg_dump performance of TOAST tables (Tom Lane) \n\nI don't think this needs to be mentioned, unless maybe folded into an entry\nlike \"improve performance when dumping with many objects or relations with\nlarge toast tables\".\n\n| Allow pg_basebackup to decompress LZ4 and Zstandard compressed server-side base backups, and LZ4 and Zstandard compress output files (Dipesh Pandit, Jeevan Ladhe) \n\nmaybe: \"... 
and to compress output files with LZ4 and Zstandard.\"\n\n| Add direct I/O support to macOS (Thomas Munro)\n| This only works if max_wal_senders=0 and wal_level=minimal. \n\nI think this should mention that it's only for WAL.\n\n| Remove status reporting during pg_upgrade operation if the output is not a terminal (Andres Freund)\n\nMaybe: \"By default, do not output status information unless the output is a terminal\"\n\n| Add new protocol message COMPRESSION and COMPRESSION_DETAIL to specify the compression method and level (Robert Haas)\n\ns/level/options/ ?\n\n| Prevent DROP DATABASE, DROP TABLESPACE, and ALTER DATABASE SET TABLESPACE from occasionally failing during concurrent use on Windows (Thomas Munro)\n\nMaybe this doesn't need to be mentioned ?\n\n| Fix pg_statio_all_tables to sum values for the rare case of TOAST tables with multiple indexes (Andrei Zubkov)\n| Previously such cases would have one row for each index. \n\nDoesn't need to be mentioned ?\nIt doesn't seem like a \"compatibility\" issue anyway.\n\nShould this be included?\n6b94e7a6da2 Consider fractional paths in generate_orderedappend_paths\n\nShould any of these be listed as incompatible changes (some of these I asked\nbefore, but the others are from another list).\n\n95ab1e0a9db interval: round values when spilling to months\n9cd28c2e5f1 Remove server support for old BASE_BACKUP command syntax.\n0d4513b6138 Remove server support for the previous base backup protocol.\nccd10a9bfa5 Fix enforcement of PL/pgSQL variable CONSTANT markings (Tom Lane)\n38bfae36526 pg_upgrade: Move all the files generated internally to a subdirectory\n376ce3e404b Prefer $HOME when looking up the current user's home directory.\n7844c9918a4 psql: Show all query results by default\n17a856d08be Change aggregated log format of pgbench.\n? 73508475d69 Remove pg_atoi()\n? aa64f23b029 Remove MaxBackends variable in favor of GetMaxBackends() function.\n? d816f366bc4 psql: Make SSL info display more compact\n? 
27b02e070fd pg_upgrade: Don't print progress status when output is not a tty.\n? ab4fd4f868e Remove 'datlastsysoid'.\n\n\n",
"msg_date": "Tue, 10 May 2022 20:31:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 03:12:18PM -0700, Mark Dilger wrote:\n> \n> \n> > On May 10, 2022, at 8:44 AM, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here\n> \n> \n> Thanks, Bruce! This release note:\n> \n> \t• Prevent logical replication into tables where the subscription owner is subject to the table's row-level security policies (Mark Dilger)\n> \n> ... should mention, independent of any RLS considerations, subscriptions are now applied under the privilege of the subscription owner. I don't think we can fit it in the release note, but the basic idea is that:\n> \n> \tCREATE SUBSCRIPTION ... CONNECTION '...' PUBLICATION ... WITH (enabled = false);\n> \tALTER SUBSCRIPTION ... OWNER TO nonsuperuser_whoever;\n> \tALTER SUBSCRIPTION ... ENABLE;\n> \n> can be used to replicate a subscription without sync or apply workers operating as superuser. That's the main advantage. Previously, subscriptions always ran with superuser privilege, which creates security concerns if the publisher is malicious (or foolish). Avoiding any unintentional bypassing of RLS was just a necessary detail to close the security loophole, not the main point of the security enhancement.\n\nOh, interesting. New text:\n\n\t<!--\n\tAuthor: Jeff Davis <jdavis@postgresql.org>\n\t2022-01-07 [a2ab9c06e] Respect permissions within logical replication.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow logical replication to run as the owner of the publication (Mark Dilger)\n\t</para>\n\t\n\t<para>\n\tBecause row-level security policies are not checked, only\n\tsuperusers, roles with bypassrls, and table owners can replicate\n\tinto tables with row-level security policies.\n\t</para>\n\t</listitem>\n\nHow is this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 21:46:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 08:14:28PM -0400, Jonathan Katz wrote:\n> On 5/10/22 4:18 PM, Bruce Momjian wrote:\n> > On Tue, May 10, 2022 at 01:09:35PM -0500, Justin Pryzby wrote:\n> \n> > \n> > > | Allow libpq's SSL private to be owned by the root user (David Steele)\n> > > \n> > > private *key*\n> > \n> > I changed it to \"private key file\".\n> \n> This was backpatched to all supported versions[1]. While I'm a huge fan of\n> this behavior change for a plethora of reasons, I'm not sure if this should\n> be included as part of the PG15 release notes, given it's in the 14.3 et al.\n\nRight, is should be removed, and I have done so.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 21:47:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 09:16:33PM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 5/10/22 4:18 PM, Bruce Momjian wrote:\n> > | Allow libpq's SSL private to be owned by the root user (David Steele)\n> \n> > This was backpatched to all supported versions[1]. While I'm a huge fan \n> > of this behavior change for a plethora of reasons, I'm not sure if this \n> > should be included as part of the PG15 release notes, given it's in the \n> > 14.3 et al.\n> \n> It should not. However, the backpatch happened later than the commit\n> to HEAD, and our git_changelog tool is not smart enough to match them\n> up, so Bruce didn't see that there were followup commits. That's a\n> generic hazard for major-release notes; if you spot any other cases\n> please do mention them.\n\nYes, known problem. :-(\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 21:48:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "\n\n> On May 10, 2022, at 6:46 PM, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> Allow logical replication to run as the owner of the publication\n\nMake that \"owner of the subscription\". This change operates on the subscriber-side.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 10 May 2022 18:49:48 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 08:28:54PM -0500, Justin Pryzby wrote:\n> On Tue, May 10, 2022 at 04:18:10PM -0400, Bruce Momjian wrote:\n> > > | Add the LZ4 compression method to pg_receivewal (Georgios Kokolatos)\n> > > | This is enabled via --compression-method=lz4 and requires binaries to be built using --with-lz4. \n> > > | Redesign pg_receivewal's compression options (Georgios Kokolatos)\n> > > | The new --compression-method option controls the type of compression, rather than just relying on --compress. \n> > > \n> > > It's --compress since 042a923ad.\n> > \n> > Yep, fixed.\n> \n> It now says:\n> \n> | The new --compression option controls the type of compression, rather than just relying on --compress.\n> \n> But the option is --compress, and not --compression.\n\nAgreed, fixed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 21:52:05 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 06:49:48PM -0700, Mark Dilger wrote:\n> \n> \n> > On May 10, 2022, at 6:46 PM, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > Allow logical replication to run as the owner of the publication\n> \n> Make that \"owner of the subscription\". This change operates on the subscriber-side.\n\nThanks, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 21:59:05 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 12:39:41PM +1200, David Rowley wrote:\n> On Wed, 11 May 2022 at 03:44, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-15.html\n> \n> Thanks for doing that tedious work.\n> \n> I think the sort improvements done in v15 are worth a mention under\n> General Performance. The commits for this were 91e9e89dc, 40af10b57\n> and 697492434. I've been running a few benchmarks between v14 and v15\n> over the past few days and a fairly average case speedup is about 25%.\n> but there are cases where I've seen up to 400%. I think the increase\n> is to an extent that we maybe should have considered making tweaks in\n> cost_tuplesort(). I saw some plans that ran in about 60% of the time\n> by disabling Hash Agg and allowing Sort / Group Agg to do the work.\n\nGood point. Do you have any suggested text? I can't really see it\nclearly based on the commits, except \"sorting is faster\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 22:02:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, 11 May 2022 at 14:02, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, May 11, 2022 at 12:39:41PM +1200, David Rowley wrote:\n> > I think the sort improvements done in v15 are worth a mention under\n> > General Performance. The commits for this were 91e9e89dc, 40af10b57\n> > and 697492434. I've been running a few benchmarks between v14 and v15\n> > over the past few days and a fairly average case speedup is about 25%.\n> > but there are cases where I've seen up to 400%. I think the increase\n> > is to an extent that we maybe should have considered making tweaks in\n> > cost_tuplesort(). I saw some plans that ran in about 60% of the time\n> > by disabling Hash Agg and allowing Sort / Group Agg to do the work.\n>\n> Good point. Do you have any suggested text? I can't really see it\n> clearly based on the commits, except \"sorting is faster\".\n\nIf we're going to lump those into a single line then maybe something\nalong the lines of:\n\n* Reduce memory consumption and improve performance of sorting tuples in memory\n\nI think one line is fine from a user's perspective, but it's slightly\nharder to know the order of the names in the credits given the 3\nindependent commits.\n\nDavid\n\n\n",
"msg_date": "Wed, 11 May 2022 14:31:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 12:39:41PM +1200, David Rowley wrote:\n> I think the sort improvements done in v15 are worth a mention under\n> General Performance. The commits for this were 91e9e89dc, 40af10b57\n> and 697492434. I've been running a few benchmarks between v14 and v15\n> over the past few days and a fairly average case speedup is about 25%.\n> but there are cases where I've seen up to 400%. I think the increase\n> is to an extent that we maybe should have considered making tweaks in\n> cost_tuplesort(). I saw some plans that ran in about 60% of the time\n> by disabling Hash Agg and allowing Sort / Group Agg to do the work.\n\nIs there any reason not to consider it now ? Either for v15 or v15+1.\n\nI wonder if this is also relevant.\n\n65014000b35 Replace polyphase merge algorithm with a simple balanced k-way merge.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 10 May 2022 21:38:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes (sorting)"
},
{
"msg_contents": "On 5/10/22 9:48 PM, Bruce Momjian wrote:\r\n> On Tue, May 10, 2022 at 09:16:33PM -0400, Tom Lane wrote:\r\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>>> On 5/10/22 4:18 PM, Bruce Momjian wrote:\r\n>>> | Allow libpq's SSL private to be owned by the root user (David Steele)\r\n>>\r\n>>> This was backpatched to all supported versions[1]. While I'm a huge fan\r\n>>> of this behavior change for a plethora of reasons, I'm not sure if this\r\n>>> should be included as part of the PG15 release notes, given it's in the\r\n>>> 14.3 et al.\r\n>>\r\n>> It should not. However, the backpatch happened later than the commit\r\n>> to HEAD, and our git_changelog tool is not smart enough to match them\r\n>> up, so Bruce didn't see that there were followup commits. That's a\r\n>> generic hazard for major-release notes; if you spot any other cases\r\n>> please do mention them.\r\n> \r\n> Yes, known problem. :-(\r\n\r\nGot it.\r\n\r\nI did a scan of the release notes and diff'd with the 14.[1-3] and did \r\nnot find any additional overlap.\r\n\r\nJonathan",
"msg_date": "Tue, 10 May 2022 22:38:26 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 5/10/22 10:31 PM, David Rowley wrote:\r\n> On Wed, 11 May 2022 at 14:02, Bruce Momjian <bruce@momjian.us> wrote:\r\n>>\r\n>> On Wed, May 11, 2022 at 12:39:41PM +1200, David Rowley wrote:\r\n>>> I think the sort improvements done in v15 are worth a mention under\r\n>>> General Performance. The commits for this were 91e9e89dc, 40af10b57\r\n>>> and 697492434. I've been running a few benchmarks between v14 and v15\r\n>>> over the past few days and a fairly average case speedup is about 25%.\r\n>>> but there are cases where I've seen up to 400%. I think the increase\r\n>>> is to an extent that we maybe should have considered making tweaks in\r\n>>> cost_tuplesort(). I saw some plans that ran in about 60% of the time\r\n>>> by disabling Hash Agg and allowing Sort / Group Agg to do the work.\r\n>>\r\n>> Good point. Do you have any suggested text? I can't really see it\r\n>> clearly based on the commits, except \"sorting is faster\".\r\n> \r\n> If we're going to lump those into a single line then maybe something\r\n> along the lines of:\r\n> \r\n> * Reduce memory consumption and improve performance of sorting tuples in memory\r\n> \r\n> I think one line is fine from a user's perspective, but it's slightly\r\n> harder to know the order of the names in the credits given the 3\r\n> independent commits.\r\n\r\nI think a brief description following the one-liner would be useful for \r\nthe release notes.\r\n\r\nIf you can share a few more details about the benchmarks, we can expand \r\non the one-liner in the release announcement (as this sounds like one of \r\nthose cool, buzz-y things :)\r\n\r\nJonathan",
"msg_date": "Tue, 10 May 2022 22:41:00 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, 11 May 2022 at 14:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, May 11, 2022 at 12:39:41PM +1200, David Rowley wrote:\n> > I think the sort improvements done in v15 are worth a mention under\n> > General Performance. The commits for this were 91e9e89dc, 40af10b57\n> > and 697492434. I've been running a few benchmarks between v14 and v15\n> > over the past few days and a fairly average case speedup is about 25%.\n> > but there are cases where I've seen up to 400%. I think the increase\n> > is to an extent that we maybe should have considered making tweaks in\n> > cost_tuplesort(). I saw some plans that ran in about 60% of the time\n> > by disabling Hash Agg and allowing Sort / Group Agg to do the work.\n>\n> Is there any reason not to consider it now ? Either for v15 or v15+1.\n\nIf the changes done had resulted in a change to the number of expected\noperations as far as big-O notation goes, then I think we might be\nable to do something.\n\nHowever, nothing changed in the number of operations. We only sped up\nthe constant factors. If it were possible to adjust those constant\nfactors based on some performance benchmarks results that were spat\nout by some single machine somewhere, then maybe we could do some\ntweaks. The problem is that to know that we're actually making some\nmeaningful improvements to the costs, we'd want to get the opinion of\n>1 machine and likely >1 CPU architecture. That feels like something\nthat would be much better to do during a release cycle rather than at\nthis very late hour. The majority of my benchmarks were on AMD zen2\nhardware. That's likely not going to reflect well on what the average\nhardware is that runs PostgreSQL.\n\nAlso, I've no idea at this stage what we'd even do to\ncost_tuplesort(). The nruns calculation is a bit fuzzy and never\nreally took the power-of-2 wastage that 40af10b57 reduces. 
Maybe\nthere's some argument for adjusting the 2.0 constant in\ncompute_cpu_sort_cost() based on what's done in 697492434. But there's\nplenty of datatypes that don't use the new sort specialization\nfunctions. Would we really want to add extra code to the planner to\nget it to try and figure that out?\n\nDavid\n\n\n",
"msg_date": "Wed, 11 May 2022 15:15:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes (sorting)"
},
{
"msg_contents": "On Tue, May 10, 2022 at 08:31:17PM -0500, Justin Pryzby wrote:\n> | Remove incorrect duplicate partition tables in system view pg_publication_tables (Hou Zhijie)\n> \n> should say \"partitions\" ?\n> \"Do not show partitions whose parents are also published\" (is that accurate?)\n\nI went with:\n\n\tRemove incorrect duplicate partitions in system view\n\tpg_publication_tables (Hou Zhijie)\n\n> | Allow system and TOAST B-tree indexes to efficiently store duplicates (Peter Geoghegan)\n> | Previously de-duplication was disabled for these types of indexes. \n> \n> I think the user-facing change here is that (in addition to being \"allowed\"),\n> it's now enabled by default for catalog indexes. \"Enable de-duplication of\n> system indexes by default\".\n\nI went with:\n\n\tEnable system and TOAST B-tree indexes to efficiently store duplicates\n\t(Peter Geoghegan)\n\n> | Prevent changes to columns only indexed by BRIN indexes from preventing HOT updates (Josef Simanek)\n> \n> says \"prevent\" twice.\n> \"Allow HOT updates when changed columns are only indexed by BRIN indexes\"\n> (or \"avoid precluding...\")\n\nI went with:\n\n\tPrevent changes to columns only indexed by BRIN indexes from\n\tdisabling HOT updates (Josef Simanek)\n\n> | Improve the performance of window functions that use row_number(), rank(), and count() (David Rowley)\n> \n> The essential feature is a new kind of \"prosupport\", which is implemented for\n> those core functions. I suggest to add another sentence about how prosupport\n> can also be added to user-defined/non-core functions.\n\nUh, I don't see how \"prosupport\" would be relevant for users to know\nabout.\n\n> | Store server-level statistics in shared memory (Kyotaro Horiguchi, Andres Freund, Melanie Plageman)\n> \n> Should this be called \"cumulative\" statistics? As in b3abca68106d518ce5d3c0d9a1e0ec02a647ceda.\n\nUh, they are counters, which I guess is cummulative, but that doesn't\nseem very descriptive. 
The documentation calls it the statistics\ncollector, but I am not sure we even have that anymore with an in-memory\nimplementation. I am kind of not sure what to call it.\n\n> | Allows view access to be controlled by privileges of the view user (Christoph Heiss)\n> \n> Allow\n\nFixed.\n\n> | New function\n> \n> \"The new function ..\" (a few times)\n\nUh, I only see it once.\n\n> | Improve the parallel pg_dump performance of TOAST tables (Tom Lane) \n> \n> I don't think this needs to be mentioned, unless maybe folded into an entry\n> like \"improve performance when dumping with many objects or relations with\n> large toast tables\".\n\nI mentioned it because I thought users who tried parallelism might find\nit is faster now so they should re-test it, no?\n\n> | Allow pg_basebackup to decompress LZ4 and Zstandard compressed server-side base backups, and LZ4 and Zstandard compress output files (Dipesh Pandit, Jeevan Ladhe) \n> \n> maybe: \"... and to compress output files with LZ4 and Zstandard.\"\n\nYes, I like that better, done.\n\n> | Add direct I/O support to macOS (Thomas Munro)\n> | This only works if max_wal_senders=0 and wal_level=minimal. 
\n> \n> I think this should mention that it's only for WAL.\n\nAgreed, done.\n\n> | Remove status reporting during pg_upgrade operation if the output is not a terminal (Andres Freund)\n> \n> Maybe: \"By default, do not output status information unless the output is a terminal\"\n\nI went with:\n\n\tDisable default status reporting during pg_upgrade operation if\n\tthe output is not a terminal (Andres Freund)\n\n> | Add new protocol message COMPRESSION and COMPRESSION_DETAIL to specify the compression method and level (Robert Haas)\n> \n> s/level/options/ ?\n\nAh, yes, this changed to be more generic than level, done.\n\n> | Prevent DROP DATABASE, DROP TABLESPACE, and ALTER DATABASE SET TABLESPACE from occasionally failing during concurrent use on Windows (Thomas Munro)\n> \n> Maybe this doesn't need to be mentioned ?\n\nUh, the previous behavior seems pretty bad so I wanted to mention it\nwill not happen anymore.\n\n> | Fix pg_statio_all_tables to sum values for the rare case of TOAST tables with multiple indexes (Andrei Zubkov)\n> | Previously such cases would have one row for each index. \n> \n> Doesn't need to be mentioned ?\n> It doesn't seem like a \"compatibility\" issue anyway.\n\nUh, there were certain cases where multiple indexes happened and I think\nwe need to tell people it is no longer a problem to work around, no?\n\n> Should this be included?\n> 6b94e7a6da2 Consider fractional paths in generate_orderedappend_paths\n\nI looked at that but didn't see how it would be relevant for users. 
Do\nyou have a suggestion for text?\n\n> Should any of these be listed as incompatible changes (some of these I asked\n> before, but the others are from another list).\n> \n> 95ab1e0a9db interval: round values when spilling to months\n\nYes, moved already.\n\n> 9cd28c2e5f1 Remove server support for old BASE_BACKUP command syntax.\n\nSeems internal-only so moved to Source Code.\n\n> 0d4513b6138 Remove server support for the previous base backup protocol.\n\nSame.\n\n> ccd10a9bfa5 Fix enforcement of PL/pgSQL variable CONSTANT markings (Tom Lane)\n\nI didn't see not enforcing constant as an incompatibility, but rather a\nbug.\n\n> 38bfae36526 pg_upgrade: Move all the files generated internally to a subdirectory\n\nI think since we have a pg_upgrade section, it seems better there.\n\n> 376ce3e404b Prefer $HOME when looking up the current user's home directory.\n\nUh, I didn't think so.\n\n> 7844c9918a4 psql: Show all query results by default\n\nSame.\n\n> 17a856d08be Change aggregated log format of pgbench.\n\nWe have a pgbench section and I can't see it. I am trying to keep\nincompatibilities as things related to in-production problems or\nsurprises.\n\n> ? 73508475d69 Remove pg_atoi()\n\nI don't see who would care except for internals folks.\n\n> ? aa64f23b029 Remove MaxBackends variable in favor of GetMaxBackends() function.\n\nSame.\n\n> ? d816f366bc4 psql: Make SSL info display more compact\n\nI did look at that but considered that this wouldn't be something that\nwould break anything.\n\n> ? 27b02e070fd pg_upgrade: Don't print progress status when output is not a tty.\n\nSame.\n\n> ? ab4fd4f868e Remove 'datlastsysoid'.\n\nSeemed too internal.\n\nThanks for all these ideas!\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 10 May 2022 23:41:08 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, 11 May 2022 at 15:41, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, May 10, 2022 at 08:31:17PM -0500, Justin Pryzby wrote:\n> > | Improve the performance of window functions that use row_number(), rank(), and count() (David Rowley)\n> >\n> > The essential feature is a new kind of \"prosupport\", which is implemented for\n> > those core functions. I suggest to add another sentence about how prosupport\n> > can also be added to user-defined/non-core functions.\n>\n> Uh, I don't see how \"prosupport\" would be relevant for users to know\n> about.\n\nI'd say if it's not mentioned in our documentation then we shouldn't\nput it in the release notes. Currently, it's only documented in the\nsource code. If someone felt strongly that something should be written\nin the documents about this then I'd say only then should we consider\nmentioning it in the release notes.\n\nDavid\n\n\n",
"msg_date": "Wed, 11 May 2022 15:54:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 12:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I have completed the first draft of the PG 15 release notes\n\nThanks. Regarding:\n\n<!--\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n2022-03-20 [ba9a7e392] Enforce foreign key correctly during cross-partition upd\n-->\n\n<listitem>\n<para>\nImprove the trigger behavior of updates on partitioned tables that\nmove rows between partitions (Amit Langote)\n</para>\n\n<para>\nPreviously, such updates fired delete triggers on the source partition\nand fired insert triggers on the target partition. PostgreSQL will\nnow fire an update trigger on the partition root. This makes\nforeign key behavior more consistent. ALL TRIGGERS?\n</para>\n</listitem>\n\nThe commit is intended to only change the behavior of RI triggers,\nwhile leaving user-defined triggers firing as before. I think it\nmight be a good idea to be specific by wording this, maybe as follows?\n\nImprove the firing of foreign key triggers during cross-partition\nupdates of partitioned tables (Amit Langote)\n\nPreviously, such updates fired delete triggers on the source partition\nand insert triggers on the target partition, whereas PostgreSQL will\nnow fire update triggers on the partitioned table mentioned in the\nquery, which makes the behavior of foreign keys pointing into that\ntable more consistent. Note that other user-defined triggers are\nfired as they were before.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 May 2022 16:02:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, 10 May 2022 at 17:44, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-15.html\n>\n> The feature count is similar to recent major releases:\n>\n> release-10 195\n> release-11 185\n> release-12 198\n> release-13 183\n> release-14 229\n> --> release-15 186\n>\n> I assume there will be major adjustments in the next few weeks based on\n> feedback.\n\nWith 10a8d138 (an extension on the work of PG14's 3c3b8a4b, which was\nlisted in the release notes for PG14) we now truncate the LP array in\nmore cases, which means that overall a workload with mainly HOT\nupdates might require VACUUM to run less often to keep the relation\nsize stable.\n\nI will admit that I am tooting my own horn, but I think it is worth\nmentioning for DBAs, as it reduces the bloating behaviour of certain\nclasses of bloat-generating workloads (and thus they might need to\nre-tune any vacuum-related settings).\n\n-Matthias\n\n\n",
"msg_date": "Wed, 11 May 2022 14:19:20 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "I think this item is pointless:\n\n Remove unused function parameter in get_qual_from_partbound() (Hou Zhijie)\n\nit's just a C-level code, and we don't document such API changes. If we\nwere to document them all, it'd be a very very long document.\n\nHere:\n Improve the algorithm used to compute random() (Fabien Coelho)\n\n This will cause setseed() followed by random() to return a different\n value than on older servers.\nMaybe it's clearer as \"This will cause random() sequences to differ from\nwhat was emitted by prior versions for the same seed values.\" If I\ndon't know anything about the random() API, I understand this as saying\nthat setseed() returns a value, and we only changed that value when\nrandom() is called afterwards.\n\nHere:\n Fix ALTER TRIGGER RENAME on partitioned tables to rename partitions\n (Arne Roland, Álvaro Herrera)\n\n Also prohibit cloned triggers from being renamed.\n\n\"... to rename the corresponding triggers on partitions.\n\nAlso prohibit such triggers on partitions from being renamed.\"\n(It's not the *partitions* that are renamed but the triggers,\nobviously.)\n\nHere:\n Add server variable recursive_worktable_factor to allow the user to\n specify the expected recursive query worktable size (Simon Riggs)\n\n WHAT IS A WORKTABLE? NOT DEFINED.\nDo we need to explain in the relnotes that this is relevant to planning\nof WITH RECURSIVE queries?\n\n Generate periodic log message during slow server starts (Nitin Jadhav,\n Robert Haas, Álvaro Herrera)\nPlease credit only Nitin and Robert, not me. 
I only edited the docs.\n\n Allow members of the pg_checkpointer predefined role to run the\n CHECKPOINT command (Jeff Davis)\n\nThe 11-era entry said that we *added* new roles for the tasks, and I\nthink we should do likewise here:\n Add predefined role pg_checkpointer that enables to run CHECKPOINT\nOtherwise it sounds like pg_checkpointer already existed and we gave it\nthis new responsibility.\n\nHere:\n Create unlogged sequences and allow them to be skipped in logical\n replication (Peter Eisentraut)\nThis is not specific to logical replication, actually; it's a generic\nnew feature of sequences. So I don't think it belongs in the logical\nreplication section. But it's not clear to me where to put it.\n\n\nHere:\n Add SQL MERGE command to adjust one table to match another (Pavan\n Deolasee, Álvaro Herrera, Amit Langote, Simon Riggs)\nI'm not sure this accurately describes the purpose of the command.\nMaybe \"Add SQL MERGE command that allows to run INSERT, UPDATE, DELETE \nsubcommands based on another table or the output of a query.\"\nAlso, it doesn't belong in the Utilities section. Maybe it should be in\nthe SELECT,INSERT section, and rename the section to something like\n\"SQL Queries\", and put the whole JSON subsection inside that section\n(rather than inside the Functions section).\nI think Simon should appear as first author here.\n\n\n Add new protocol message TARGET to specify a new COPY method to be for\n base backups (Robert Haas)\nI think this one should be in some other section, maybe \"Streaming\nReplication and Recovery\".\n\n\n Add server variable archive_library to specify the library to be called\n for archiving (Nathan Bossart)\nMaybe \"Allow site-specific WAL archiving, which may no longer use shell\ncommands.\" or something to that effect? 
The reference to a library is\na bit obscure.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Wed, 11 May 2022 16:12:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 02:31:08PM +1200, David Rowley wrote:\n> On Wed, 11 May 2022 at 14:02, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, May 11, 2022 at 12:39:41PM +1200, David Rowley wrote:\n> > > I think the sort improvements done in v15 are worth a mention under\n> > > General Performance. The commits for this were 91e9e89dc, 40af10b57\n> > > and 697492434. I've been running a few benchmarks between v14 and v15\n> > > over the past few days and a fairly average case speedup is about 25%.\n> > > but there are cases where I've seen up to 400%. I think the increase\n> > > is to an extent that we maybe should have considered making tweaks in\n> > > cost_tuplesort(). I saw some plans that ran in about 60% of the time\n> > > by disabling Hash Agg and allowing Sort / Group Agg to do the work.\n> >\n> > Good point. Do you have any suggested text? I can't really see it\n> > clearly based on the commits, except \"sorting is faster\".\n> \n> If we're going to lump those into a single line then maybe something\n> along the lines of:\n> \n> * Reduce memory consumption and improve performance of sorting tuples in memory\n> \n> I think one line is fine from a user's perspective, but it's slightly\n> harder to know the order of the names in the credits given the 3\n> independent commits.\n\nOkay, I went with this:\n\n\t<!--\n\tAuthor: David Rowley <drowley@postgresql.org>\n\t2021-07-22 [91e9e89dc] Make nodeSort.c use Datum sorts for single column sorts\n\tAuthor: David Rowley <drowley@postgresql.org>\n\t2022-04-04 [40af10b57] Use Generation memory contexts to store tuples in sorts\n\tAuthor: John Naylor <john.naylor@postgresql.org>\n\t2022-04-02 [697492434] Specialize tuplesort routines for different kinds of abb\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tImprove performance and reduce memory consumption of in-memory\n\tsorts (Ronan Dunklau, David Rowley, Thomas Munro)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> 
https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 10:32:21 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 04:02:31PM +0900, Amit Langote wrote:\n> On Wed, May 11, 2022 at 12:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > I have completed the first draft of the PG 15 release notes\n> The commit is intended to only change the behavior of RI triggers,\n> while leaving user-defined triggers firing as before. I think it\n> might be a good idea to be specific by wording this, maybe as follows?\n> \n> Improve the firing of foreign key triggers during cross-partition\n> updates of partitioned tables (Amit Langote)\n> \n> Previously, such updates fired delete triggers on the source partition\n> and insert triggers on the target partition, whereas PostgreSQL will\n> now fire update triggers on the partitioned table mentioned in the\n> query, which makes the behavior of foreign keys pointing into that\n> table more consistent. Note that other user-defined triggers are\n> fired as they were before.\n\nYes, this is what I needed to know. The updated text is:\n\n\t<!--\n\tAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\n\t2022-03-20 [ba9a7e392] Enforce foreign key correctly during cross-partition upd\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tImprove foreign key behavior of updates on partitioned tables\n\tthat move rows between partitions (Amit Langote)\n\t</para>\n\t\n\t<para>\n\tPreviously, such updates ran delete actions on the source partition\n\tand insert actions on the target partition. PostgreSQL will now\n\trun update actions on the partition root.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 10:41:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 02:19:20PM +0200, Matthias van de Meent wrote:\n> On Tue, 10 May 2022 at 17:44, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-15.html\n> >\n> > The feature count is similar to recent major releases:\n> >\n> > release-10 195\n> > release-11 185\n> > release-12 198\n> > release-13 183\n> > release-14 229\n> > --> release-15 186\n> >\n> > I assume there will be major adjustments in the next few weeks based on\n> > feedback.\n> \n> With 10a8d138 (an extension on the work of PG14's 3c3b8a4b, which was\n> listed in the release notes for PG14) we now truncate the LP array in\n> more cases, which means that overall a workload with mainly HOT\n> updates might require VACUUM to run less often to keep the relation\n> size stable.\n> \n> I will admit that I am tooting my own horn, but I think it is worth\n> mentioning for DBAs, as it reduces the bloating behaviour of certain\n> classes of bloat-generating workloads (and thus they might need to\n> re-tune any vacuum-related settings).\n\nWe barely document HOT updates, and this change seems too marginal to\nmention. I think we already have enough VACUUM changes that people will\nneed to reevaluate their vacuum settings anyway.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 10:50:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 04:12:23PM +0200, Álvaro Herrera wrote:\n> I think this item is pointless:\n> \n> Remove unused function parameter in get_qual_from_partbound() (Hou Zhijie)\n> \n> it's just a C-level code, and we don't document such API changes. If we\n> were to document them all, it'd be a very very long document.\n\nOkay, removed. I had added it because of this commit text:\n\n This is an external function that extensions could use, so this is\n potentially a breaking change. No external callers are known, however,\n and this will make it simpler to write such callers in the future.\n\n> Here:\n> Improve the algorithm used to compute random() (Fabien Coelho)\n> \n> This will cause setseed() followed by random() to return a different\n> value than on older servers.\n> Maybe it's clearer as \"This will cause random() sequences to differ from\n> what was emitted by prior versions for the same seed values.\" If I\n> don't know anything about the random() API, I understand this as saying\n> that setseed() returns a value, and we only changed that value when\n> random() is called afterwards.\n\nYes, better, thanks.\n\n> Here:\n> Fix ALTER TRIGGER RENAME on partitioned tables to rename partitions\n> (Arne Roland, Álvaro Herrera)\n> \n> Also prohibit cloned triggers from being renamed.\n> \n> \"... to rename the corresponding triggers on partitions.\n> \n> Also prohibit such triggers on partitions from being renamed.\"\n> (It's not the *partitions* that are renamed but the triggers,\n> obviously.)\n\nOkay, new text:\n\n\tFix ALTER TRIGGER RENAME on partitioned tables to properly rename\n\ttriggers on all partitions (Arne Roland, Álvaro Herrera)\n\n> Here:\n> Add server variable recursive_worktable_factor to allow the user to\n> specify the expected recursive query worktable size (Simon Riggs)\n> \n> WHAT IS A WORKTABLE? 
NOT DEFINED.\n> Do we need to explain in the relnotes that this is relevant to planning\n> of WITH RECURSIVE queries?\n\nYou mean the syntax? I figured \"recursive query\" was enough, but the\nitem clearly needs help.\n\n> Generate periodic log message during slow server starts (Nitin Jadhav,\n> Robert Haas, Álvaro Herrera)\n> Please credit only Nitin and Robert, not me. I only edited the docs.\n\nOkay, done.\n\n> Allow members of the pg_checkpointer predefined role to run the\n> CHECKPOINT command (Jeff Davis)\n> \n> The 11-era entry said that we *added* new roles for the tasks, and I\n> think we should do likewise here:\n> Add predefined role pg_checkpointer that enables to run CHECKPOINT\n> Otherwise it sounds like pg_checkpointer already existed and we gave it\n> this new responsibility.\n\nAgreed, much better. New text:\n\n\tAdd predefined role pg_checkpointer that allows members to run\n\tCHECKPOINT (Jeff Davis)\n\n\n> Here:\n> Create unlogged sequences and allow them to be skipped in logical\n> replication (Peter Eisentraut)\n> This is not specific to logical replication, actually; it's a generic\n> new feature of sequences. So I don't think it belongs in the logical\n> replication section. But it's not clear to me where to put it.\n\nOh, yeah, I had it there because that was its value, but now that we\ndon't replicate sequences, it needs to be moved. I put it in the \"Data\nTypes\" section.\n\n> Here:\n> Add SQL MERGE command to adjust one table to match another (Pavan\n> Deolasee, Álvaro Herrera, Amit Langote, Simon Riggs)\n> I'm not sure this accurately describes the purpose of the command.\n> Maybe \"Add SQL MERGE command that allows to run INSERT, UPDATE, DELETE \n> subcommands based on another table or the output of a query.\"\n\nUh, that sounds odd to me, though I realize it is accurate.\n\n> Also, it doesn't belong in the Utilities section. 
Maybe it should be in\n> the SELECT,INSERT section, and rename the section to something like\n> \"SQL Queries\", and put the whole JSON subsection inside that section\n\nUh, SQL queries seems very vague --- isn't SELECT the only actual query,\nand if not, aren't all commands queries.\n\n> (rather than inside the Functions section).\n> I think Simon should appear as first author here.\n\nDone.\n\n> Add new protocol message TARGET to specify a new COPY method to be for\n> base backups (Robert Haas)\n> I think this one should be in some other section, maybe \"Streaming\n> Replication and Recovery\".\n\nI didn't think anyone cared about the protocol so I put it in Source\nCode.\n> \n> Add server variable archive_library to specify the library to be called\n> for archiving (Nathan Bossart)\n> Maybe \"Allow site-specific WAL archiving, which may no longer use shell\n> commands.\" or something to that effect? The reference to a library is\n> a bit obscure.\n\nI added this sentence below it:\n\n Previously only shell commands could be called to perform archiving.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 11:26:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, 11 May 2022 at 14:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I wonder if this is also relevant.\n>\n> 65014000b35 Replace polyphase merge algorithm with a simple balanced k-way merge.\n\nThanks for highlighting that. It very much is relevant. In fact, it\nseems to account for most of the 25% I mentioned. That particular\ntest was sorting 10 million tuples with 4MB of work_mem.\n\nI think that \"Improve sorting performance (Heikki Linnakangas)\" should\nbe moved out from \"E.1.3.1.2. Indexes\" and put below \"E.1.3.1.4.\nGeneral Performance\"\n\nThe text likely should include the words \"disk-based\" so that it's\nclear that it's not the same as the other line about \"in-memory\nsorts\". I'd also be open to just having a single line too. I'd vote\nto put Heikki's name first if we did that.\n\nMaybe:\n\n* Improve performance of sorting tuples (Heikki Linnakangas, Ronan\nDunklau, David Rowley, Thomas Munro)\n\nThis improves the merging performance of individual on-disk sort\nbatches, reduces memory consumption for in-memory sorts and reduces\nCPU overheads for certain in-memory sorts.\n\nDavid\n\n\n",
"msg_date": "Thu, 12 May 2022 10:38:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes (sorting)"
},
{
"msg_contents": "On Thu, May 12, 2022 at 10:38:42AM +1200, David Rowley wrote:\n> On Wed, 11 May 2022 at 14:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I wonder if this is also relevant.\n> >\n> > 65014000b35 Replace polyphase merge algorithm with a simple balanced k-way merge.\n> \n> Thanks for highlighting that. It very much is relevant. In fact, it\n> seems to account for most of the 25% I mentioned. That particular\n> test was sorting 10 million tuples with 4MB of work_mem.\n> \n> I think that \"Improve sorting performance (Heikki Linnakangas)\" should\n> be moved out from \"E.1.3.1.2. Indexes\" and put below \"E.1.3.1.4.\n> General Performance\"\n\nYes, good point, moved.\n\n> The text likely should include the words \"disk-based\" so that it's\n> clear that it's not the same as the other line about \"in-memory\n> sorts\". I'd also be open to just having a single line too. I'd vote\n> to put Heikki's name first if we did that.\n> \n> Maybe:\n> \n> * Improve performance of sorting tuples (Heikki Linnakangas, Ronan\n> Dunklau, David Rowley, Thomas Munro)\n> \n> This improves the merging performance of individual on-disk sort\n> batches, reduces memory consumption for in-memory sorts and reduces\n> CPU overheads for certain in-memory sorts.\n\nI kept separate entries:\n\n\t<!--\n\tAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n\t2021-10-18 [65014000b] Replace polyphase merge algorithm with a simple balanced\n\tAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\n\t2021-10-25 [166f94377] Clarify the logic in a few places in the new balanced me\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tImprove performance for sorts that exceed work_mem (Heikki Linnakangas)\n\t</para>\n\t\n\t<para>\n\tSpecifically, switch to a batch sorting algorithm that uses more\n\toutput streams internally.\n\t</para>\n\t</listitem>\n\t\n\t<!--\n\tAuthor: David Rowley <drowley@postgresql.org>\n\t2021-07-22 [91e9e89dc] Make nodeSort.c use Datum sorts for single column 
sorts\n\tAuthor: David Rowley <drowley@postgresql.org>\n\t2022-04-04 [40af10b57] Use Generation memory contexts to store tuples in sorts\n\tAuthor: John Naylor <john.naylor@postgresql.org>\n\t2022-04-02 [697492434] Specialize tuplesort routines for different kinds of abb\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tImprove performance and reduce memory consumption of in-memory\n\tsorts (Ronan Dunklau, David Rowley, Thomas Munro)\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 20:53:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes (sorting)"
},
{
"msg_contents": "2022年5月11日(水) 0:44 Bruce Momjian <bruce@momjian.us>:\n>\n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-15.html\n>\n> The feature count is similar to recent major releases:\n>\n> release-10 195\n> release-11 185\n> release-12 198\n> release-13 183\n> release-14 229\n> --> release-15 186\n>\n> I assume there will be major adjustments in the next few weeks based on\n> feedback.\n\nRe this:\n\n>> Remove exclusive backup mode (David Steele, Nathan Bossart)\n>>\n>> If the database server stops abruptly while in this mode, the server could fail to start.\n>> The non-exclusive backup mode requires a continuous database connection during the backup.\n\nIt'd be useful to mention exclusive backup mode has been deprecated since 9.6,\nlest the impression arise that an important-sounding feature has been torn out\nsuddenly. Also not sure why we need to mention that non-exclusive backup\nrequires a continuous database connection, AFAIR that was also the case with\nexclusive backups.\n\nThe patch also removed 4 and added 2 new functions, a change which anyone\nmaintaining backup utilities would need to be aware of.\n\nPatch attached with suggested changes.\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Thu, 12 May 2022 10:37:56 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 12:53 PM Bruce Momjian <bruce@momjian.us> wrote:\n> <listitem>\n> <para>\n> Improve performance and reduce memory consumption of in-memory\n> sorts (Ronan Dunklau, David Rowley, Thomas Munro)\n> </para>\n> </listitem>\n\nI'd also add John Naylor here, as he did a lot of work to validate and\npolish the specialisation stuff.\n\n\n",
"msg_date": "Thu, 12 May 2022 13:59:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes (sorting)"
},
{
"msg_contents": "On Thu, May 12, 2022 at 10:37:56AM +0900, Ian Lawrence Barwick wrote:\n> >> Remove exclusive backup mode (David Steele, Nathan Bossart)\n> >>\n> >> If the database server stops abruptly while in this mode, the server could fail to start.\n> >> The non-exclusive backup mode requires a continuous database connection during the backup.\n> \n> It'd be useful to mention exclusive backup mode has been deprecated since 9.6,\n> lest the impression arise that an important-sounding feature has been torn out\n> suddenly. Also not sure why we need to mention that non-exclusive backup\n\nWell, the documentation was clear about it being deprecated, so I don't\nsee a need to mention it in the release notes.\n\n> requires a continuous database connection, AFAIR that was also the case with\n> exclusive backups.\n\nUh, you could do pg_backup_start, disconnect, then pg_backup_stop, no? \nI thought the non-exclusive mode required a continuous connection\nbecause it aborts if you disconnect.\n\n> The patch also removed 4 and added 2 new functions, a change which anyone\n> maintaining backup utilities would need to be aware of.\n> \n> Patch attached with suggested changes.\n\nOh, good points, I had not noticed those renames and removals. URL\nupdated with new text.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 22:01:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 01:59:36PM +1200, Thomas Munro wrote:\n> On Thu, May 12, 2022 at 12:53 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > <listitem>\n> > <para>\n> > Improve performance and reduce memory consumption of in-memory\n> > sorts (Ronan Dunklau, David Rowley, Thomas Munro)\n> > </para>\n> > </listitem>\n> \n> I'd also add John Naylor here, as he did a lot of work to validate and\n> polish the specialisation stuff.\n\nSure, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 22:17:47 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes (sorting)"
},
{
"msg_contents": "2022年5月12日(木) 11:01 Bruce Momjian <bruce@momjian.us>:\n>\n> On Thu, May 12, 2022 at 10:37:56AM +0900, Ian Lawrence Barwick wrote:\n> > >> Remove exclusive backup mode (David Steele, Nathan Bossart)\n> > >>\n> > >> If the database server stops abruptly while in this mode, the server could fail to start.\n> > >> The non-exclusive backup mode requires a continuous database connection during the backup.\n> >\n> > It'd be useful to mention exclusive backup mode has been deprecated since 9.6,\n> > lest the impression arise that an important-sounding feature has been torn out\n> > suddenly.\n>\n> Well, the documentation was clear about it being deprecated, so I don't\n> see a need to mention it in the release notes.\n\nLooking at the release notes from the point of view of someone who has maybe\nnot been following the long-running debate on removing exclusive backups:\n\n\"Important-sounding backup thing is suddenly gone! What was that\nagain? Hmm, can't\nfind anything in the now-current Pg 15 docs [*], do I need to worry\nabout this!?\"\n\n[*] the backup section has removed all mention of the word \"exclusive\"\nhttps://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP\n\nversus:\n\n\"Long-deprecated thing is finally gone, ah OK whatever\".\n\nI am thinking back here to a point in my working life where the\nrelease notes were reviewed\n(by a team including non-Pg specialists) for potential issues when\nconsidering a major\nupgrade - from experience the more clarity with this kind of change\nthe better so\nas not to unnecessarily raise alarm bells.\n\n> > Also not sure why we need to mention that non-exclusive backup\n> > requires a continuous database connection, AFAIR that was also the case with\n> > exclusive backups.\n>\n> Uh, you could do pg_backup_start, disconnect, then pg_backup_stop, no?\n> I thought the non-exclusive mode required a continuous connection\n> because it aborts if you disconnect.\n\nAha, you are 
right, I was conflating server shutdown with disconnection.\n\n> > The patch also removed 4 and added 2 new functions, a change which anyone\n> > maintaining backup utilities would need to be aware of.\n> >\n> > Patch attached with suggested changes.\n>\n> Oh, good points, I had not noticed those renames and removals. URL\n> updated with new text.\n\nThanks!\n\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 May 2022 11:40:17 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Thu, May 12, 2022 at 10:37:56AM +0900, Ian Lawrence Barwick wrote:\n>>>> Remove exclusive backup mode (David Steele, Nathan Bossart)\n\n>> It'd be useful to mention exclusive backup mode has been deprecated since 9.6,\n>> lest the impression arise that an important-sounding feature has been torn out\n>> suddenly.\n\n> Well, the documentation was clear about it being deprecated, so I don't\n> see a need to mention it in the release notes.\n\nYeah, but somebody reading these notes doesn't necessarily have that\nold documentation at hand.\n\nI think writing \"Remove the long-deprecated exclusive backup mode\"\nwould do nicely to make this point without many extra words.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 May 2022 22:44:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 11:40:17AM +0900, Ian Lawrence Barwick wrote:\n> Looking at the release notes from the point of view of someone who has maybe\n> not been following the long-running debate on removing exclusive backups:\n> \n> \"Important-sounding backup thing is suddenly gone! What was that\n> again? Hmm, can't\n> find anything in the now-current Pg 15 docs [*], do I need to worry\n> about this!?\"\n> \n> [*] the backup section has removed all mention of the word \"exclusive\"\n> https://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP\n> \n> versus:\n> \n> \"Long-deprecated thing is finally gone, ah OK whatever\".\n> \n> I am thinking back here to a point in my working life where the\n> release notes were reviewed\n> (by a team including non-Pg specialists) for potential issues when\n> considering a major\n> upgrade - from experience the more clarity with this kind of change\n> the better so\n> as not to unnecessarily raise alarm bells.\n\nAh, you are right. I thought I had \"deprecated\" in the text, but I now\nsee I did not, and we do have cases where we mention the deprecated\nstatus in previous release notes, so the new text is:\n\n\tRemove long-deprecated exclusive backup mode (David Steele, Nathan\n\tBossart)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 22:46:27 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "2022年5月12日(木) 11:46 Bruce Momjian <bruce@momjian.us>:\n>\n> On Thu, May 12, 2022 at 11:40:17AM +0900, Ian Lawrence Barwick wrote:\n> > Looking at the release notes from the point of view of someone who has maybe\n> > not been following the long-running debate on removing exclusive backups:\n> >\n> > \"Important-sounding backup thing is suddenly gone! What was that\n> > again? Hmm, can't\n> > find anything in the now-current Pg 15 docs [*], do I need to worry\n> > about this!?\"\n> >\n> > [*] the backup section has removed all mention of the word \"exclusive\"\n> > https://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP\n> >\n> > versus:\n> >\n> > \"Long-deprecated thing is finally gone, ah OK whatever\".\n> >\n> > I am thinking back here to a point in my working life where the\n> > release notes were reviewed\n> > (by a team including non-Pg specialists) for potential issues when\n> > considering a major\n> > upgrade - from experience the more clarity with this kind of change\n> > the better so\n> > as not to unnecessarily raise alarm bells.\n>\n> Ah, you are right. I thought I had \"deprecated\" in the text, but I now\n> see I did not, and we do have cases where we mention the deprecated\n> status in previous release notes, so the new text is:\n>\n> Remove long-deprecated exclusive backup mode (David Steele, Nathan\n> Bossart)\n\n\nThat works, thanks!\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 May 2022 11:50:37 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 11:41 PM Bruce Momjian <bruce@momjian.us> wrote:\n> On Wed, May 11, 2022 at 04:02:31PM +0900, Amit Langote wrote:\n> > On Wed, May 11, 2022 at 12:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > I have completed the first draft of the PG 15 release notes\n> > The commit is intended to only change the behavior of RI triggers,\n> > while leaving user-defined triggers firing as before. I think it\n> > might be a good idea to be specific by wording this, maybe as follows?\n> >\n> > Improve the firing of foreign key triggers during cross-partition\n> > updates of partitioned tables (Amit Langote)\n> >\n> > Previously, such updates fired delete triggers on the source partition\n> > and insert triggers on the target partition, whereas PostgreSQL will\n> > now fire update triggers on the partitioned table mentioned in the\n> > query, which makes the behavior of foreign keys pointing into that\n> > table more consistent. Note that other user-defined triggers are\n> > fired as they were before.\n>\n> Yes, this is what I needed to know. The updated text is:\n>\n> <!--\n> Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> 2022-03-20 [ba9a7e392] Enforce foreign key correctly during cross-partition upd\n> -->\n>\n> <listitem>\n> <para>\n> Improve foreign key behavior of updates on partitioned tables\n> that move rows between partitions (Amit Langote)\n> </para>\n>\n> <para>\n> Previously, such updates ran delete actions on the source partition\n> and insert actions on the target partition. PostgreSQL will now\n> run update actions on the partition root.\n> </para>\n> </listitem>\n\nLooks fine to me. 
Though I think maybe we should write the last\nsentence as \"PostgreSQL will now run update actions on the partition\nroot mentioned in the query\" to be less ambiguous about which \"root\",\nbecause it can also mean the actual root table in the partition tree.\nA user may be updating only a particular subtree by mentioning that\nsubtree's root in the query, for example.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 May 2022 14:27:26 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 2:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Tue, May 10, 2022 at 04:17:59PM -0400, Jonathan Katz wrote:\n> > On 5/10/22 11:44 AM, Bruce Momjian wrote:\n> > > I have completed the first draft of the PG 15 release notes and you can\n> > > see the results here:\n> > >\n> > > https://momjian.us/pgsql_docs/release-15.html\n> >\n> > Thanks for pulling this together.\n> >\n> > + Allow logical replication to transfer sequence changes\n> >\n> > I believe this was reverted in 2c7ea57e5, unless some other parts of this\n> > work made it in.\n>\n> Yes, sorry, I missed that. Oddly, the unlogged sequence patch was\n> retained, even though there is no value for it on the primary. I\n> removed the sentence that mentioned that benefit from the release notes\n> since it doesn't apply to PG 15 anymore.\n>\n\n+ Create unlogged sequences and allow them to be skipped in logical replication\n\nIs it right to say the second part of the sentence: \"allow them to be\nskipped in logical replication\" when we are not replicating them in\nthe first place?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 May 2022 14:25:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 11, 2022 at 2:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Tue, May 10, 2022 at 04:17:59PM -0400, Jonathan Katz wrote:\n> > > On 5/10/22 11:44 AM, Bruce Momjian wrote:\n> > > > I have completed the first draft of the PG 15 release notes and you can\n> > > > see the results here:\n> > > >\n> > > > https://momjian.us/pgsql_docs/release-15.html\n> > >\n> > > Thanks for pulling this together.\n> > >\n> > > + Allow logical replication to transfer sequence changes\n> > >\n> > > I believe this was reverted in 2c7ea57e5, unless some other parts of this\n> > > work made it in.\n> >\n> > Yes, sorry, I missed that. Oddly, the unlogged sequence patch was\n> > retained, even though there is no value for it on the primary. I\n> > removed the sentence that mentioned that benefit from the release notes\n> > since it doesn't apply to PG 15 anymore.\n> >\n>\n> + Create unlogged sequences and allow them to be skipped in logical replication\n>\n> Is it right to say the second part of the sentence: \"allow them to be\n> skipped in logical replication\" when we are not replicating them in\n> the first place?\n>\n\nOne more point related to logical replication features:\n\n>\nAdd SQL functions to monitor the directory contents of replication\nslots (Bharath Rupireddy)\n\nSpecifically, the functions are pg_ls_logicalsnapdir(),\npg_ls_logicalmapdir(), and pg_ls_replslotdir(). They can be run by\nmembers of the predefined pg_monitor role.\n>\n\nThis feature is currently under the section \"Streaming Replication and\nRecovery\". Shouldn't it be under \"Logical Replication\"? The function\nnames themselves seem to indicate that they are used for logical\nreplication contents. I think the replication slot-related function\nwould fall under both categories but overall it seems to belong to the\n\"Logical Replication\" section.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 May 2022 16:40:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 11, 2022 at 1:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n>\n> https://momjian.us/pgsql_docs/release-15.html\n>\n> The feature count is similar to recent major releases:\n>\n> release-10 195\n> release-11 185\n> release-12 198\n> release-13 183\n> release-14 229\n> --> release-15 186\n>\n> I assume there will be major adjustments in the next few weeks based on\n> feedback.\n>\n\nI wonder if this is worth mentioning:\n\nSkip empty transactions for logical replication.\ncommit d5a9d86d8ffcadc52ff3729cd00fbd83bc38643c\n\nhttps://github.com/postgres/postgres/commit/d5a9d86d8ffcadc52ff3729cd00fbd83bc38643c\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 12 May 2022 21:32:48 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 5/11/22 10:50 PM, Ian Lawrence Barwick wrote:\n> 2022年5月12日(木) 11:46 Bruce Momjian <bruce@momjian.us>:\n>>\n>> On Thu, May 12, 2022 at 11:40:17AM +0900, Ian Lawrence Barwick wrote:\n>>> Looking at the release notes from the point of view of someone who has maybe\n>>> not been following the long-running debate on removing exclusive backups:\n>>>\n>>> \"Important-sounding backup thing is suddenly gone! What was that\n>>> again? Hmm, can't\n>>> find anything in the now-current Pg 15 docs [*], do I need to worry\n>>> about this!?\"\n>>>\n>>> [*] the backup section has removed all mention of the word \"exclusive\"\n>>> https://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP\n>>>\n>>> versus:\n>>>\n>>> \"Long-deprecated thing is finally gone, ah OK whatever\".\n>>>\n>>> I am thinking back here to a point in my working life where the\n>>> release notes were reviewed\n>>> (by a team including non-Pg specialists) for potential issues when\n>>> considering a major\n>>> upgrade - from experience the more clarity with this kind of change\n>>> the better so\n>>> as not to unnecessarily raise alarm bells.\n>>\n>> Ah, you are right. I thought I had \"deprecated\" in the text, but I now\n>> see I did not, and we do have cases where we mention the deprecated\n>> status in previous release notes, so the new text is:\n>>\n>> Remove long-deprecated exclusive backup mode (David Steele, Nathan\n>> Bossart)\n> \n> \n> That works, thanks!\n\nA bit late to this conversation, but +1 from me.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 12 May 2022 08:33:10 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wednesday, May 11, 2022 12:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n> I have completed the first draft of the PG 15 release notes and you can see the\n> results here:\n> \n> https://momjian.us/pgsql_docs/release-15.html\n> \n> The feature count is similar to recent major releases:\n> \n> release-10 195\n> release-11 185\n> release-12 198\n> release-13 183\n> release-14 229\n> --> release-15 186\n> \n> I assume there will be major adjustments in the next few weeks based on\n> feedback.\nHi,\n\n\nI'd like to suggest that we mention a new option for subscription 'disable_on_error'.\n\n\nhttps://github.com/postgres/postgres/commit/705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 12 May 2022 13:35:39 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 02:27:26PM +0900, Amit Langote wrote:\n> > Yes, this is what I needed to know. The updated text is:\n> >\n> > <!--\n> > Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > 2022-03-20 [ba9a7e392] Enforce foreign key correctly during cross-partition upd\n> > -->\n> >\n> > <listitem>\n> > <para>\n> > Improve foreign key behavior of updates on partitioned tables\n> > that move rows between partitions (Amit Langote)\n> > </para>\n> >\n> > <para>\n> > Previously, such updates ran delete actions on the source partition\n> > and insert actions on the target partition. PostgreSQL will now\n> > run update actions on the partition root.\n> > </para>\n> > </listitem>\n> \n> Looks fine to me. Though I think maybe we should write the last\n> sentence as \"PostgreSQL will now run update actions on the partition\n> root mentioned in the query\" to be less ambiguous about which \"root\",\n> because it can also mean the actual root table in the partition tree.\n> A user may be updating only a particular subtree by mentioning that\n> subtree's root in the query, for example.\n\nOkay, I went with:\n\n\tPreviously, such updates ran delete actions on the source\n\tpartition and insert actions on the target partition. PostgreSQL will\n\tnow run update actions on the referenced partition root.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 09:52:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 02:25:36PM +0530, Amit Kapila wrote:\n> On Wed, May 11, 2022 at 2:02 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > Yes, sorry, I missed that. Oddly, the unlogged sequence patch was\n> > retained, even though there is no value for it on the primary. I\n> > removed the sentence that mentioned that benefit from the release notes\n> > since it doesn't apply to PG 15 anymore.\n> >\n> \n> + Create unlogged sequences and allow them to be skipped in logical replication\n> \n> Is it right to say the second part of the sentence: \"allow them to be\n> skipped in logical replication\" when we are not replicating them in\n> the first place?\n\nOops, yeah, that second part was reverted; new text:\n\n\tAllow the creation of unlogged sequences (Peter Eisentraut)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 09:53:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 04:40:49PM +0530, Amit Kapila wrote:\n> One more point related to logical replication features:\n> \n> >\n> Add SQL functions to monitor the directory contents of replication\n> slots (Bharath Rupireddy)\n> \n> Specifically, the functions are pg_ls_logicalsnapdir(),\n> pg_ls_logicalmapdir(), and pg_ls_replslotdir(). They can be run by\n> members of the predefined pg_monitor role.\n> >\n> \n> This feature is currently under the section \"Streaming Replication and\n> Recovery\". Shouldn't it be under \"Logical Replication\"? The function\n> names themselves seem to indicate that they are used for logical\n> replication contents. I think the replication slot-related function\n> would fall under both categories but overall it seems to belong to the\n> \"Logical Replication\" section.\n\nOh, very good point! I missed that this is logical-slot-only\nmonitoring, so I moved the item to logical replication and changed the\ndescription to:\n\n\tAdd SQL functions to monitor the directory contents of logical\n\treplication slots (Bharath Rupireddy)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 10:01:31 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 09:32:48PM +1000, Ajin Cherian wrote:\n> On Wed, May 11, 2022 at 1:44 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here:\n> >\n> > https://momjian.us/pgsql_docs/release-15.html\n> >\n> > The feature count is similar to recent major releases:\n> >\n> > release-10 195\n> > release-11 185\n> > release-12 198\n> > release-13 183\n> > release-14 229\n> > --> release-15 186\n> >\n> > I assume there will be major adjustments in the next few weeks based on\n> > feedback.\n> >\n> \n> I wonder if this is worth mentioning:\n> \n> Skip empty transactions for logical replication.\n> commit d5a9d86d8ffcadc52ff3729cd00fbd83bc38643c\n> \n> https://github.com/postgres/postgres/commit/d5a9d86d8ffcadc52ff3729cd00fbd83bc38643c\n\nI looked at that but thought that everyone would already assume we\nskipped replication of empty transactions, and I didn't see much impact\nfor the user, so I didn't include it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 10:03:22 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 01:35:39PM +0000, osumi.takamichi@fujitsu.com wrote:\n> I'd like to suggest that we mention a new option for subscription 'disable_on_error'.\n> \n> \n> https://github.com/postgres/postgres/commit/705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33\n\nYes, I missed that one, added:\n\n\t<!--\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\t2022-03-14 [705e20f85] Optionally disable subscriptions on error.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tAllow subscribers to stop logical replication application on error\n\t(Osumi Takamichi, Mark Dilger)\n\t</para>\n\t\n\t<para>\n\tThis is enabled with the subscriber option \"disable_on_error\"\n\tand avoids possible infinite loops during stream application.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 10:10:18 -0400",
"msg_from": "'Bruce Momjian' <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> I looked at that but thought that everyone would already assume we\n> skipped replication of empty transactions, and I didn't see much impact\n> for the user, so I didn't include it.\nIt certainly has an impact on heavy workloads that replicate tables with few\nmodifications. It receives a high traffic of 'begin' and 'commit' messages that\nthe previous Postgres versions have to handle (discard). I would classify it as\na performance improvement for logical replication. Don't have a strong opinion\nif it should be mentioned or not.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:I looked at that but thought that everyone would already assume weskipped replication of empty transactions, and I didn't see much impactfor the user, so I didn't include it.It certainly has an impact on heavy workloads that replicate tables with fewmodifications. It receives a high traffic of 'begin' and 'commit' messages thatthe previous Postgres versions have to handle (discard). I would classify it asa performance improvement for logical replication. Don't have a strong opinionif it should be mentioned or not.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 12 May 2022 11:12:54 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 11:12:54AM -0300, Euler Taveira wrote:\nOB> On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> \n> I looked at that but thought that everyone would already assume we\n> skipped replication of empty transactions, and I didn't see much impact\n> for the user, so I didn't include it.\n> \n> It certainly has an impact on heavy workloads that replicate tables with few\n> modifications. It receives a high traffic of 'begin' and 'commit' messages that\n> the previous Postgres versions have to handle (discard). I would classify it as\n> a performance improvement for logical replication. Don't have a strong opinion\n> if it should be mentioned or not.\n\nOh, so your point is that a transaction that only has SELECT would\npreviously send an empty transaction? I thought this was only for apps\nthat create literal empty transactions, which seem rare.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 10:22:17 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "I wonder if this is worth mentioning:\n\nRaise a WARNING for missing publications.\ncommit 8f2e2bbf145384784bad07a96d461c6bbd91f597\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8f2e2bbf145384784bad07a96d461c6bbd91f597\n\nRegards,\nVignesh\n\n\nOn Thu, May 12, 2022 at 7:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Thu, May 12, 2022 at 11:12:54AM -0300, Euler Taveira wrote:\n> OB> On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> >\n> > I looked at that but thought that everyone would already assume we\n> > skipped replication of empty transactions, and I didn't see much impact\n> > for the user, so I didn't include it.\n> >\n> > It certainly has an impact on heavy workloads that replicate tables with few\n> > modifications. It receives a high traffic of 'begin' and 'commit' messages that\n> > the previous Postgres versions have to handle (discard). I would classify it as\n> > a performance improvement for logical replication. Don't have a strong opinion\n> > if it should be mentioned or not.\n>\n> Oh, so your point is that a transaction that only has SELECT would\n> previously send an empty transaction? I thought this was only for apps\n> that create literal empty transactions, which seem rare.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Indecision is a decision. Inaction is an action. Mark Batterson\n>\n>\n>\n\n\n",
"msg_date": "Thu, 12 May 2022 20:05:40 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 08:05:40PM +0530, vignesh C wrote:\n> I wonder if this is worth mentioning:\n> \n> Raise a WARNING for missing publications.\n> commit 8f2e2bbf145384784bad07a96d461c6bbd91f597\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8f2e2bbf145384784bad07a96d461c6bbd91f597\n\nReading the commit message, it looked like only a warning was being\nadded, which was more of a helpful change rather than something we need\nto mention. However, if this means you could can now create a\nsubscription for a missing publication that you couldn't do before, it\nshould be added --- I couldn't tell from the patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 15:14:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022, at 11:22 AM, Bruce Momjian wrote:\n> On Thu, May 12, 2022 at 11:12:54AM -0300, Euler Taveira wrote:\n> OB> On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> > \n> > I looked at that but thought that everyone would already assume we\n> > skipped replication of empty transactions, and I didn't see much impact\n> > for the user, so I didn't include it.\n> > \n> > It certainly has an impact on heavy workloads that replicate tables with few\n> > modifications. It receives a high traffic of 'begin' and 'commit' messages that\n> > the previous Postgres versions have to handle (discard). I would classify it as\n> > a performance improvement for logical replication. Don't have a strong opinion\n> > if it should be mentioned or not.\n> \n> Oh, so your point is that a transaction that only has SELECT would\n> previously send an empty transaction? I thought this was only for apps\n> that create literal empty transactions, which seem rare.\nNo. It should be a write transaction. 
If you have a replication setup that\npublish only table foo (that isn't modified often) and most of your\nworkload does not contain table foo, Postgres sends 'begin' and 'commit'\nmessages to subscriber even if there is no change to replicate.\n\nLet me show you an example:\n\npostgres=# CREATE TABLE foo (a integer primary key, b text);\nCREATE TABLE\npostgres=# CREATE TABLE bar (c integer primary key, d text);\nCREATE TABLE\npostgres=# CREATE TABLE baz (e integer primary key, f text);\nCREATE TABLE\npostgres=# CREATE PUBLICATION pubfoo FOR TABLE foo;\nCREATE PUBLICATION\npostgres=# SELECT pg_create_logical_replication_slot('slotfoo', 'pgoutput');\npg_create_logical_replication_slot \n------------------------------------\n(slotfoo,0/E709AC50)\n(1 row)\n\nLet's create a transaction without table foo:\n\npostgres=# BEGIN;\nBEGIN\npostgres=*# INSERT INTO bar (c, d) VALUES(1, 'blah');\nINSERT 0 1\npostgres=*# INSERT INTO baz (e, f) VALUES(2, 'xpto');\nINSERT 0 1\npostgres=*# COMMIT;\nCOMMIT\n\nAs you can see, the replication slot contains messages for that transaction.\nAlthough, table bar and baz are NOT published, the begin (B) and commit (C)\nmessages that refers to this transaction are sent to subscriber.\n\npostgres=# SELECT chr(get_byte(data, 0)) FROM \npg_logical_slot_peek_binary_changes('slotfoo', NULL, NULL, \n'proto_version', '1', 'publication_names', 'pubfoo');\nchr \n-----\nB\nC\n(2 rows)\n\nIf you execute another transaction without table foo, there will be another B/C\npair.\n\npostgres=# DELETE FROM baz WHERE e = 2;\nDELETE 1\npostgres=# SELECT chr(get_byte(data, 0)) FROM \npg_logical_slot_peek_binary_changes('slotfoo', NULL, NULL, \n'proto_version', '1', 'publication_names', 'pubfoo');\nchr \n-----\nB\nC\nB\nC\n(4 rows)\n\nLet's create a transaction that uses table foo but also table bar:\n\npostgres=# BEGIN;\nBEGIN\npostgres=*# INSERT INTO foo (a, b) VALUES(100, 'asdf');\nINSERT 0 1\npostgres=*# INSERT INTO bar (c, d) VALUES(200, 'qwert');\nINSERT 0 
1\npostgres=*# COMMIT;\nCOMMIT\n\nIn this case, there will be other messages since the publication pubfoo\npublishes table foo. ('I' means there is an INSERT for table foo).\n\npostgres=# SELECT chr(get_byte(data, 0)), length(data) FROM \npg_logical_slot_peek_binary_changes('slotfoo', NULL, NULL, \n'proto_version', '1', 'publication_names', 'pubfoo');\nchr | length \n-----+--------\nB | 21\nC | 26\nB | 21\nC | 26\nB | 21\nR | 41\nI | 25\nC | 26\n(8 rows)\n\n\nIn summary, a logical replication setup sends 47 bytes per skipped transaction.\nv15 won't send the first 2 B/C pairs. Discussion started here [1].\n\n[1] https://postgr.es/m/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 12 May 2022 21:31:20 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thursday, May 12, 2022 11:10 PM 'Bruce Momjian' <bruce@momjian.us> wrote:\n> On Thu, May 12, 2022 at 01:35:39PM +0000, osumi.takamichi@fujitsu.com\n> wrote:\n> > I'd like to suggest that we mention a new option for subscription\n> 'disable_on_error'.\n> >\n> >\n> >\n> https://github.com/postgres/postgres/commit/705e20f8550c0e8e47c0b6b20\n> b5f5ffd6ffd9e33\n> \n> Yes, I missed that one, added:\n> \n> \t<!--\n> \tAuthor: Amit Kapila <akapila@postgresql.org>\n> \t2022-03-14 [705e20f85] Optionally disable subscriptions on error.\n> \t-->\n> \n> \t<listitem>\n> \t<para>\n> \tAllow subscribers to stop logical replication application on error\n> \t(Osumi Takamichi, Mark Dilger)\n> \t</para>\n> \n> \t<para>\n> \tThis is enabled with the subscriber option \"disable_on_error\"\n> \tand avoids possible infinite loops during stream application.\n> \t</para>\n> \t</listitem>\nThank you !\n\nIn this last paragraph, how about replacing \"infinite loops\"\nwith \"infinite error loops\" ? I think it makes the situation somewhat\nclear for readers.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Fri, 13 May 2022 01:36:04 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 12, 2022 at 10:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Okay, I went with:\n>\n> Previously, such updates ran delete actions on the source\n> partition and insert actions on the target partition. PostgreSQL will\n> now run update actions on the referenced partition root.\n\nWFM, thanks.\n\nBtw, perhaps the following should be listed under E.1.3.2.1. Logical\nReplication, not E.1.3.1.1. Partitioning?\n\n<!--\nAuthor: Amit Kapila <akapila@postgresql.org>\n2021-12-08 [a61bff2bf] De-duplicate the result of pg_publication_tables view.\n-->\n\n<listitem>\n<para>\nRemove incorrect duplicate partitions in system view\npg_publication_tables (Hou Zhijie)\n</para>\n</listitem>\n\nAttached a patch to do so.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 May 2022 10:48:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 7:19 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, May 12, 2022 at 10:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Okay, I went with:\n> >\n> > Previously, such updates ran delete actions on the source\n> > partition and insert actions on the target partition. PostgreSQL will\n> > now run update actions on the referenced partition root.\n>\n> WFM, thanks.\n>\n> Btw, perhaps the following should be listed under E.1.3.2.1. Logical\n> Replication, not E.1.3.1.1. Partitioning?\n>\n\nRight.\n\n> <!--\n> Author: Amit Kapila <akapila@postgresql.org>\n> 2021-12-08 [a61bff2bf] De-duplicate the result of pg_publication_tables view.\n> -->\n>\n> <listitem>\n> <para>\n> Remove incorrect duplicate partitions in system view\n> pg_publication_tables (Hou Zhijie)\n> </para>\n> </listitem>\n>\n> Attached a patch to do so.\n>\n\nI don't see any attachment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 May 2022 08:14:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 11:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, May 13, 2022 at 7:19 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Thu, May 12, 2022 at 10:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > > Okay, I went with:\n> > >\n> > > Previously, such updates ran delete actions on the source\n> > > partition and insert actions on the target partition. PostgreSQL will\n> > > now run update actions on the referenced partition root.\n> >\n> > WFM, thanks.\n> >\n> > Btw, perhaps the following should be listed under E.1.3.2.1. Logical\n> > Replication, not E.1.3.1.1. Partitioning?\n> >\n>\n> Right.\n>\n> > <!--\n> > Author: Amit Kapila <akapila@postgresql.org>\n> > 2021-12-08 [a61bff2bf] De-duplicate the result of pg_publication_tables view.\n> > -->\n> >\n> > <listitem>\n> > <para>\n> > Remove incorrect duplicate partitions in system view\n> > pg_publication_tables (Hou Zhijie)\n> > </para>\n> > </listitem>\n> >\n> > Attached a patch to do so.\n> >\n>\n> I don't see any attachment.\n\nOops, attached this time.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 13 May 2022 11:45:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 6:02 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, May 12, 2022, at 11:22 AM, Bruce Momjian wrote:\n>\n> On Thu, May 12, 2022 at 11:12:54AM -0300, Euler Taveira wrote:\n> OB> On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> >\n> > I looked at that but thought that everyone would already assume we\n> > skipped replication of empty transactions, and I didn't see much impact\n> > for the user, so I didn't include it.\n> >\n> > It certainly has an impact on heavy workloads that replicate tables with few\n> > modifications. It receives a high traffic of 'begin' and 'commit' messages that\n> > the previous Postgres versions have to handle (discard). I would classify it as\n> > a performance improvement for logical replication. Don't have a strong opinion\n> > if it should be mentioned or not.\n>\n> Oh, so your point is that a transaction that only has SELECT would\n> previously send an empty transaction? I thought this was only for apps\n> that create literal empty transactions, which seem rare.\n>\n> No. It should be a write transaction. If you have a replication setup that\n> publish only table foo (that isn't modified often) and most of your\n> workload does not contain table foo, Postgres sends 'begin' and 'commit'\n> messages to subscriber even if there is no change to replicate.\n>\n\nIt reduces network traffic and improves performance by 3-14% on simple\ntests [1] like the one shown by Euler. I see a value in adding this as\nfor the workloads where it hits, it seems more than 99% of network\ntraffic [2] is due to these empty messages.\n\n[1] - https://www.postgresql.org/message-id/OSZPR01MB63105A71CFAA46F5BD7C9D7CFD1E9%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAMkU=1yohp9-dv48FLoSPrMqYEyyS5ZWkaZGD41RJr10xiNo_Q@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 May 2022 08:24:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 01:36:04AM +0000, osumi.takamichi@fujitsu.com wrote:\n> > \t<para>\n> > \tThis is enabled with the subscriber option \"disable_on_error\"\n> > \tand avoids possible infinite loops during stream application.\n> > \t</para>\n> > \t</listitem>\n> Thank you !\n> \n> In this last paragraph, how about replacing \"infinite loops\"\n> with \"infinite error loops\" ? I think it makes the situation somewhat\n> clear for readers.\n\nAgreed, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 13 May 2022 11:06:40 -0400",
"msg_from": "'Bruce Momjian' <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 10:48:41AM +0900, Amit Langote wrote:\n> On Thu, May 12, 2022 at 10:52 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Okay, I went with:\n> >\n> > Previously, such updates ran delete actions on the source\n> > partition and insert actions on the target partition. PostgreSQL will\n> > now run update actions on the referenced partition root.\n> \n> WFM, thanks.\n> \n> Btw, perhaps the following should be listed under E.1.3.2.1. Logical\n> Replication, not E.1.3.1.1. Partitioning?\n> \n> <!--\n> Author: Amit Kapila <akapila@postgresql.org>\n> 2021-12-08 [a61bff2bf] De-duplicate the result of pg_publication_tables view.\n> -->\n> \n> <listitem>\n> <para>\n> Remove incorrect duplicate partitions in system view\n> pg_publication_tables (Hou Zhijie)\n> </para>\n> </listitem>\n> \n> Attached a patch to do so.\n\nAgreed, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 13 May 2022 11:42:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 08:24:53AM +0530, Amit Kapila wrote:\n> On Fri, May 13, 2022 at 6:02 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Thu, May 12, 2022, at 11:22 AM, Bruce Momjian wrote:\n> >\n> > On Thu, May 12, 2022 at 11:12:54AM -0300, Euler Taveira wrote:\n> > OB> On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> > >\n> > > I looked at that but thought that everyone would already assume we\n> > > skipped replication of empty transactions, and I didn't see much impact\n> > > for the user, so I didn't include it.\n> > >\n> > > It certainly has an impact on heavy workloads that replicate tables with few\n> > > modifications. It receives a high traffic of 'begin' and 'commit' messages that\n> > > the previous Postgres versions have to handle (discard). I would classify it as\n> > > a performance improvement for logical replication. Don't have a strong opinion\n> > > if it should be mentioned or not.\n> >\n> > Oh, so your point is that a transaction that only has SELECT would\n> > previously send an empty transaction? I thought this was only for apps\n> > that create literal empty transactions, which seem rare.\n> >\n> > No. It should be a write transaction. If you have a replication setup that\n> > publish only table foo (that isn't modified often) and most of your\n> > workload does not contain table foo, Postgres sends 'begin' and 'commit'\n> > messages to subscriber even if there is no change to replicate.\n> >\n> \n> It reduces network traffic and improves performance by 3-14% on simple\n> tests [1] like the one shown by Euler. 
I see a value in adding this as\n> for the workloads where it hits, it seems more than 99% of network\n> traffic [2] is due to these empty messages.\n\nI see the point now --- new item:\n\n\t<!--\n\tAuthor: Amit Kapila <akapila@postgresql.org>\n\t2022-03-30 [d5a9d86d8] Skip empty transactions for logical replication.\n\t-->\n\t\n\t<listitem>\n\t<para>\n\tPrevent logical replication of empty transactions (Ajin Cherian,\n\tHou Zhijie, Euler Taveira)\n\t</para>\n\t\n\t<para>\n\tPreviously, write transactions would send empty transactions to\n\tsubscribers if subscribed tables were not modified.\n\t</para>\n\t</listitem>\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 13 May 2022 11:48:42 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, May 13, 2022 at 9:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Fri, May 13, 2022 at 08:24:53AM +0530, Amit Kapila wrote:\n> > On Fri, May 13, 2022 at 6:02 AM Euler Taveira <euler@eulerto.com> wrote:\n> > >\n> > > On Thu, May 12, 2022, at 11:22 AM, Bruce Momjian wrote:\n> > >\n> > > On Thu, May 12, 2022 at 11:12:54AM -0300, Euler Taveira wrote:\n> > > OB> On Thu, May 12, 2022, at 11:03 AM, Bruce Momjian wrote:\n> > > >\n> > > > I looked at that but thought that everyone would already assume we\n> > > > skipped replication of empty transactions, and I didn't see much impact\n> > > > for the user, so I didn't include it.\n> > > >\n> > > > It certainly has an impact on heavy workloads that replicate tables with few\n> > > > modifications. It receives a high traffic of 'begin' and 'commit' messages that\n> > > > the previous Postgres versions have to handle (discard). I would classify it as\n> > > > a performance improvement for logical replication. Don't have a strong opinion\n> > > > if it should be mentioned or not.\n> > >\n> > > Oh, so your point is that a transaction that only has SELECT would\n> > > previously send an empty transaction? I thought this was only for apps\n> > > that create literal empty transactions, which seem rare.\n> > >\n> > > No. It should be a write transaction. If you have a replication setup that\n> > > publish only table foo (that isn't modified often) and most of your\n> > > workload does not contain table foo, Postgres sends 'begin' and 'commit'\n> > > messages to subscriber even if there is no change to replicate.\n> > >\n> >\n> > It reduces network traffic and improves performance by 3-14% on simple\n> > tests [1] like the one shown by Euler. 
I see a value in adding this as\n> > for the workloads where it hits, it seems more than 99% of network\n> > traffic [2] is due to these empty messages.\n>\n> I see the point now --- new item:\n>\n> <!--\n> Author: Amit Kapila <akapila@postgresql.org>\n> 2022-03-30 [d5a9d86d8] Skip empty transactions for logical replication.\n> -->\n>\n> <listitem>\n> <para>\n> Prevent logical replication of empty transactions (Ajin Cherian,\n> Hou Zhijie, Euler Taveira)\n> </para>\n>\n> <para>\n> Previously, write transactions would send empty transactions to\n> subscribers if subscribed tables were not modified.\n> </para>\n> </listitem>\n>\n\nThanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 14 May 2022 10:22:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Sat, May 14, 2022 at 10:22:10AM +0530, Amit Kapila wrote:\n> > I see the point now --- new item:\n> >\n> > <!--\n> > Author: Amit Kapila <akapila@postgresql.org>\n> > 2022-03-30 [d5a9d86d8] Skip empty transactions for logical replication.\n> > -->\n> >\n> > <listitem>\n> > <para>\n> > Prevent logical replication of empty transactions (Ajin Cherian,\n> > Hou Zhijie, Euler Taveira)\n> > </para>\n> >\n> > <para>\n> > Previously, write transactions would send empty transactions to\n> > subscribers if subscribed tables were not modified.\n> > </para>\n> > </listitem>\n> >\n> \n> Thanks!\n\nI will admit I had a little trouble with the wording of this item. :-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Sat, 14 May 2022 12:51:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Saturday, May 14, 2022 12:07 AM 'Bruce Momjian' <bruce@momjian.us> wrote:\n> On Fri, May 13, 2022 at 01:36:04AM +0000, osumi.takamichi@fujitsu.com wrote:\n> > > \t<para>\n> > > \tThis is enabled with the subscriber option \"disable_on_error\"\n> > > \tand avoids possible infinite loops during stream application.\n> > > \t</para>\n> > > \t</listitem>\n> > Thank you !\n> >\n> > In this last paragraph, how about replacing \"infinite loops\"\n> > with \"infinite error loops\" ? I think it makes the situation somewhat\n> > clear for readers.\n> \n> Agreed, done.\nThanks !\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 16 May 2022 00:58:46 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: First draft of the PG 15 release notes"
},
{
"msg_contents": "Hi Bruce,\n\n\"Improve validation of ASCII and UTF-8 text by processing 16 bytes at\na time (John Naylor)\"\n\nThe reader might assume here that ASCII is optimized regardless of\nencoding, but it is only optimized in the context of UTF-8. So I would\njust mention UTF-8.\n\nThanks!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 May 2022 13:21:22 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, May 16, 2022 at 01:21:22PM +0700, John Naylor wrote:\n> Hi Bruce,\n> \n> \"Improve validation of ASCII and UTF-8 text by processing 16 bytes at\n> a time (John Naylor)\"\n> \n> The reader might assume here that ASCII is optimized regardless of\n> encoding, but it is only optimized in the context of UTF-8. So I would\n> just mention UTF-8.\n\nI struggled with this item because it seemed to me that even if the\nUTF-8 text was only ASCII, it would benefit, so I just rewrote it to:\n\n\tImprove validation of UTF-8 text (even ASCII-only) by processing 16\n\tbytes at a time (John Naylor)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 16 May 2022 10:09:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, May 16, 2022 at 10:09:18AM -0400, Bruce Momjian wrote:\n> On Mon, May 16, 2022 at 01:21:22PM +0700, John Naylor wrote:\n> > Hi Bruce,\n> > \n> > \"Improve validation of ASCII and UTF-8 text by processing 16 bytes at\n> > a time (John Naylor)\"\n> > \n> > The reader might assume here that ASCII is optimized regardless of\n> > encoding, but it is only optimized in the context of UTF-8. So I would\n> > just mention UTF-8.\n> \n> I struggled with this item because it seemed to me that even if the\n> UTF-8 text was only ASCII, it would benefit, so I just rewrote it to:\n> \n> \tImprove validation of UTF-8 text (even ASCII-only) by processing 16\n> \tbytes at a time (John Naylor)\n\nNewer wording:\n\n\tImprove validation of UTF-8 text (even if only ASCII) by processing\n\t16 bytes at a time (John Naylor)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 16 May 2022 10:18:27 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 2022-May-10, Bruce Momjian wrote:\n\n> I have completed the first draft of the PG 15 release notes and you can\n> see the results here:\n> \n> https://momjian.us/pgsql_docs/release-15.html\n\nJust to be clear -- 15beta1 will be released with the \"new features and\nenhancements\" list as empty, is that right?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 May 2022 17:41:08 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, May 16, 2022 at 05:41:08PM +0200, Alvaro Herrera wrote:\n> On 2022-May-10, Bruce Momjian wrote:\n> \n> > I have completed the first draft of the PG 15 release notes and you can\n> > see the results here:\n> > \n> > https://momjian.us/pgsql_docs/release-15.html\n> \n> Just to be clear -- 15beta1 will be released with the \"new features and\n> enhancements\" list as empty, is that right?\n\nI assume so. Last year, it was empty until *after* 14rc1.\n\nhttps://www.postgresql.org/message-id/flat/CAH2-WzntQzn_jJSUZYZMPSTgc0C98_mZXYBU0TxzHLvkieTGUA@mail.gmail.com\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 May 2022 11:02:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, May 16, 2022 at 11:02:39AM -0500, Justin Pryzby wrote:\n> On Mon, May 16, 2022 at 05:41:08PM +0200, Alvaro Herrera wrote:\n> > On 2022-May-10, Bruce Momjian wrote:\n> > \n> > > I have completed the first draft of the PG 15 release notes and you can\n> > > see the results here:\n> > > \n> > > https://momjian.us/pgsql_docs/release-15.html\n> > \n> > Just to be clear -- 15beta1 will be released with the \"new features and\n> > enhancements\" list as empty, is that right?\n> \n> I assume so. Last year, it was empty until *after* 14rc1.\n> \n> https://www.postgresql.org/message-id/flat/CAH2-WzntQzn_jJSUZYZMPSTgc0C98_mZXYBU0TxzHLvkieTGUA@mail.gmail.com\n\nYeah, Jonathan does that, and it is pulled usually from the press\nrelease.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 16 May 2022 12:42:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Sat, May 14, 2022 at 12:42 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Fri, May 13, 2022 at 10:48:41AM +0900, Amit Langote wrote:\n> > Btw, perhaps the following should be listed under E.1.3.2.1. Logical\n> > Replication, not E.1.3.1.1. Partitioning?\n> >\n> > <!--\n> > Author: Amit Kapila <akapila@postgresql.org>\n> > 2021-12-08 [a61bff2bf] De-duplicate the result of pg_publication_tables view.\n> > -->\n> >\n> > <listitem>\n> > <para>\n> > Remove incorrect duplicate partitions in system view\n> > pg_publication_tables (Hou Zhijie)\n> > </para>\n> > </listitem>\n> >\n> > Attached a patch to do so.\n>\n> Agreed, done.\n\nThank you.\n\nThough a bit late given beta is now wrapped, I have another partition\nitem wording improvement suggestion:\n\n-Previously, a partitioned table with any LIST partition containing\nmultiple values could not be used for ordered partition scans. Now\nonly non-pruned LIST partitions are checked. This also helps with\n-partitioned tables with DEFAULT partitions.\n\n+Previously, an ordered partition scan would not be considered for a\nLIST-partitioned table with any partition containing multiple values,\nnor for partitioned tables with DEFAULT partition.\n\nI think the \"Now only non-pruned LIST partitions are checked\" bit in\nthe original wording is really an implementation detail of the actual\nimprovement that ordered partition scans are now possible in more\ncases -- it simply became easier for the code that implements this\noptimization to refer to non-pruned partitions, using a bitmapset\nrather than having to trawl through the whole array of partition rels,\nwhich is what I think the commit message of this item mentions. David\ncan correct me if I got that wrong.\n\nAttached a patch.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 19 May 2022 11:40:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, 19 May 2022 at 14:41, Amit Langote <amitlangote09@gmail.com> wrote:\n> Though a bit late given beta is now wrapped, I have another partition\n> item wording improvement suggestion:\n>\n> -Previously, a partitioned table with any LIST partition containing\n> multiple values could not be used for ordered partition scans. Now\n> only non-pruned LIST partitions are checked. This also helps with\n> -partitioned tables with DEFAULT partitions.\n>\n> +Previously, an ordered partition scan would not be considered for a\n> LIST-partitioned table with any partition containing multiple values,\n> nor for partitioned tables with DEFAULT partition.\n\nI think your proposed wording does not really improve things. The\n\"Now only non-pruned LIST partitions are checked\" is important and I\nthink Bruce did the right thing to mention that. Prior to this change,\nordered scans were not possible if there was a DEFAULT or if any LIST\npartition allowed >1 value. Now, if the default partition is pruned\nand there are no non-pruned partitions that allow Datum values that\nare inter-mixed with ones from another non-pruned partition, then an\nordered scan can be performed.\n\nFor example, non-pruned partition a allows IN(1,3), and non-pruned\npartition b allows IN(2,4), we cannot do the ordered scan. With\nIN(1,2), IN(3,4), we can.\n\nDavid\n\n\n",
"msg_date": "Thu, 19 May 2022 17:55:54 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 19, 2022 at 2:56 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 19 May 2022 at 14:41, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Though a bit late given beta is now wrapped, I have another partition\n> > item wording improvement suggestion:\n> >\n> > -Previously, a partitioned table with any LIST partition containing\n> > multiple values could not be used for ordered partition scans. Now\n> > only non-pruned LIST partitions are checked. This also helps with\n> > -partitioned tables with DEFAULT partitions.\n> >\n> > +Previously, an ordered partition scan would not be considered for a\n> > LIST-partitioned table with any partition containing multiple values,\n> > nor for partitioned tables with DEFAULT partition.\n>\n> I think your proposed wording does not really improve things. The\n> \"Now only non-pruned LIST partitions are checked\" is important and I\n> think Bruce did the right thing to mention that. Prior to this change,\n> ordered scans were not possible if there was a DEFAULT or if any LIST\n> partition allowed >1 value. Now, if the default partition is pruned\n> and there are no non-pruned partitions that allow Datum values that\n> are inter-mixed with ones from another non-pruned partition, then an\n> ordered scan can be performed.\n>\n> For example, non-pruned partition a allows IN(1,3), and non-pruned\n> partition b allows IN(2,4), we cannot do the ordered scan. With\n> IN(1,2), IN(3,4), we can.\n\nI think that's what I understood this change to be about. Before this\nchange, partitions_are_ordered() only returned true if *all*\npartitions of a parent are known to be ordered, which they're not in\nthe presence of the default partition and of a list partition\ncontaining out-of-order values. 
It didn't matter to\npartitions_are_ordered() that the caller might not care about those\npartitions being present in the PartitionDesc because of having been\npruned by the query, but that information was not readily available .\nSo, you added PartitionBoundInfo.interleaved_parts to record indexes\nof partitions containing out-of-order values and RelOptInfo.live_parts\nto record non-pruned partitions, which made it more feasible for\npartitions_are_ordered() to address those cases. I suppose you think\nit's better to be verbose by mentioning that partitions_are_ordered()\nnow considers only non-pruned partitions which allows supporting more\ncases, but I see that as mentioning implementation details\nunnecessarily.\n\nOr maybe we could mention that but use a wording that doesn't make it\nsound like an implementation detail, like:\n\n+Previously, an ordered partition scan could not be used for a\nLIST-partitioned table with any partition containing multiple values,\nnor for partitioned tables with DEFAULT partition. Now it can be used\nin those cases at least for queries in which such partitions are\npruned.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 19 May 2022 18:13:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, May 16, 2022 at 9:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Newer wording:\n>\n> Improve validation of UTF-8 text (even if only ASCII) by processing\n> 16 bytes at a time (John Naylor)\n\nThanks! I also think Heikki should be mentioned as a coauthor here --\nthe ASCII coding was his work in large part.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 12:16:10 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 24, 2022 at 12:16:10PM +0700, John Naylor wrote:\n> On Mon, May 16, 2022 at 9:18 PM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Newer wording:\n> >\n> > Improve validation of UTF-8 text (even if only ASCII) by processing\n> > 16 bytes at a time (John Naylor)\n> \n> Thanks! I also think Heikki should be mentioned as a coauthor here --\n> the ASCII coding was his work in large part.\n\nSure, done.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 19:31:56 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 19, 2022 at 06:13:28PM +0900, Amit Langote wrote:\n> Or maybe we could mention that but use a wording that doesn't make it\n> sound like an implementation detail, like:\n> \n> +Previously, an ordered partition scan could not be used for a\n> LIST-partitioned table with any partition containing multiple values,\n> nor for partitioned tables with DEFAULT partition. Now it can be used\n> in those cases at least for queries in which such partitions are\n> pruned.\n\nSorry, I just don't see this as an improvement because it starts with a\ncomplex term \"an ordered partition scan\" rather than simply \"a\npartitioned table\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 19:36:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 25, 2022 at 8:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, May 19, 2022 at 06:13:28PM +0900, Amit Langote wrote:\n> > Or maybe we could mention that but use a wording that doesn't make it\n> > sound like an implementation detail, like:\n> >\n> > +Previously, an ordered partition scan could not be used for a\n> > LIST-partitioned table with any partition containing multiple values,\n> > nor for partitioned tables with DEFAULT partition. Now it can be used\n> > in those cases at least for queries in which such partitions are\n> > pruned.\n>\n> Sorry, I just don't see this as an improvement because it starts with a\n> complex term \"an ordered partition scan\" rather than simply \"a\n> partitioned table\".\n\nThe headline says \"Allow ordered scans of partitions to avoid sorting\nin more cases\", so I proposed starting the description too with \"an\nordered scan\". Also, not sure about going with:\n\n\"previously, <table-with-limiting-properties> could not be used for\n<scan-method>, but now it can be provided <conditions>\"\n\ninstead of:\n\n\"previously, <scan-method> could not be used for\n<table-with-limiting-properties>, but now it can be provided\n<conditions>\"\n\nas in my proposed wording, but maybe that's just me.\n\nAnyway, I still think it would be better to fix the description such\nthat the cases in which ordered scans will continue to not be usable\nare clear. The existing text doesn't make clear, for example, that a\nDEFAULT partition if present must have been pruned for an ordered scan\nto be used. So I propose:\n\n+Previously, a partitioned table with DEFAULT partition or a LIST\npartition containing multiple values could not be used for ordered\npartition scans. Now it can be used at least in the cases where such\npartitions are pruned.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 May 2022 12:00:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, 25 May 2022 at 15:01, Amit Langote <amitlangote09@gmail.com> wrote:\n> +Previously, a partitioned table with DEFAULT partition or a LIST\n> partition containing multiple values could not be used for ordered\n> partition scans. Now it can be used at least in the cases where such\n> partitions are pruned.\n\nI think this one is an improvement. I'd drop \"at least\".\n\nDavid\n\n\n",
"msg_date": "Wed, 25 May 2022 15:44:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 25, 2022 at 12:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 25 May 2022 at 15:01, Amit Langote <amitlangote09@gmail.com> wrote:\n> > +Previously, a partitioned table with DEFAULT partition or a LIST\n> > partition containing multiple values could not be used for ordered\n> > partition scans. Now it can be used at least in the cases where such\n> > partitions are pruned.\n>\n> I think this one is an improvement. I'd drop \"at least\".\n\nOkay, I can agree that \"at least\" sounds a bit extraneous, so removed.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 25 May 2022 13:04:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, May 25, 2022 at 1:04 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, May 25, 2022 at 12:44 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Wed, 25 May 2022 at 15:01, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > +Previously, a partitioned table with DEFAULT partition or a LIST\n> > > partition containing multiple values could not be used for ordered\n> > > partition scans. Now it can be used at least in the cases where such\n> > > partitions are pruned.\n> >\n> > I think this one is an improvement. I'd drop \"at least\".\n>\n> Okay, I can agree that \"at least\" sounds a bit extraneous, so removed.\n\n* I think it's better to s/...or a LIST partition/...or with a LIST partition\n\n* The capitalization of DEFAULT and LIST seems unnecessary.\n\nUpdated the patch on those points.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 May 2022 10:31:14 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 26, 2022 at 10:31:14AM +0900, Amit Langote wrote:\n> * I think it's better to s/...or a LIST partition/...or with a LIST partition\n> \n> * The capitalization of DEFAULT and LIST seems unnecessary.\n> \n> Updated the patch on those points.\n\nI went with this text:\n\n\tPreviously, a partitioned table with a DEFAULT partition or a LIST\n\tpartition containing multiple values could not be used for ordered\n\tpartition scans. Now they can be used if these partitions are pruned.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 25 May 2022 22:17:51 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, May 26, 2022 at 11:17 AM Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, May 26, 2022 at 10:31:14AM +0900, Amit Langote wrote:\n> > * I think it's better to s/...or a LIST partition/...or with a LIST partition\n> >\n> > * The capitalization of DEFAULT and LIST seems unnecessary.\n> >\n> > Updated the patch on those points.\n>\n> I went with this text:\n>\n> Previously, a partitioned table with a DEFAULT partition or a LIST\n> partition containing multiple values could not be used for ordered\n> partition scans. Now they can be used if these partitions are pruned.\n\nGood enough for me, thanks.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 May 2022 11:24:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
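The behavior the finalized note describes can be sketched with a small example (table and partition names here are invented for illustration; whether the planner actually drops the Sort depends on indexes and costs):

```sql
-- PG 15: an ordered partition scan becomes legal once the "problem"
-- partitions are pruned. t_bc holds multiple values and t_def is the
-- DEFAULT partition, so neither can take part in an ordered scan.
CREATE TABLE t (a text) PARTITION BY LIST (a);
CREATE TABLE t_a   PARTITION OF t FOR VALUES IN ('a');
CREATE TABLE t_bc  PARTITION OF t FOR VALUES IN ('b', 'c');
CREATE TABLE t_d   PARTITION OF t FOR VALUES IN ('d');
CREATE TABLE t_def PARTITION OF t DEFAULT;

-- t_bc and t_def are pruned here, so the surviving partitions
-- (t_a, t_d) are ordered by their bounds and the ORDER BY may be
-- satisfied without a Sort node above the Append:
EXPLAIN SELECT * FROM t WHERE a IN ('a', 'd') ORDER BY a;
```

The point is only that pruning of the multi-value and DEFAULT partitions now makes the ordered path available; on PG 14 the same query always needed a Sort.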
{
"msg_contents": "On Tue, May 10, 2022 at 11:44:15AM -0400, Bruce Momjian wrote:\n> I have completed the first draft of the PG 15 release notes\n\n> <!--\n> Author: Noah Misch <noah@leadboat.com>\n> 2021-09-09 [b073c3ccd] Revoke PUBLIC CREATE from public schema, now owned by pg\n> -->\n> \n> <listitem>\n> <para>\n> Remove <literal>PUBLIC</literal> creation permission on the <link\n> linkend=\"ddl-schemas-public\"><literal>public</literal> schema</link>\n> (Noah Misch)\n> </para>\n> \n> <para>\n> This is a change in the default for newly-created databases in\n> existing clusters and for new clusters; <literal>USAGE</literal>\n\nIf you dump/reload an unmodified v14 template1 (as pg_dumpall and pg_upgrade\ndo), your v15 template1 will have a v14 ACL on its public schema. At that\npoint, the fate of \"newly-created databases in existing clusters\" depends on\nwhether you clone template1 or template0. Does any of that detail belong\nhere, or does the existing text suffice?\n\n> permissions on the <literal>public</literal> schema has not\n> been changed. Databases restored from previous Postgres releases\n> will be restored with their current permissions. Users wishing\n> to have the old permissions on new objects will need to grant\n\nThe phrase \"old permissions on new objects\" doesn't sound right to me, but I'm\nnot sure why. I think you're aiming for the fact that this is just a default;\none can still change the ACL to anything, including to the old default. If\nthese notes are going to mention the old default like they do so far, I think\nthey should also urge readers to understand\nhttps://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\nbefore returning to the old default. What do you think?\n\n> <literal>CREATE</literal> permission for <literal>PUBLIC</literal>\n> on the <literal>public</literal> schema; this change can be made\n> on <literal>template1</literal> to cause all new databases\n> to have these permissions. <literal>template1</literal>\n> permissions for <application>pg_dumpall</application> and\n> <application>pg_upgrade</application>?\n\npg_dumpall will change template1. I think pg_upgrade will too, and neither\nprogram will change template0.\n\n> </para>\n> </listitem>\n> \n> <!--\n> Author: Noah Misch <noah@leadboat.com>\n> 2021-09-09 [b073c3ccd] Revoke PUBLIC CREATE from public schema, now owned by pg\n> -->\n> \n> <listitem>\n> <para>\n> Change the owner of the <literal>public</literal> schema to\n> <literal>pg_database_owner</literal> (Noah Misch)\n> </para>\n> \n> <para>\n> Previously it was the literal user name of the database owner.\n\nIt was the bootstrap superuser.\n\n> Databases restored from previous Postgres releases will be restored\n> with their current owner specification.\n> </para>\n> </listitem>",
"msg_date": "Mon, 27 Jun 2022 23:37:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, Jun 27, 2022 at 11:37:19PM -0700, Noah Misch wrote:\n> On Tue, May 10, 2022 at 11:44:15AM -0400, Bruce Momjian wrote:\n> > I have completed the first draft of the PG 15 release notes\n> \n> > <!--\n> > Author: Noah Misch <noah@leadboat.com>\n> > 2021-09-09 [b073c3ccd] Revoke PUBLIC CREATE from public schema, now owned by pg\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Remove <literal>PUBLIC</literal> creation permission on the <link\n> > linkend=\"ddl-schemas-public\"><literal>public</literal> schema</link>\n> > (Noah Misch)\n> > </para>\n> > \n> > <para>\n> > This is a change in the default for newly-created databases in\n> > existing clusters and for new clusters; <literal>USAGE</literal>\n> \n> If you dump/reload an unmodified v14 template1 (as pg_dumpall and pg_upgrade\n> do), your v15 template1 will have a v14 ACL on its public schema. At that\n> point, the fate of \"newly-created databases in existing clusters\" depends on\n> whether you clone template1 or template0. Does any of that detail belong\n> here, or does the existing text suffice?\n\nI think it is very confusing to have template0 have one value and\ntemplate1 have a different one, but as I understand it template0 will\nonly be used for pg_dump comparison, and that will keep template1 with\nthe same permissions, so I guess it is okay.\n\n> > permissions on the <literal>public</literal> schema has not\n> > been changed. Databases restored from previous Postgres releases\n> > will be restored with their current permissions. Users wishing\n> > to have the old permissions on new objects will need to grant\n> \n> The phrase \"old permissions on new objects\" doesn't sound right to me, but I'm\n> not sure why. I think you're aiming for the fact that this is just a default;\n> one can still change the ACL to anything, including to the old default. If\n> these notes are going to mention the old default like they do so far, I think\n> they should also urge readers to understand\n> https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> before returning to the old default. What do you think?\n\nAgreed, the new text is:\n\n\tUsers wishing to have the former permissions will need to grant\n\t<literal>CREATE</literal> permission for <literal>PUBLIC</literal> on\n\tthe <literal>public</literal> schema; this change can be made on\n\t<literal>template1</literal> to cause all new databases to have these\n\tpermissions.\n\n> \n> > <literal>CREATE</literal> permission for <literal>PUBLIC</literal>\n> > on the <literal>public</literal> schema; this change can be made\n> > on <literal>template1</literal> to cause all new databases\n> > to have these permissions. <literal>template1</literal>\n> > permissions for <application>pg_dumpall</application> and\n> > <application>pg_upgrade</application>?\n> \n> pg_dumpall will change template1. I think pg_upgrade will too, and neither\n> program will change template0.\n\nOkay, I will remove that question mark sentence.\n\n> > </para>\n> > </listitem>\n> > \n> > <!--\n> > Author: Noah Misch <noah@leadboat.com>\n> > 2021-09-09 [b073c3ccd] Revoke PUBLIC CREATE from public schema, now owned by pg\n> > -->\n> > \n> > <listitem>\n> > <para>\n> > Change the owner of the <literal>public</literal> schema to\n> > <literal>pg_database_owner</literal> (Noah Misch)\n> > </para>\n> > \n> > <para>\n> > Previously it was the literal user name of the database owner.\n> \n> It was the bootstrap superuser.\n\nOkay, text updated, thanks. Applied patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 28 Jun 2022 16:35:45 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 1:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Okay, text updated, thanks. Applied patch attached.\n\nI have some notes on these items:\n\n1. \"Allow vacuum to be more aggressive in setting the oldest frozenxid\n(Peter Geoghegan)\"\n\n2. \"Add additional information to VACUUM VERBOSE and autovacuum\nlogging messages (Peter Geoghegan)\"\n\nThe main enhancement to VACUUM for Postgres 15 was item 1, which\ntaught VACUUM to dynamically track the oldest remaining XID (and the\noldest remaining MXID) that will remain in the table at the end of the\nsame VACUUM operation. These final/oldest XID/MXID values are what we\nnow use to set relfrozenxid and relminmxid in pg_class. Previously we\njust set relfrozenxid/relminmxid to whatever XID/MXID value was used\nto determine which XIDs/MXIDs needed to be frozen. These values often\nindicated more about VACUUM implementation details (like the\nvacuum_freeze_min_age GUC's value) than the actual true contents of the\ntable at the end of the most recent VACUUM.\n\nIt might be worth explaining the shift directly in the release notes.\nThe new approach is simpler and makes a lot more sense -- why should\nthe relfrozenxid be closely tied to freezing? We don't necessarily\nhave to freeze any tuple to advance relfrozenxid right up to the\nremovable cutoff/OldestXmin used by VACUUM. For example,\nanti-wraparound VACUUMs that run against static tables now set\nrelfrozenxid/relminmxid to VACUUM's removable cutoff/OldestXmin\ndirectly, without freezing anything (after the first time). Same with\ntables that happen to have every row deleted -- only the actual\nunfrozen XIDs/MXIDs left in the table matter, and if there happen to\nbe none at all then we can use the same relfrozenxid as we would for\na CREATE TABLE. All depends on what the workload allows.\n\nThere will also be a real practical benefit for users that allocate a\nlot of MultiXactIds: We'll now have pg_class.relminmxid values that\nare much more reliable indicators of what is really going on in the\ntable, MultiXactId-wise. I expect that this will make it much less\nlikely that anti-wraparound VACUUMs will run needlessly against the\nlargest tables, where there probably wasn't ever one single\nMultiXactId. In other words, the implementation will have more\naccurate information at the level of each table, and won't .\n\nI think that very uneven consumption of MultiXactIds at the table\nlevel is probably common in real databases. Plus VACUUM can usually\nremove a non-running MultiXact from a tuple's xmax, regardless of\nwhether or not the mxid happens to be before the\nvacuum_multixact_freeze_min_age-based MXID cutoff -- VACUUM has\nalways just set xmax to InvalidXid in passing when it's possible to do so\neasily. MultiXacts are inherently pretty short-lived information about\nrow lockers at a point in time. We don't really need to keep them\naround for very long. We may now be able to truncate the two MultiXact\nrelated SLRUs much more frequently with some workloads.\n\nFinally, note that the new VACUUM VERBOSE output (which is now pretty\nmuch the same as the autovacuum log output) shows when and how\nrelfrozenxid/relminmxid have advanced. This should make it relatively\neasy to observe these effects where they exist.\n\nThanks\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Jun 2022 17:32:26 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 04:35:45PM -0400, Bruce Momjian wrote:\n> On Mon, Jun 27, 2022 at 11:37:19PM -0700, Noah Misch wrote:\n> > On Tue, May 10, 2022 at 11:44:15AM -0400, Bruce Momjian wrote:\n> > > I have completed the first draft of the PG 15 release notes\n> > \n> > > <!--\n> > > Author: Noah Misch <noah@leadboat.com>\n> > > 2021-09-09 [b073c3ccd] Revoke PUBLIC CREATE from public schema, now owned by pg\n> > > -->\n> > > \n> > > <listitem>\n> > > <para>\n> > > Remove <literal>PUBLIC</literal> creation permission on the <link\n> > > linkend=\"ddl-schemas-public\"><literal>public</literal> schema</link>\n> > > (Noah Misch)\n> > > </para>\n> > > \n> > > <para>\n> > > This is a change in the default for newly-created databases in\n> > > existing clusters and for new clusters; <literal>USAGE</literal>\n> > \n> > If you dump/reload an unmodified v14 template1 (as pg_dumpall and pg_upgrade\n> > do), your v15 template1 will have a v14 ACL on its public schema. At that\n> > point, the fate of \"newly-created databases in existing clusters\" depends on\n> > whether you clone template1 or template0. Does any of that detail belong\n> > here, or does the existing text suffice?\n> \n> I think it is very confusing to have template0 have one value and\n> template1 have a different one, but as I understand it template0 will\n> only be used for pg_dump comparison, and that will keep template1 with\n> the same permissions, so I guess it is okay.\n\nIt's an emergent property of two decisions. In the interest of backward\ncompatibility, I decided to have v15 pg_dump emit GRANT for the public schema\neven when the source is an unmodified v14- database. When that combines with\nthe ancient decision that a pg_dumpall or pg_upgrade covers template1 but not\ntemplate0, one gets the above consequences. I don't see a way to improve on\nthis outcome.\n\n> > > permissions on the <literal>public</literal> schema has not\n> > > been changed. Databases restored from previous Postgres releases\n> > > will be restored with their current permissions. Users wishing\n> > > to have the old permissions on new objects will need to grant\n> > \n> > The phrase \"old permissions on new objects\" doesn't sound right to me, but I'm\n> > not sure why. I think you're aiming for the fact that this is just a default;\n> > one can still change the ACL to anything, including to the old default. If\n> > these notes are going to mention the old default like they do so far, I think\n> > they should also urge readers to understand\n> > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > before returning to the old default. What do you think?\n> \n> Agreed, the new text is:\n> \n> \tUsers wishing to have the former permissions will need to grant\n> \t<literal>CREATE</literal> permission for <literal>PUBLIC</literal> on\n> \tthe <literal>public</literal> schema; this change can be made on\n> \t<literal>template1</literal> to cause all new databases to have these\n> \tpermissions.\n\nWhat do you think about the \"should also urge readers ...\" part of my message?",
"msg_date": "Wed, 29 Jun 2022 22:08:08 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 05:32:26PM -0700, Peter Geoghegan wrote:\n> The main enhancement to VACUUM for Postgres 15 was item 1, which\n> taught VACUUM to dynamically track the oldest remaining XID (and the\n> oldest remaining MXID) that will remain in the table at the end of the\n> same VACUUM operation. These final/oldest XID/MXID values are what we\n> now use to set relfrozenxid and relminmxid in pg_class. Previously we\n> just set relfrozenxid/relminmxid to whatever XID/MXID value was used\n> to determine which XIDs/MXIDs needed to be frozen. These values often\n> indicated more about VACUUM implementation details (like the\n> vacuum_freeze_min_age GUc's value) than the actual true contents of the\n> table at the end of the most recent VACUUM.\n> \n> It might be worth explaining the shift directly in the release notes.\n> The new approach is simpler and makes a lot more sense -- why should\n> the relfrozenxid be closely tied to freezing? We don't necessarily\n\nI don't think this is an appropriate detail for the release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 1 Jul 2022 12:41:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 9:41 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > It might be worth explaining the shift directly in the release notes.\n> > The new approach is simpler and makes a lot more sense -- why should\n> > the relfrozenxid be closely tied to freezing? We don't necessarily\n>\n> I don't think this is an appropriate detail for the release notes.\n\nOkay. What about saying something about relminmxid advancement where\nthe database consumes lots of multixacts?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 1 Jul 2022 09:56:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 10:08:08PM -0700, Noah Misch wrote:\n> On Tue, Jun 28, 2022 at 04:35:45PM -0400, Bruce Momjian wrote:\n> > > If you dump/reload an unmodified v14 template1 (as pg_dumpall and pg_upgrade\n> > > do), your v15 template1 will have a v14 ACL on its public schema. At that\n> > > point, the fate of \"newly-created databases in existing clusters\" depends on\n> > > whether you clone template1 or template0. Does any of that detail belong\n> > > here, or does the existing text suffice?\n> > \n> > I think it is very confusing to have template0 have one value and\n> > template1 have a different one, but as I understand it template0 will\n> > only be used for pg_dump comparison, and that will keep template1 with\n> > the same permissions, so I guess it is okay.\n> \n> It's an emergent property of two decisions. In the interest of backward\n> compatibility, I decided to have v15 pg_dump emit GRANT for the public schema\n> even when the source is an unmodified v14- database. When that combines with\n> the ancient decision that a pg_dumpall or pg_upgrade covers template1 but not\n> template0, one gets the above consequences. I don't see a way to improve on\n> this outcome.\n\nThanks for the summary.\n\n> > > > permissions on the <literal>public</literal> schema has not\n> > > > been changed. Databases restored from previous Postgres releases\n> > > > will be restored with their current permissions. Users wishing\n> > > > to have the old permissions on new objects will need to grant\n> > > \n> > > The phrase \"old permissions on new objects\" doesn't sound right to me, but I'm\n> > > not sure why. I think you're aiming for the fact that this is just a default;\n> > > one can still change the ACL to anything, including to the old default. If\n> > > these notes are going to mention the old default like they do so far, I think\n> > > they should also urge readers to understand\n> > > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > > before returning to the old default. What do you think?\n> > \n> > Agreed, the new text is:\n> > \n> > \tUsers wishing to have the former permissions will need to grant\n> > \t<literal>CREATE</literal> permission for <literal>PUBLIC</literal> on\n> > \tthe <literal>public</literal> schema; this change can be made on\n> > \t<literal>template1</literal> to cause all new databases to have these\n> > \tpermissions.\n> \n> What do you think about the \"should also urge readers ...\" part of my message?\n\nI see your point, that there is no indication of why you might not want\nto restore the old permissions. I created the attached patch which\nmakes two additions to clarify this.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Fri, 1 Jul 2022 14:08:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, Jul 01, 2022 at 02:08:00PM -0400, Bruce Momjian wrote:\n> On Wed, Jun 29, 2022 at 10:08:08PM -0700, Noah Misch wrote:\n> > On Tue, Jun 28, 2022 at 04:35:45PM -0400, Bruce Momjian wrote:\n\n> > > > > permissions on the <literal>public</literal> schema has not\n> > > > > been changed. Databases restored from previous Postgres releases\n> > > > > will be restored with their current permissions. Users wishing\n> > > > > to have the old permissions on new objects will need to grant\n> > > > \n> > > > The phrase \"old permissions on new objects\" doesn't sound right to me, but I'm\n> > > > not sure why. I think you're aiming for the fact that this is just a default;\n> > > > one can still change the ACL to anything, including to the old default. If\n> > > > these notes are going to mention the old default like they do so far, I think\n> > > > they should also urge readers to understand\n> > > > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > > > before returning to the old default. What do you think?\n> > > \n> > > Agreed, the new text is:\n> > > \n> > > \tUsers wishing to have the former permissions will need to grant\n> > > \t<literal>CREATE</literal> permission for <literal>PUBLIC</literal> on\n> > > \tthe <literal>public</literal> schema; this change can be made on\n> > > \t<literal>template1</literal> to cause all new databases to have these\n> > > \tpermissions.\n> > \n> > What do you think about the \"should also urge readers ...\" part of my message?\n> \n> I see your point, that there is no indication of why you might not want\n> to restore the old permissions. I created the attached patch which\n> makes two additions to clarify this.\n\n> --- a/doc/src/sgml/release-15.sgml\n> +++ b/doc/src/sgml/release-15.sgml\n> @@ -63,12 +63,11 @@ Author: Noah Misch <noah@leadboat.com>\n> permissions on the <literal>public</literal> schema has not\n> been changed. Databases restored from previous Postgres releases\n> will be restored with their current permissions. Users wishing\n> - to have the former more-open permissions will need to grant\n> + to have the former permissions will need to grant\n> <literal>CREATE</literal> permission for <literal>PUBLIC</literal>\n> on the <literal>public</literal> schema; this change can be made\n> on <literal>template1</literal> to cause all new databases\n> - to have these permissions. This change was made to increase\n> - security.\n> + to have these permissions.\n> </para>\n> </listitem>\n\nHere's what I've been trying to ask: what do you think of linking to\nhttps://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\nhere? The release note text is still vague, and the docs have extensive\ncoverage of the topic. The notes can just link to that extensive coverage.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 18:21:28 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 09:56:17AM -0700, Peter Geoghegan wrote:\n> On Fri, Jul 1, 2022 at 9:41 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > It might be worth explaining the shift directly in the release notes.\n> > > The new approach is simpler and makes a lot more sense -- why should\n> > > the relfrozenxid be closely tied to freezing? We don't necessarily\n> >\n> > I don't think this is an appropriate detail for the release notes.\n> \n> Okay. What about saying something about relminmxid advancement where\n> the database consumes lots of multixacts?\n\nNo. same issue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Sat, 2 Jul 2022 20:13:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": ">\n> Okay, thanks Bruce.\n-- \nPeter Geoghegan",
"msg_date": "Sat, 2 Jul 2022 18:17:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 08:13:41PM -0400, Bruce Momjian wrote:\n> On Fri, Jul 1, 2022 at 09:56:17AM -0700, Peter Geoghegan wrote:\n> > On Fri, Jul 1, 2022 at 9:41 AM Bruce Momjian <bruce@momjian.us> wrote:\n> > > > It might be worth explaining the shift directly in the release notes.\n> > > > The new approach is simpler and makes a lot more sense -- why should\n> > > > the relfrozenxid be closely tied to freezing? We don't necessarily\n> > >\n> > > I don't think this is an appropriate detail for the release notes.\n> > \n> > Okay. What about saying something about relminmxid advancement where\n> > the database consumes lots of multixacts?\n> \n> No. same issue.\n\nActually, I was wrong. I thought that we only mentioned that we\ncomputed a more aggressive xid, but now see I was mentioning the _frozen_\nxid. Reading the commit, we do compute the multi-xid and store that too\nso I have updated the PG 15 release notes with the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 5 Jul 2022 14:09:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
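The per-table values being discussed here can be observed directly with a catalog query (standard pg_class columns and the built-in age()/mxid_age() functions; the exact thresholds worth watching depend on your autovacuum settings):

```sql
-- Oldest unfrozen XID/MXID recorded per table. After a PG 15 VACUUM these
-- reflect the oldest values actually remaining in the table, rather than
-- the freeze cutoff that happened to be used.
SELECT relname,
       relfrozenxid, age(relfrozenxid)    AS xid_age,
       relminmxid,   mxid_age(relminmxid) AS mxid_age
FROM pg_class
WHERE relkind IN ('r', 't', 'm')
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```

Running this before and after a VACUUM (or comparing against the new VACUUM VERBOSE output) shows how far relfrozenxid/relminmxid advanced.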
{
"msg_contents": "On Fri, Jul 1, 2022 at 06:21:28PM -0700, Noah Misch wrote:\n> Here's what I've been trying to ask: what do you think of linking to\n> https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> here? The release note text is still vague, and the docs have extensive\n> coverage of the topic. The notes can just link to that extensive coverage.\n\nSure. how is this patch?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 5 Jul 2022 14:35:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 11:09 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Actually, I was wrong.  I thought that we only mentioned that we\n> computed a more aggressive xid, but now see I was mentioning the _frozen_\n> xid.  Reading the commit, we do compute the multi-xid and store that too\n> so I have updated the PG 15 release notes with the attached patch.\n\nIt might be worth using the \"symbol names\" directly, since they appear\nin the documentation already (under \"Routine Vacuuming\"). These are\n<structfield>relfrozenxid</structfield> and\n<structfield>relminmxid</structfield>. These are implementation\ndetails, but they're documented in detail (though admittedly the\ndocumentation has *lots* of problems).\n\nHere is what I would like this item to hint at, to advanced users with\ntricky requirements: The new approach to setting relminmxid will\nimprove the behavior of VACUUM in databases that already happen to use\nlots of MultiXacts. These users will notice that autovacuum now works\noff of relminmxid values that actually tell us something about each\ntable's consumption of MultiXacts over time. Most individual tables\nnaturally consume *zero* MultiXacts, even in databases that consume\nmany MultiXacts -- due to naturally occurring workload characteristics.\nThe old approach failed to recognize this, leading to very uniform\nrelminmxid values across tables that were in fact very different,\nMultiXact-wise.\n\nThe way that we handle relfrozenxid is probably much less likely to\nmake life much easier for any database, at least on its own, in\nPostgres 15. So from the point of view of a user considering\nupgrading, the impact on relminmxid is likely to be far more\nimportant.\n\nAdmittedly the most likely scenario by far is that the whole feature\njust isn't interesting, but a small minority of advanced users (users\nwith painful MultiXact problems) will find the relminmxid thing very\ncompelling.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Jul 2022 11:51:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 02:35:39PM -0400, Bruce Momjian wrote:\n> On Fri, Jul 1, 2022 at 06:21:28PM -0700, Noah Misch wrote:\n> > Here's what I've been trying to ask: what do you think of linking to\n> > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > here? The release note text is still vague, and the docs have extensive\n> > coverage of the topic. The notes can just link to that extensive coverage.\n> \n> Sure. how is this patch?\n\n> --- a/doc/src/sgml/release-15.sgml\n> +++ b/doc/src/sgml/release-15.sgml\n> @@ -63,11 +63,12 @@ Author: Noah Misch <noah@leadboat.com>\n> permissions on the <literal>public</literal> schema has not\n> been changed. Databases restored from previous Postgres releases\n> will be restored with their current permissions. Users wishing\n> - to have the former permissions will need to grant\n> + to have the former more-open permissions will need to grant\n> <literal>CREATE</literal> permission for <literal>PUBLIC</literal>\n> on the <literal>public</literal> schema; this change can be made\n> on <literal>template1</literal> to cause all new databases\n> - to have these permissions.\n> + to have these permissions. This change was made to increase\n> + security; see <xref linkend=\"ddl-schemas-patterns\"/>.\n> </para>\n> </listitem>\n\nI think this still puts undue weight on single-user systems moving back to the\nold default. The linked documentation does say how to get back to v14\npermissions (and disclaims security if you do so), so let's not mention it\nhere. The attached is how I would write it. I also reworked the \"Databases\nrestored from previous ...\" sentence, since its statement is also true of\ndatabases restored v15-to-v15 (no \"previous\" release involved). I also moved\nthe bit about USAGE to end, since it's just emphasizing what the reader should\nalready assume. Any concerns?",
"msg_date": "Tue, 5 Jul 2022 12:53:49 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 11:51:31AM -0700, Peter Geoghegan wrote:\n> On Tue, Jul 5, 2022 at 11:09 AM Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > Actually, I was wrong.  I thought that we only mentioned that we\n> > computed a more aggressive xid, but now see I was mentioning the _frozen_\n> > xid.  Reading the commit, we do compute the multi-xid and store that too\n> > so I have updated the PG 15 release notes with the attached patch.\n> \n> It might be worth using the \"symbol names\" directly, since they appear\n> in the documentation already (under \"Routine Vacuuming\"). These are\n> <structfield>relfrozenxid</structfield> and\n> <structfield>relminmxid</structfield>. These are implementation\n> details, but they're documented in detail (though admittedly the\n> documentation has *lots* of problems).\n\nWell, users can look into the details if they wish.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Indecision is a decision.  Inaction is an action.  Mark Batterson\n\n\n\n",
"msg_date": "Tue, 5 Jul 2022 16:08:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 12:53:49PM -0700, Noah Misch wrote:\n> On Tue, Jul 05, 2022 at 02:35:39PM -0400, Bruce Momjian wrote:\n> > On Fri, Jul  1, 2022 at 06:21:28PM -0700, Noah Misch wrote:\n> > > Here's what I've been trying to ask:  what do you think of linking to\n> > > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > > here?  The release note text is still vague, and the docs have extensive\n> > > coverage of the topic.  The notes can just link to that extensive coverage.\n> > \n> > Sure.  how is this patch?\n> \n> > --- a/doc/src/sgml/release-15.sgml\n> > +++ b/doc/src/sgml/release-15.sgml\n> > @@ -63,11 +63,12 @@ Author: Noah Misch <noah@leadboat.com>\n> >       permissions on the <literal>public</literal> schema has not\n> >       been changed.  Databases restored from previous Postgres releases\n> >       will be restored with their current permissions.  Users wishing\n> > -     to have the former permissions will need to grant\n> > +     to have the former more-open permissions will need to grant\n> >       <literal>CREATE</literal> permission for <literal>PUBLIC</literal>\n> >       on the <literal>public</literal> schema;  this change can be made\n> >       on <literal>template1</literal> to cause all new databases\n> > -     to have these permissions.\n> > +     to have these permissions.  This change was made to increase\n> > +     security;  see <xref linkend=\"ddl-schemas-patterns\"/>.\n> >      </para>\n> >     </listitem>\n> \n> I think this still puts undue weight on single-user systems moving back to the\n> old default.  The linked documentation does say how to get back to v14\n> permissions (and disclaims security if you do so), so let's not mention it\n> here.  The attached is how I would write it.  I also reworked the \"Databases\n> restored from previous ...\" sentence, since its statement is also true of\n> databases restored v15-to-v15 (no \"previous\" release involved).  I also moved\n> the bit about USAGE to end, since it's just emphasizing what the reader should\n> already assume.  Any concerns?\n\nI see where you are going --- to talk about how to convert upgraded\nclusters to secure clusters, rather than how to revert to the previous\nbehavior.  I assumed that the most common question would be how to get\nthe previous behavior, rather than how to get the new behavior in\nupgraded clusters.  However, I am fine with what you think is best.\n\nMy only stylistic suggestion would be to remove \"a\" from \"a\n<literal>REVOKE</literal>\".\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Indecision is a decision.  Inaction is an action.  Mark Batterson\n\n\n\n",
"msg_date": "Tue, 5 Jul 2022 16:35:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 04:35:32PM -0400, Bruce Momjian wrote:\n> On Tue, Jul  5, 2022 at 12:53:49PM -0700, Noah Misch wrote:\n> > On Tue, Jul 05, 2022 at 02:35:39PM -0400, Bruce Momjian wrote:\n> > > On Fri, Jul  1, 2022 at 06:21:28PM -0700, Noah Misch wrote:\n> > > > Here's what I've been trying to ask:  what do you think of linking to\n> > > > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > > > here?  The release note text is still vague, and the docs have extensive\n> > > > coverage of the topic.  The notes can just link to that extensive coverage.\n> > > \n> > > Sure.  how is this patch?\n> > \n> > > --- a/doc/src/sgml/release-15.sgml\n> > > +++ b/doc/src/sgml/release-15.sgml\n> > > @@ -63,11 +63,12 @@ Author: Noah Misch <noah@leadboat.com>\n> > >       permissions on the <literal>public</literal> schema has not\n> > >       been changed.  Databases restored from previous Postgres releases\n> > >       will be restored with their current permissions.  Users wishing\n> > > -     to have the former permissions will need to grant\n> > > +     to have the former more-open permissions will need to grant\n> > >       <literal>CREATE</literal> permission for <literal>PUBLIC</literal>\n> > >       on the <literal>public</literal> schema;  this change can be made\n> > >       on <literal>template1</literal> to cause all new databases\n> > > -     to have these permissions.\n> > > +     to have these permissions.  This change was made to increase\n> > > +     security;  see <xref linkend=\"ddl-schemas-patterns\"/>.\n> > >      </para>\n> > >     </listitem>\n> > \n> > I think this still puts undue weight on single-user systems moving back to the\n> > old default.  The linked documentation does say how to get back to v14\n> > permissions (and disclaims security if you do so), so let's not mention it\n> > here.  The attached is how I would write it.  I also reworked the \"Databases\n> > restored from previous ...\" sentence, since its statement is also true of\n> > databases restored v15-to-v15 (no \"previous\" release involved).  I also moved\n> > the bit about USAGE to end, since it's just emphasizing what the reader should\n> > already assume.  Any concerns?\n> \n> I see where you are going --- to talk about how to convert upgraded\n> clusters to secure clusters, rather than how to revert to the previous\n> behavior.  I assumed that the most common question would be how to get\n> the previous behavior, rather than how to get the new behavior in\n> upgraded clusters.  However, I am fine with what you think is best.\n\nSince having too-permissive ACLs is usually symptom-free, I share your\nforecast about the more-common question.  Expect questions on mailing lists,\nstackoverflow, etc.  The right way to answer those questions is roughly this:\n\n  > On PostgreSQL 15, my application gets \"permission denied for schema\n  > public\".  What should I do?\n\n  You have a choice to make.  The best selection depends on the security\n  needs of your database.  See\n  https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n  for a guide to making that choice.\n\nRecommending GRANT to that two-sentence question would be negligent.  One\nshould know a database's lack of security needs before recommending GRANT.\nThis is a key opportunity to have more users make the right decision while\ntheir attention is on the topic.\n\n> My only stylistic suggestion would be to remove \"a\" from \"a\n> <literal>REVOKE</literal>\".\n\nI'll plan to push with that change.\n\n\n",
"msg_date": "Tue, 5 Jul 2022 14:57:52 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 02:57:52PM -0700, Noah Misch wrote:\n> Since having too-permissive ACLs is usually symptom-free, I share your\n> forecast about the more-common question. Expect questions on mailing lists,\n> stackoverflow, etc. The right way to answer those questions is roughly this:\n> \n> > On PostgreSQL 15, my application gets \"permission denied for schema\n> > public\". What should I do?\n> \n> You have a choice to make. The best selection depends on the security\n> needs of your database. See\n> https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> for a guide to making that choice.\n> \n> Recommending GRANT to that two-sentence question would be negligent. One\n> should know a database's lack of security needs before recommending GRANT.\n> This is a key opportunity to have more users make the right decision while\n> their attention is on the topic.\n\nYes, I think it is a question of practicality vs. desirability. We are\nbasically telling people they have to do research to get the old\nbehavior in their new databases and clusters.\n\n> > My only stylistic suggestion would be to remove \"a\" from \"a\n> > <literal>REVOKE</literal>\".\n> \n> I'll plan to push with that change.\n\nWFM.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 5 Jul 2022 19:47:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 05, 2022 at 07:47:52PM -0400, Bruce Momjian wrote:\n> On Tue, Jul 5, 2022 at 02:57:52PM -0700, Noah Misch wrote:\n> > Since having too-permissive ACLs is usually symptom-free, I share your\n> > forecast about the more-common question. Expect questions on mailing lists,\n> > stackoverflow, etc. The right way to answer those questions is roughly this:\n> > \n> > > On PostgreSQL 15, my application gets \"permission denied for schema\n> > > public\". What should I do?\n> > \n> > You have a choice to make. The best selection depends on the security\n> > needs of your database. See\n> > https://www.postgresql.org/docs/devel/ddl-schemas.html#DDL-SCHEMAS-PATTERNS\n> > for a guide to making that choice.\n> > \n> > Recommending GRANT to that two-sentence question would be negligent. One\n> > should know a database's lack of security needs before recommending GRANT.\n> > This is a key opportunity to have more users make the right decision while\n> > their attention is on the topic.\n> \n> Yes, I think it is a question of practicality vs. desirability. We are\n> basically telling people they have to do research to get the old\n> behavior in their new databases and clusters.\n\nTrue. I want to maximize the experience for different classes of database:\n\n1. Databases needing user isolation and unknowingly not getting it.\n2. Databases not needing user isolation, e.g. automated test environments.\n\nExpecting all of these DBAs to read a 500-word doc section is failure-prone.\nFor the benefit of (2), I'm now thinking about adding a release note sentence,\n\"For a new database having zero need to defend against insider threats,\ngranting back the privilege yields the PostgreSQL 14 behavior.\"\n\n\n",
"msg_date": "Tue, 5 Jul 2022 19:45:57 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul  5, 2022 at 07:45:57PM -0700, Noah Misch wrote:\n> On Tue, Jul 05, 2022 at 07:47:52PM -0400, Bruce Momjian wrote:\n> > Yes, I think it is a question of practicality vs. desirability.  We are\n> > basically telling people they have to do research to get the old\n> > behavior in their new databases and clusters.\n> \n> True.  I want to maximize the experience for different classes of database:\n> \n> 1. Databases needing user isolation and unknowingly not getting it.\n> 2. Databases not needing user isolation, e.g. automated test environments.\n> \n> Expecting all of these DBAs to read a 500-word doc section is failure-prone.\n> For the benefit of (2), I'm now thinking about adding a release note sentence,\n> \"For a new database having zero need to defend against insider threats,\n> granting back the privilege yields the PostgreSQL 14 behavior.\"\n\nI think you would need to say \"previous behavior\" since people might be\nupgrading from releases before PG 14.  I also would change \"In existing\ndatabases\" to \"For existing databases\".  I think your big risk here is\ntrying to explain how to have new clusters get the old or new behavior\nin the same text block, e.g.:\n\n\tThe new default is one of the secure schema usage patterns that\n\t<xref linkend=\"ddl-schemas-patterns\"/> has recommended since the\n\tsecurity release for CVE-2018-1058.  Upgrading a cluster or restoring a\n\tdatabase dump will preserve existing permissions.  This is a change in\n\tthe default for newly-created databases in existing clusters and for new\n\tclusters.  For existing databases, especially those having multiple\n\tusers, consider issuing a <literal>REVOKE</literal> to adopt this new\n\tdefault.  (<literal>USAGE</literal> permission on this schema has not\n\tchanged.)  For a new database having zero need to defend against insider\n\tthreats, granting back the privilege yields the previous behavior.\n\nIs this something we want to get into in the release notes, or perhaps\ndo we need to link to a wiki page for these details?\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Indecision is a decision.  Inaction is an action.  Mark Batterson\n\n\n\n",
"msg_date": "Wed, 6 Jul 2022 09:10:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, Jul 06, 2022 at 09:10:53AM -0400, Bruce Momjian wrote:\n> On Tue, Jul  5, 2022 at 07:45:57PM -0700, Noah Misch wrote:\n> > On Tue, Jul 05, 2022 at 07:47:52PM -0400, Bruce Momjian wrote:\n> > > Yes, I think it is a question of practicality vs. desirability.  We are\n> > > basically telling people they have to do research to get the old\n> > > behavior in their new databases and clusters.\n> > \n> > True.  I want to maximize the experience for different classes of database:\n> > \n> > 1. Databases needing user isolation and unknowingly not getting it.\n> > 2. Databases not needing user isolation, e.g. automated test environments.\n> > \n> > Expecting all of these DBAs to read a 500-word doc section is failure-prone.\n> > For the benefit of (2), I'm now thinking about adding a release note sentence,\n> > \"For a new database having zero need to defend against insider threats,\n> > granting back the privilege yields the PostgreSQL 14 behavior.\"\n> \n> I think you would need to say \"previous behavior\" since people might be\n> upgrading from releases before PG 14.  I also would change \"In existing\n\nI felt \"previous behavior\" was mildly ambiguous.  I've changed it to \"the\nbehavior of prior releases\".\n\n> databases\" to \"For existing databases\".  I think your big risk here is\n\nDone.  New version attached.\n\n> trying to explain how to have new clusters get the old or new behavior\n> in the same text block, e.g.:\n> \n> \tThe new default is one of the secure schema usage patterns that\n> \t<xref linkend=\"ddl-schemas-patterns\"/> has recommended since the\n> \tsecurity release for CVE-2018-1058.  Upgrading a cluster or restoring a\n> \tdatabase dump will preserve existing permissions.  This is a change in\n> \tthe default for newly-created databases in existing clusters and for new\n> \tclusters.  For existing databases, especially those having multiple\n> \tusers, consider issuing a <literal>REVOKE</literal> to adopt this new\n> \tdefault.  (<literal>USAGE</literal> permission on this schema has not\n> \tchanged.)  For a new database having zero need to defend against insider\n> \tthreats, granting back the privilege yields the previous behavior.\n> \n> Is this something we want to get into in the release notes, or perhaps\n> do we need to link to a wiki page for these details?\n\nNo supported release has a wiki page link in its release notes.  We used wiki\npages in the more-distant past, but I don't recall why.  I am not aware of\nwiki pages having relevant benefits.",
"msg_date": "Sat, 9 Jul 2022 20:19:41 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Sat, Jul 9, 2022 at 08:19:41PM -0700, Noah Misch wrote:\n> > I think you would need to say \"previous behavior\" since people might be\n> > upgrading from releases before PG 14. I also would change \"In existing\n> \n> I felt \"previous behavior\" was mildly ambiguous. I've changed it to \"the\n> behavior of prior releases\".\n\nSure.\n> \n> > databases\" to \"For existing databases\". I think your big risk here is\n> \n> Done. New version attached.\n\nI had trouble reading the sentences in the order you used so I\nrestructured it:\n\n\tThe new default is one of the secure schema usage patterns that <xref\n\tlinkend=\"ddl-schemas-patterns\"/> has recommended since the security\n\trelease for CVE-2018-1058. The change applies to newly-created\n\tdatabases in existing clusters and for new clusters. Upgrading a\n\tcluster or restoring a database dump will preserve existing permissions.\n\t\n\tFor existing databases, especially those having multiple users, consider\n\tissuing <literal>REVOKE</literal> to adopt this new default. For new\n\tdatabases having zero need to defend against insider threats, granting\n\t<literal>USAGE</literal> permission on their <literal>public</literal>\n\tschemas will yield the behavior of prior releases.\n\n> > Is this something we want to get into in the release notes, or perhaps\n> > do we need to link to a wiki page for these details?\n> \n> No supported release has a wiki page link in its release notes. We used wiki\n> pages in the more-distant past, but I don't recall why. I am not aware of\n> wiki pages having relevant benefits.\n\nI think the wiki was good if you needed a lot of release-specific text,\nor if you wanted to adjust the wording after the release.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:39:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, May 10, 2022 at 11:41:08PM -0400, Bruce Momjian wrote:\n> On Tue, May 10, 2022 at 08:31:17PM -0500, Justin Pryzby wrote:\n\n> > | Store server-level statistics in shared memory (Kyotaro Horiguchi, Andres Freund, Melanie Plageman)\n> > \n> > Should this be called \"cumulative\" statistics?  As in b3abca68106d518ce5d3c0d9a1e0ec02a647ceda.\n> \n> Uh, they are counters, which I guess is cumulative, but that doesn't\n> seem very descriptive.  The documentation calls it the statistics\n> collector, but I am not sure we even have that anymore with an in-memory\n> implementation.  I am kind of not sure what to call it.\n\nWhat I was trying to say is that it's now called the cumulative stats system.\n\n> > | New function\n> > \n> > \"The new function ..\" (a few times)\n> \n> Uh, I only see it once.\n\nThere's still a couple of these without \"The\".\n\n> > Should any of these be listed as incompatible changes (some of these I asked\n> > before, but the others are from another list).\n\n> > ccd10a9bfa5 Fix enforcement of PL/pgSQL variable CONSTANT markings (Tom Lane)\n> \n> I didn't see not enforcing constant as an incompatibility, but rather a\n> bug.\n\nYes it's a bug, but it's going to be seen as a compatibility issue for someone\nwhose application breaks.  The same goes for other things I mentioned.\n\n> > 376ce3e404b Prefer $HOME when looking up the current user's home directory.\n> \n> Uh, I didn't think so.\n> \n> > 7844c9918a4 psql: Show all query results by default\n> \n> Same.\n> \n> > 17a856d08be Change aggregated log format of pgbench.\n> \n> We have a pgbench section and I can't see it.  I am trying to keep\n> incompatibilities as things related to in-production problems or\n> surprises.\n> \n> > ? 73508475d69 Remove pg_atoi()\n> \n> I don't see who would care except for internals folks.\n> \n> > ? aa64f23b029 Remove MaxBackends variable in favor of GetMaxBackends() function.\n> \n> Same.\n> \n> > ? d816f366bc4 psql: Make SSL info display more compact\n> \n> I did look at that but considered that this wouldn't be something that\n> would break anything.\n> \n> > ? 27b02e070fd pg_upgrade: Don't print progress status when output is not a tty.\n> \n> Same.\n> \n> > ? ab4fd4f868e Remove 'datlastsysoid'.\n> \n> Seemed too internal.\n\nFYI, removal of this column broke a tool one of my coworkers uses (navicat).\nI'm told that the fix will be in navicat v16.1 (but their existing users will\nneed to pay to upgrade from v15).\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:39:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 12:39:23PM -0500, Justin Pryzby wrote:\n> On Tue, May 10, 2022 at 11:41:08PM -0400, Bruce Momjian wrote:\n> > On Tue, May 10, 2022 at 08:31:17PM -0500, Justin Pryzby wrote:\n> \n> > > | Store server-level statistics in shared memory (Kyotaro Horiguchi, Andres Freund, Melanie Plageman)\n> > > \n> > > Should this be called \"cumulative\" statistics?  As in b3abca68106d518ce5d3c0d9a1e0ec02a647ceda.\n> > \n> > Uh, they are counters, which I guess is cumulative, but that doesn't\n> > seem very descriptive.  The documentation calls it the statistics\n> > collector, but I am not sure we even have that anymore with an in-memory\n> > implementation.  I am kind of not sure what to call it.\n> \n> What I was trying to say is that it's now called the cumulative stats system.\n\nIt is actually called the \"cumulative statistics system\", so updated; \npatch attached and applied.\n\n> > > | New function\n> > > \n> > > \"The new function ..\" (a few times)\n> > \n> > Uh, I only see it once.\n> \n> There's still a couple of these without \"The\".\n\nAh, found them, fixed.\n\n> > > Should any of these be listed as incompatible changes (some of these I asked\n> > > before, but the others are from another list).\n> \n> > > ccd10a9bfa5 Fix enforcement of PL/pgSQL variable CONSTANT markings (Tom Lane)\n> > \n> > I didn't see not enforcing constant as an incompatibility, but rather a\n> > bug.\n> \n> Yes it's a bug, but it's going to be seen as a compatibility issue for someone\n> whose application breaks.  The same goes for other things I mentioned.\n\nWe don't guarantee that the only breakage is listed in the\nincompatibilities section, only the most common ones.\n\n> > > 376ce3e404b Prefer $HOME when looking up the current user's home directory.\n> > \n> > Uh, I didn't think so.\n> > \n> > > 7844c9918a4 psql: Show all query results by default\n> > \n> > Same.\n> > \n> > > 17a856d08be Change aggregated log format of pgbench.\n> > \n> > We have a pgbench section and I can't see it.  I am trying to keep\n> > incompatibilities as things related to in-production problems or\n> > surprises.\n> > \n> > > ? 73508475d69 Remove pg_atoi()\n> > \n> > I don't see who would care except for internals folks.\n> > \n> > > ? aa64f23b029 Remove MaxBackends variable in favor of GetMaxBackends() function.\n> > \n> > Same.\n> > \n> > > ? d816f366bc4 psql: Make SSL info display more compact\n> > \n> > I did look at that but considered that this wouldn't be something that\n> > would break anything.\n> > \n> > > ? 27b02e070fd pg_upgrade: Don't print progress status when output is not a tty.\n> > \n> > Same.\n> > \n> > > ? ab4fd4f868e Remove 'datlastsysoid'.\n> > \n> > Seemed too internal.\n> \n> FYI, removal of this column broke a tool one of my coworkers uses (navicat).\n> I'm told that the fix will be in navicat v16.1 (but their existing users will\n> need to pay to upgrade from v15).\n\nThis actually supports my point --- only navicat needs to know about this\nrenaming, not its users.  Telling navicat users about this change does\nnot help them.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Indecision is a decision.  Inaction is an action.  Mark Batterson",
"msg_date": "Mon, 11 Jul 2022 14:23:37 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 12:39:57PM -0400, Bruce Momjian wrote:\n> I had trouble reading the sentences in the order you used so I\n> restructured it:\n> \n> \tThe new default is one of the secure schema usage patterns that <xref\n> \tlinkend=\"ddl-schemas-patterns\"/> has recommended since the security\n> \trelease for CVE-2018-1058. The change applies to newly-created\n> \tdatabases in existing clusters and for new clusters. Upgrading a\n> \tcluster or restoring a database dump will preserve existing permissions.\n\nI agree with the sentence order change.\n\n> \tFor existing databases, especially those having multiple users, consider\n> \tissuing <literal>REVOKE</literal> to adopt this new default. For new\n> \tdatabases having zero need to defend against insider threats, granting\n> \t<literal>USAGE</literal> permission on their <literal>public</literal>\n> \tschemas will yield the behavior of prior releases.\n\ns/USAGE/CREATE/ in the last sentence. Looks good with that change.\n\n\n",
"msg_date": "Mon, 11 Jul 2022 23:31:32 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 11:31:32PM -0700, Noah Misch wrote:\n> On Mon, Jul 11, 2022 at 12:39:57PM -0400, Bruce Momjian wrote:\n> > I had trouble reading the sentences in the order you used so I\n> > restructured it:\n> > \n> > \tThe new default is one of the secure schema usage patterns that <xref\n> > \tlinkend=\"ddl-schemas-patterns\"/> has recommended since the security\n> > \trelease for CVE-2018-1058. The change applies to newly-created\n> > \tdatabases in existing clusters and for new clusters. Upgrading a\n> > \tcluster or restoring a database dump will preserve existing permissions.\n> \n> I agree with the sentence order change.\n\nGreat.\n\n> > \tFor existing databases, especially those having multiple users, consider\n> > \tissuing <literal>REVOKE</literal> to adopt this new default. For new\n> > \tdatabases having zero need to defend against insider threats, granting\n> > \t<literal>USAGE</literal> permission on their <literal>public</literal>\n> > \tschemas will yield the behavior of prior releases.\n> \n> s/USAGE/CREATE/ in the last sentence. Looks good with that change.\n\nAh, yes, of course.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 12 Jul 2022 14:47:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "Regarding this item:\n\n\"Allow hash lookup for NOT IN clauses with many constants (David Rowley,\nJames Coleman)\nPreviously the code always sequentially scanned the list of values.\"\n\nThe todo list has an entry titled \"Planning large IN lists\", which links to\n\nhttps://www.postgresql.org/message-id/1178821226.6034.63.camel@goldbach\n\nDid we already have a hash lookup for IN clauses with constants and the\nabove commit adds NOT IN? If so, maybe we have enough to remove this todo\nitem.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Jul 2022 13:24:41 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 01:24:41PM +0700, John Naylor wrote:\n> Regarding this item:\n> \n> \"Allow hash lookup for NOT IN clauses with many constants (David Rowley, James\n> Coleman)\n> Previously the code always sequentially scanned the list of values.\"\n> \n> The todo list has an entry titled \"Planning large IN lists\", which links to \n> \n> https://www.postgresql.org/message-id/1178821226.6034.63.camel@goldbach\n> \n> Did we already have a hash lookup for IN clauses with constants and the above\n> commit adds NOT IN? If so, maybe we have enough to remove this todo item.\n\nAgreed, I have removed it now.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 14 Jul 2022 11:15:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "> Increase hash_mem_multiplier default to 2.0 (Peter Geoghegan)\n> This allows query hash operations to use double the amount of work_mem memory as other operations.\n\nI wonder if it's worth pointing out that a query may end up using not just 2x\nmore memory (since work_mem*hash_mem_multiplier is per node), but 2**N more,\nfor N nodes.\n\n> Remove pg_dump's --no-synchronized-snapshots option since all supported server versions support synchronized snapshots (Tom Lane)\n\nIt'd be better to put that after the note about dropping support for upgrading\nclusters older than v9.2 in psql/pg_dump/pg_upgrade.\n\n> Enable system and TOAST btree indexes to efficiently store duplicates (Peter Geoghegan)\n\nSay \"btree indexes on system [and TOAST] tables\"\n\n> Prevent changes to columns only indexed by BRIN indexes from disabling HOT updates (Josef Simanek)\n\nThis was reverted\n\n> Generate periodic log message during slow server starts (Nitin Jadhav, Robert Haas)\nmessages plural\n\n> Messages report the cause of the delay. The time interval for notification is controlled by the new server variable log_startup_progress_interval.\n*The messages\n\n> Add server variable shared_memory_size to report the size of allocated shared memory (Nathan Bossart)\n> Add server variable shared_memory_size_in_huge_pages to report the number of huge memory pages required (Nathan Bossart)\n\nMaybe these should say server *parameter* since they're not really \"variable\".\n\n> 0. Add support for LZ4 and Zstandard compression of server-side base backups (Jeevan Ladhe, Robert Haas)\n> 1. Allow pg_basebackup to use LZ4 and Zstandard compression on server-side base backup files (Dipesh Pandit, Jeevan Ladhe)\n> 2. Allow pg_basebackup's --compress option to control the compression method and options (Michael Paquier, Robert Haas)\n> New options include server-gzip (gzip on the server), client-gzip (same as gzip).\n> 3. 
Allow pg_basebackup to compress on the server side and decompress on the client side before storage (Dipesh Pandit)\n> This is accomplished by specifying compression on the server side and plain output format.\n\nI still think these expose the incremental development rather than the\nuser-facing change.\n\n1. It seems wrong to say \"server-side\" since client-side compression with\nLZ4/zstd is also supported.\n\n2. It's confusing to say that the new options are server-gzip and client-gzip,\nsince it just mentioned new algorithms;\n\n3. I'm not sure this needs to be mentioned at all; maybe it should be a\n\"detail\" following the item about server-side compression.\n\n> Tables added to the listed schemas in the future will also be replicated.\n\n\"Tables later added\" is clearer. Otherwise \"in the future\" sounds like maybe\nin v16 or v17 we'll start replicating those tables.\n\n> Allow subscribers to stop logical replication application on error (Osumi Takamichi, Mark Dilger)\n\"application\" sounds off.\n\n> Add new default WAL-logged method for database creation (Dilip Kumar)\n\"New default\" sounds off. 
Say \"Add new WAL-logged method for database creation, used by default\".\n\n> Have pg_upgrade preserve relfilenodes, tablespace, and database OIDs between old and new clusters (Shruthi KC, Antonin Houska)\n\n\"tablespace OIDs\" or \"tablespace and database OIDs and relfilenodes\"\n\n> Limit support of pg_upgrade to old servers running PostgreSQL 9.2 and later (Tom Lane)\n\nThe word \"old\" doesn't appear in the 2 release notes items about pg_dump and\npsql, and \"old\" makes it sound like \"antique\" rather than \"source\".\n\n> Some internal-use-only types have also been assigned this column.\nthis *value\n\n> Allow custom scan providers to indicate if they support projections (Sven Klemm)\n> The default is now that custom scan providers can't support projections, so they need to be updated for this release.\n\nPer the commit message, they don't \"need\" to be updated.\nI think this should say \"The default now assumes that a custom scan provider\ndoes not support projections; to retain optimal performance, they should be\nupdated to indicate whether that's supported.\n\n\n",
"msg_date": "Mon, 18 Jul 2022 20:23:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 08:23:23PM -0500, Justin Pryzby wrote:\n> > Increase hash_mem_multiplier default to 2.0 (Peter Geoghegan)\n> > This allows query hash operations to use double the amount of work_mem memory as other operations.\n> \n> I wonder if it's worth pointing out that a query may end up using not just 2x\n> more memory (since work_mem*hash_mem_multiplier is per node), but 2**N more,\n> for N nodes.\n\nUh, I said \"per operation\" so people might realize there can be multiple\nwork_mem memory operations per query. I don't think I can do more in\nthis text. I can't think of a way to improve it without making it more\nconfusing.\n\n> > Remove pg_dump's --no-synchronized-snapshots option since all supported server versions support synchronized snapshots (Tom Lane)\n> \n> It'd be better to put that after the note about dropping support for upgrading\n> clusters older than v9.2 in psql/pg_dump/pg_upgrade.\n\nWell, I put the --no-synchronized-snapshots item in incompatibilities\nsince it is a user-visible change that might require script adjustments.\nHowever, I put the limit of pg_dump to 9.2 and greater into the pg_dump\nsection. Are you suggesting I move the--no-synchronized-snapshots item\ndown there? That doesn't match with the way I have listed other\nincompatibilities so I am resistant to do that.\n\n> > Enable system and TOAST btree indexes to efficiently store duplicates (Peter Geoghegan)\n> \n> Say \"btree indexes on system [and TOAST] tables\"\n\nOkay, updated to:\n\n Allow btree indexes on system and TOAST tables to efficiently\n store duplicates (Peter Geoghegan)\n\n> > Prevent changes to columns only indexed by BRIN indexes from disabling HOT updates (Josef Simanek)\n> \n> This was reverted\n\nAh, yes, removed.\n\n> > Generate periodic log message during slow server starts (Nitin Jadhav, Robert Haas)\n> messages plural\n> \n> > Messages report the cause of the delay. 
The time interval for notification is controlled by the new server variable log_startup_progress_interval.\n> *The messages\n\nAh, yes, fixed.\n\n> > Add server variable shared_memory_size to report the size of allocated shared memory (Nathan Bossart)\n> > Add server variable shared_memory_size_in_huge_pages to report the number of huge memory pages required (Nathan Bossart)\n> \n> Maybe these should say server *parameter* since they're not really \"variable\".\n\nUh, I think of parameters as something passed. We do call them server\nvariables, or read-only server variables. I can add \"read-only\" but it\nseems odd.\n\n> > 0. Add support for LZ4 and Zstandard compression of server-side base backups (Jeevan Ladhe, Robert Haas)\n> > 1. Allow pg_basebackup to use LZ4 and Zstandard compression on server-side base backup files (Dipesh Pandit, Jeevan Ladhe)\n> > 2. Allow pg_basebackup's --compress option to control the compression method and options (Michael Paquier, Robert Haas)\n> > New options include server-gzip (gzip on the server), client-gzip (same as gzip).\n> > 3. Allow pg_basebackup to compress on the server side and decompress on the client side before storage (Dipesh Pandit)\n> > This is accomplished by specifying compression on the server side and plain output format.\n> \n> I still think these expose the incremental development rather than the\n> user-facing change.\n\nWell, they are in different parts of the system, though they are clearly\nall related. I am afraid merging them would be even more confusing.\n\n> 1. It seems wrong to say \"server-side\" since client-side compression with\n> LZ4/zstd is also supported.\n\nAgreed. I changed it to:\n\n Allow pg_basebackup to do LZ4 and Zstandard server-side compression\n on base backup files (Dipesh Pandit, Jeevan Ladhe)\n\n> 2. 
It's confusing to say that the new options are server-gzip and client-gzip,\n> since it just mentioned new algorithms;\n\nI see your point since there will be new options for LZ4 and Zstandard\ntoo, so I just removed that paragraph.\n\n> 3. I'm not sure this needs to be mentioned at all; maybe it should be a\n> \"detail\" following the item about server-side compression.\n\nSee my concerns above --- it seems too complex to merge into something\nelse. However, I am open to an entire rewrite of these items.\n\n> > Tables added to the listed schemas in the future will also be replicated.\n> \n> \"Tables later added\" is clearer. Otherwise \"in the future\" sounds like maybe\n> in v16 or v17 we'll start replicating those tables.\n\nAgreed, new wording:\n\n\tTables added later to the listed schemas will also be replicated.\n\n> > Allow subscribers to stop logical replication application on error (Osumi Takamichi, Mark Dilger)\n> \"application\" sounds off.\n\nAgreed. New text is:\n\n\tAllow subscribers to stop the application of logical replication\n\tchanges on error\n\n> > Add new default WAL-logged method for database creation (Dilip Kumar)\n> \"New default\" sounds off. 
Say \"Add new WAL-logged method for database creation, used by default\".\n\nAgreed, new text:\n\n Add new <acronym>WAL</acronym>-logged method for <link\n linkend=\"sql-createdatabase\">database creation</link> (Dilip Kumar)\n\n This is the new default for database creation and avoids the need\n for checkpoints during database creation; the old method is still\n available.\n\n> > Have pg_upgrade preserve relfilenodes, tablespace, and database OIDs between old and new clusters (Shruthi KC, Antonin Houska)\n> \n> \"tablespace OIDs\" or \"tablespace and database OIDs and relfilenodes\"\n\nGood point, I went with:\n\n\tHave <application>pg_upgrade</application> preserve tablespace\n\tand database OIDs, and relfilenodes between old and new clusters\n\t(Shruthi KC, Antonin Houska)\n\n> > Limit support of pg_upgrade to old servers running PostgreSQL 9.2 and later (Tom Lane)\n> \n> The word \"old\" doesn't appear in the 2 release notes items about pg_dump and\n> psql, and \"old\" makes it sound sounds like \"antique\" rather than \"source\".\n\nUh, so pg_upgrade uses the terms \"old\" and \"new\" in its option names,\ne.g., oldbindir, newbindir. 
I don't think \"source\" would be an\nimprovement here.\n\n> > Some internal-use-only types have also been assigned this column.\n> this *value\n\nGood point, I went with:\n\n\tSome other internal-use-only values have also been assigned to\n\tthis column.\n\n> > Allow custom scan provders to indicate if they support projections (Sven Klemm)\n> > The default is now that custom scan providers can't support projections, so they need to be updated for this release.\n> \n> Per the commit message, they don't \"need\" to be updated.\n> I think this should say \"The default now assumes that a custom scan provider\n> does not support projections; to retain optimal performance, they should be\n> updated to indicate whether that's supported.\n\nOkay, I went with this text:\n\n The default is now that custom scan providers are assumed to not\n support projections; those that do need to be updated for this\n release.\n\nCumulative applied patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 19 Jul 2022 13:24:30 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 01:24:30PM -0400, Bruce Momjian wrote:\n> > > Remove pg_dump's --no-synchronized-snapshots option since all supported server versions support synchronized snapshots (Tom Lane)\n> > \n> > It'd be better to put that after the note about dropping support for upgrading\n> > clusters older than v9.2 in psql/pg_dump/pg_upgrade.\n> \n> Well, I put the --no-synchronized-snapshots item in incompatibilities\n> since it is a user-visible change that might require script adjustments.\n> However, I put the limit of pg_dump to 9.2 and greater into the pg_dump\n> section. Are you suggesting I move the--no-synchronized-snapshots item\n> down there? That doesn't match with the way I have listed other\n> incompatibilities so I am resistant to do that.\n\nI'd rather see the \"limit support to v9.2\" be moved or added to the\n\"incompatibilities\" section, maybe with \"remove --no-synchronized-snapshots\"\nas a secondary sentence.\n\n> > > 0. Add support for LZ4 and Zstandard compression of server-side base backups (Jeevan Ladhe, Robert Haas)\n> > > 1. Allow pg_basebackup to use LZ4 and Zstandard compression on server-side base backup files (Dipesh Pandit, Jeevan Ladhe)\n> > > 2. Allow pg_basebackup's --compress option to control the compression method and options (Michael Paquier, Robert Haas)\n> > > New options include server-gzip (gzip on the server), client-gzip (same as gzip).\n> > > 3. Allow pg_basebackup to compress on the server side and decompress on the client side before storage (Dipesh Pandit)\n> > > This is accomplished by specifying compression on the server side and plain output format.\n> > \n> > I still think these expose the incremental development rather than the\n> > user-facing change.\n> \n> > 1. It seems wrong to say \"server-side\" since client-side compression with\n> > LZ4/zstd is also supported.\n> \n> Agreed. 
I changed it to:\n> \n> Allow pg_basebackup to do LZ4 and Zstandard server-side compression\n> on base backup files (Dipesh Pandit, Jeevan Ladhe)\n\nThis still misses the point that those compression algs are also supported on\nthe client side, so it seems misleading to mention \"server-side\" support.\n\n> > > Allow custom scan provders to indicate if they support projections (Sven Klemm)\n> > > The default is now that custom scan providers can't support projections, so they need to be updated for this release.\n> > \n> > Per the commit message, they don't \"need\" to be updated.\n> > I think this should say \"The default now assumes that a custom scan provider\n> > does not support projections; to retain optimal performance, they should be\n> > updated to indicate whether that's supported.\n> \n> Okay, I went with this text:\n> \n> The default is now that custom scan providers are assumed to not\n> support projections; those that do need to be updated for this\n> release.\n\nI'd say \"those that do *will need to be updated\" otherwise the sentence can\nsound like it means \"those that need to be updated [will] ...\"\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Tue, 19 Jul 2022 13:13:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 19, 2022 at 01:13:07PM -0500, Justin Pryzby wrote:\n> On Tue, Jul 19, 2022 at 01:24:30PM -0400, Bruce Momjian wrote:\n> > Well, I put the --no-synchronized-snapshots item in incompatibilities\n> > since it is a user-visible change that might require script adjustments.\n> > However, I put the limit of pg_dump to 9.2 and greater into the pg_dump\n> > section. Are you suggesting I move the--no-synchronized-snapshots item\n> > down there? That doesn't match with the way I have listed other\n> > incompatibilities so I am resistant to do that.\n> \n> I'd rather see the \"limit support to v9.2\" be moved or added to the\n> \"incompatibilities\" section, maybe with \"remove --no-synchronized-snapshots\"\n> as a secondary sentence.\n\nIs removing support for an older version an incompatibility --- I didn't\nthink so.\n\n> > > > 0. Add support for LZ4 and Zstandard compression of server-side base backups (Jeevan Ladhe, Robert Haas)\n> > > > 1. Allow pg_basebackup to use LZ4 and Zstandard compression on server-side base backup files (Dipesh Pandit, Jeevan Ladhe)\n> > > > 2. Allow pg_basebackup's --compress option to control the compression method and options (Michael Paquier, Robert Haas)\n> > > > New options include server-gzip (gzip on the server), client-gzip (same as gzip).\n> > > > 3. Allow pg_basebackup to compress on the server side and decompress on the client side before storage (Dipesh Pandit)\n> > > > This is accomplished by specifying compression on the server side and plain output format.\n> > > \n> > > I still think these expose the incremental development rather than the\n> > > user-facing change.\n> > \n> > > 1. It seems wrong to say \"server-side\" since client-side compression with\n> > > LZ4/zstd is also supported.\n> > \n> > Agreed. 
I changed it to:\n> > \n> > Allow pg_basebackup to do LZ4 and Zstandard server-side compression\n> > on base backup files (Dipesh Pandit, Jeevan Ladhe)\n> \n> This still misses the point that those compression algs are also supported on\n> the client side, so it seems misleading to mention \"server-side\" support.\n\nI reworked that paragraph in the attached patch. What we did was to add\nserver-side gzip/LZ/ZSTD, and client-side LZ/ZSTD. (We already had\nclient-side gzip.) Hopefully the new text is clearer. You can see the\nnew output here:\n\n\thttps://momjian.us/pgsql_docs/release-15.html\n\n> > > > Allow custom scan provders to indicate if they support projections (Sven Klemm)\n> > > > The default is now that custom scan providers can't support projections, so they need to be updated for this release.\n> > > \n> > > Per the commit message, they don't \"need\" to be updated.\n> > > I think this should say \"The default now assumes that a custom scan provider\n> > > does not support projections; to retain optimal performance, they should be\n> > > updated to indicate whether that's supported.\n> > \n> > Okay, I went with this text:\n> > \n> > The default is now that custom scan providers are assumed to not\n> > support projections; those that do need to be updated for this\n> > release.\n> \n> I'd say \"those that do *will need to be updated\" otherwise the sentence can\n> sound like it means \"those that need to be updated [will] ...\"\n\nOh, good point, done.\n\nCumulative patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Tue, 19 Jul 2022 16:42:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 02:47:07PM -0400, Bruce Momjian wrote:\n> On Mon, Jul 11, 2022 at 11:31:32PM -0700, Noah Misch wrote:\n> > On Mon, Jul 11, 2022 at 12:39:57PM -0400, Bruce Momjian wrote:\n> > > I had trouble reading the sentences in the order you used so I\n> > > restructured it:\n> > > \n> > > \tThe new default is one of the secure schema usage patterns that <xref\n> > > \tlinkend=\"ddl-schemas-patterns\"/> has recommended since the security\n> > > \trelease for CVE-2018-1058. The change applies to newly-created\n> > > \tdatabases in existing clusters and for new clusters. Upgrading a\n> > > \tcluster or restoring a database dump will preserve existing permissions.\n> > \n> > I agree with the sentence order change.\n> \n> Great.\n> \n> > > \tFor existing databases, especially those having multiple users, consider\n> > > \tissuing <literal>REVOKE</literal> to adopt this new default. For new\n> > > \tdatabases having zero need to defend against insider threats, granting\n> > > \t<literal>USAGE</literal> permission on their <literal>public</literal>\n> > > \tschemas will yield the behavior of prior releases.\n> > \n> > s/USAGE/CREATE/ in the last sentence. Looks good with that change.\n> \n> Ah, yes, of course.\n\nPatch applied, I also adjusted the second paragraph to be more\nsymmetric. You can see the results here:\n\n\thttps://momjian.us/pgsql_docs/release-15.html\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 21 Jul 2022 13:44:13 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "Hi,\n\nI noticed a stray \"DETAILS?\" marker while going through the release\nnotes for 15. Is that subsection still under construction or review?\n\n> <listitem>\n> <para>\n> Record and check the collation of each <link\n> linkend=\"sql-createdatabase\">database</link> (Peter Eisentraut)\n> [...]\n> to match the operating system collation version. DETAILS?\n> </para>\n> </listitem>\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Sat, 27 Aug 2022 16:03:02 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 04:03:02PM +0200, Matthias van de Meent wrote:\n> Hi,\n> \n> I noticed a stray \"DETAILS?\" marker while going through the release\n> notes for 15. Is that subsection still under construction or review?\n> \n> > <listitem>\n> > <para>\n> > Record and check the collation of each <link\n> > linkend=\"sql-createdatabase\">database</link> (Peter Eisentraut)\n> > [...]\n> > to match the operating system collation version. DETAILS?\n> > </para>\n> > </listitem>\n\nGood question --- the full text is:\n\n <listitem>\n <para>\n Record and check the collation of each <link\n linkend=\"sql-createdatabase\">database</link> (Peter Eisentraut)\n </para>\n\n <para>\n This is designed to detect collation\n mismatches to avoid data corruption. Function\n <function>pg_database_collation_actual_version()</function>\n reports the underlying operating system collation version, and\n <command>ALTER DATABASE ... REFRESH</command> sets the database\n to match the operating system collation version. DETAILS?\n </para>\n </listitem>\n\nI just can't figure out what the user needs to understand about this,\nand I understand very little of it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 16:42:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 30.08.22 22:42, Bruce Momjian wrote:\n> Good question --- the full text is:\n> \n> <listitem>\n> <para>\n> Record and check the collation of each <link\n> linkend=\"sql-createdatabase\">database</link> (Peter Eisentraut)\n> </para>\n> \n> <para>\n> This is designed to detect collation\n> mismatches to avoid data corruption. Function\n> <function>pg_database_collation_actual_version()</function>\n> reports the underlying operating system collation version, and\n> <command>ALTER DATABASE ... REFRESH</command> sets the database\n> to match the operating system collation version. DETAILS?\n> </para>\n> </listitem>\n> \n> I just can't figure out what the user needs to understand about this,\n> and I understand very little of it.\n\nWe already had this feature for (schema-level) collations, now we have \nit on the level of the database collation.\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:38:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 11:38:33AM +0200, Peter Eisentraut wrote:\n> On 30.08.22 22:42, Bruce Momjian wrote:\n> > Good question --- the full text is:\n> > \n> > <listitem>\n> > <para>\n> > Record and check the collation of each <link\n> > linkend=\"sql-createdatabase\">database</link> (Peter Eisentraut)\n> > </para>\n> > \n> > <para>\n> > This is designed to detect collation\n> > mismatches to avoid data corruption. Function\n> > <function>pg_database_collation_actual_version()</function>\n> > reports the underlying operating system collation version, and\n> > <command>ALTER DATABASE ... REFRESH</command> sets the database\n> > to match the operating system collation version. DETAILS?\n> > </para>\n> > </listitem>\n> > \n> > I just can't figure out what the user needs to understand about this,\n> > and I understand very little of it.\n> \n> We already had this feature for (schema-level) collations, now we have it on\n> the level of the database collation.\n\nOkay, I figured out the interplay between OS collation version support,\ncollation libraries, and collation levels. Here is an updated patch for\nthe release notes.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson",
"msg_date": "Wed, 31 Aug 2022 16:03:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 04:03:06PM -0400, Bruce Momjian wrote:\n> On Wed, Aug 31, 2022 at 11:38:33AM +0200, Peter Eisentraut wrote:\n> > On 30.08.22 22:42, Bruce Momjian wrote:\n> > > I just can't figure out what the user needs to understand about this,\n> > > and I understand very little of it.\n> > \n> > We already had this feature for (schema-level) collations, now we have it on\n> > the level of the database collation.\n> \n> Okay, I figured out the interplay between OS collation version support,\n> collation libraries, and collation levels. Here is an updated patch for\n> the release notes.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 21:47:50 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 5/10/22 11:44 AM, Bruce Momjian wrote:\r\n> I have completed the first draft of the PG 15 release notes \r\n\r\n> I assume there will be major adjustments in the next few weeks based on\r\n> feedback.\r\n\r\nI wanted to propose the \"major enhancements\" section to see if we can \r\nget an iteration in prior to Beta 4. Please see the attached patch.\r\n\r\nDo we want to include anything else, or substitute any of the items?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sun, 4 Sep 2022 13:41:59 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "I noticed that the v15 release notes still refer to pg_checkpointer, which\nwas renamed to pg_checkpoint in b9eb0ff.\n\ndiff --git a/doc/src/sgml/release-15.sgml b/doc/src/sgml/release-15.sgml\nindex d432c2db44..362728753a 100644\n--- a/doc/src/sgml/release-15.sgml\n+++ b/doc/src/sgml/release-15.sgml\n@@ -1255,7 +1255,7 @@ Author: Jeff Davis <jdavis@postgresql.org>\n <listitem>\n <para>\n Add predefined role <link\n- linkend=\"predefined-roles-table\"><literal>pg_checkpointer</literal></link>\n+ linkend=\"predefined-roles-table\"><literal>pg_checkpoint</literal></link>\n that allows members to run <command>CHECKPOINT</command>\n (Jeff Davis)\n </para>\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 4 Sep 2022 11:42:54 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 9/4/22 2:42 PM, Nathan Bossart wrote:\r\n> I noticed that the v15 release notes still refer to pg_checkpointer, which\r\n> was renamed to pg_checkpoint in b9eb0ff.\r\n> \r\n> diff --git a/doc/src/sgml/release-15.sgml b/doc/src/sgml/release-15.sgml\r\n> index d432c2db44..362728753a 100644\r\n> --- a/doc/src/sgml/release-15.sgml\r\n> +++ b/doc/src/sgml/release-15.sgml\r\n> @@ -1255,7 +1255,7 @@ Author: Jeff Davis <jdavis@postgresql.org>\r\n> <listitem>\r\n> <para>\r\n> Add predefined role <link\r\n> - linkend=\"predefined-roles-table\"><literal>pg_checkpointer</literal></link>\r\n> + linkend=\"predefined-roles-table\"><literal>pg_checkpoint</literal></link>\r\n> that allows members to run <command>CHECKPOINT</command>\r\n> (Jeff Davis)\r\n> </para>\r\n\r\nNudging on folks to review the major features language for the docs \r\n(pg15-maj-features.patch).\r\n\r\nSeparately, per[1], including dense_rank() in the list of window \r\nfunctions with optimizations (dense-rank.diff).\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/CAApHDvpr6N7egNfSttGdQMfL%2BKYBjUb_Zf%2BrHULb7_2k4V%3DGGg%40mail.gmail.com",
"msg_date": "Mon, 12 Sep 2022 17:31:36 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 2022-Sep-12, Jonathan S. Katz wrote:\n\n> + <listitem>\n> + <para>\n> + Column-level and row-level filtering on\n> + <link linkend=\"logical-replication\">logical replication</link>\n> + publications.\n> + </para>\n> + </listitem>\n\n-column-level filtering\n+the ability to specify column lists\n\n> + Row-level filtering and the ability to specify column lists on\n> + <link linkend=\"logical-replication\">logical replication</link>\n> + publications.\n\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n",
"msg_date": "Tue, 13 Sep 2022 13:13:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 9/13/22 7:13 AM, Alvaro Herrera wrote:\r\n> On 2022-Sep-12, Jonathan S. Katz wrote:\r\n> \r\n>> + <listitem>\r\n>> + <para>\r\n>> + Column-level and row-level filtering on\r\n>> + <link linkend=\"logical-replication\">logical replication</link>\r\n>> + publications.\r\n>> + </para>\r\n>> + </listitem>\r\n> \r\n> -column-level filtering\r\n> +the ability to specify column lists\r\n\r\nAdjusted to be similar to your suggestion. Updated patch attached.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 13 Sep 2022 11:47:16 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Adjusted to be similar to your suggestion. Updated patch attached.\n\nI pushed this with a bit more copy-editing.\n\nI'm planning to do a final(?) pass over the v15 notes today,\nbut I thought it'd be appropriate to push this separately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Sep 2022 11:25:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 9/23/22 11:25 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Adjusted to be similar to your suggestion. Updated patch attached.\r\n> \r\n> I pushed this with a bit more copy-editing.\r\n> \r\n> I'm planning to do a final(?) pass over the v15 notes today,\r\n> but I thought it'd be appropriate to push this separately.\r\n\r\nThanks!\r\n\r\nRE \"final pass\", there's still an errant \"BACKPATCHED\"[1] that still \r\nneeds addressing. I didn't have a chance to verify if it was indeed \r\nbackpatched.\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=doc/src/sgml/release-15.sgml;hb=refs/heads/REL_15_STABLE#l460",
"msg_date": "Fri, 23 Sep 2022 12:37:34 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/23/22 11:25 AM, Tom Lane wrote:\n>> I'm planning to do a final(?) pass over the v15 notes today,\n>> but I thought it'd be appropriate to push this separately.\n\n> RE \"final pass\", there's still an errant \"BACKPATCHED\"[1] that still \n> needs addressing. I didn't have a chance to verify if it was indeed \n> backpatched.\n\nYeah, that one indeed needs removed (and I've done so). I see a\nfew other places where Bruce left notes about things that need more\nclarification. I'm just finishing a pass of \"update for subsequent\ncommits\", and then I'll start on copy-editing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 23 Sep 2022 13:33:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On 9/23/22 1:33 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On 9/23/22 11:25 AM, Tom Lane wrote:\r\n>>> I'm planning to do a final(?) pass over the v15 notes today,\r\n>>> but I thought it'd be appropriate to push this separately.\r\n> \r\n>> RE \"final pass\", there's still an errant \"BACKPATCHED\"[1] that still\r\n>> needs addressing. I didn't have a chance to verify if it was indeed\r\n>> backpatched.\r\n> \r\n> Yeah, that one indeed needs removed (and I've done so). I see a\r\n> few other places where Bruce left notes about things that need more\r\n> clarification. I'm just finishing a pass of \"update for subsequent\r\n> commits\", and then I'll start on copy-editing.\r\n\r\nACK. I will available to review during the weekend (Sunday).\r\n\r\nJonathan",
"msg_date": "Fri, 23 Sep 2022 17:07:38 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Fri, Sep 23, 2022 at 01:33:07PM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 9/23/22 11:25 AM, Tom Lane wrote:\n> >> I'm planning to do a final(?) pass over the v15 notes today,\n> >> but I thought it'd be appropriate to push this separately.\n> \n> > RE \"final pass\", there's still an errant \"BACKPATCHED\"[1] that still \n> > needs addressing. I didn't have a chance to verify if it was indeed \n> > backpatched.\n> \n> Yeah, that one indeed needs removed (and I've done so). I see a\n> few other places where Bruce left notes about things that need more\n> clarification. I'm just finishing a pass of \"update for subsequent\n> commits\", and then I'll start on copy-editing.\n\nSome possible changes for your consideration.\n\n+ Store <application>pg_upgrade</application>'s log and\n+ temporary files in a subdirectory of the new cluster called\n+ <filename>pg_upgrade_output.d</filename> (Justin Pryzby)\n\n+ Previously such files were left in the current directory,\n+ requiring manual cleanup. It's still necessary to remove them\n+ manually afterwards, but now one can just remove that whole\n+ subdirectory.\n\nIf pg_upgrade succeeds, then it removes the dir itself (so it's not\n\"necessary\").\n\nAnd if it fails after starting to restore the schema, then it's\nnecessary to remove not the \"subdirectory\" but the whole new-cluster\ndir.\n\n+ Make <application>pg_upgrade</application> preserve tablespace\n+ and database OIDs, as well as table relfilenode numbers\n\ns/table/relation/ ?\n\nYou changed this to use spaces:\n| The new setting is <literal>log_destination = jsonlog</literal>.\nbut then left these without spaces:\n| and <literal>wal_level=minimal</literal>.\n| This is enabled via <literal>--compress=lz4</literal> and requires\n\n+ value, use the transaction start time not wall clock time to\n\ns/not/rather than/ ?\n\n+ Adjust <application>psql</application> so that Readline's\n\nshould use <productname>Readline ?\n\n+ Previously a pound marker was inserted, but that's pretty\n+ unhelpful in SQL.\n\nThis sounds more like a candid commit message than a release note.\n\n+ Improve performance of dumping databases with many objects\n\ns/of/when/ ?\n\n+ New options are <literal>server</literal> to write the\n\n*The* new options\n\n+ In some cases a partition child table could appear more than once.\n\nTechnically \"partition child table\" is redundant\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 25 Sep 2022 16:50:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Sep 23, 2022 at 01:33:07PM -0400, Tom Lane wrote:\n> + Previously such files were left in the current directory,\n> + requiring manual cleanup. It's still necessary to remove them\n> + manually afterwards, but now one can just remove that whole\n> + subdirectory.\n\n> If pg_upgrade succeeds, then it removes the dir itself (so it's not\n> \"necessary\").\n\nAh, I'd only ever paid attention to failure cases, so I didn't\nrealize that :-(. Text adjusted:\n\n Previously such files were left in the current directory,\n requiring manual cleanup. Now they are automatically removed on\n successful completion of <application>pg_upgrade</application>.\n\nI took most of your other suggestions, too. Thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Sep 2022 14:34:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, 13 Sept 2022 at 09:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> Separately, per[1], including dense_rank() in the list of window\n> functions with optimizations (dense-rank.diff).\n\nThis one might have been forgotten... ? I can push it shortly if nobody objects.\n\nDavid\n\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvpr6N7egNfSttGdQMfL%2BKYBjUb_Zf%2BrHULb7_2k4V%3DGGg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 10:28:55 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 13 Sept 2022 at 09:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>> Separately, per[1], including dense_rank() in the list of window\n>> functions with optimizations (dense-rank.diff).\n\n> This one might have been forgotten... ? I can push it shortly if nobody objects.\n\nYeah, I missed that one. We're theoretically in the wrap freeze for\n15rc1, but I don't have a problem with release-note changes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Sep 2022 17:45:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
},
{
"msg_contents": "On Tue, 27 Sept 2022 at 10:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Tue, 13 Sept 2022 at 09:31, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> >> Separately, per[1], including dense_rank() in the list of window\n> >> functions with optimizations (dense-rank.diff).\n>\n> > This one might have been forgotten... ? I can push it shortly if nobody objects.\n>\n> Yeah, I missed that one. We're theoretically in the wrap freeze for\n> 15rc1, but I don't have a problem with release-note changes.\n\nThanks. I've just pushed it.\n\nDavid\n\n\n",
"msg_date": "Tue, 27 Sep 2022 10:59:04 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: First draft of the PG 15 release notes"
}
]
[
{
"msg_contents": "(Sorry in advance if this is off-topic of -hackers, and please head me\nto the right place if so.)\n\nI'm stuck by connection failure to gitmaster.\n\nI told that I already have the commit-bit on pgtranslation repository\nfor the community account \"horiguti\".\n\nI did the following steps.\n\n1. Add the public key for git-access to \"SSH Key\" field of \"Edit User\n Profile\" page.(https://www.postgresql.org/account/profile/) I did\n this more than few months ago.\n\n2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n\nThe problem for me here is I get \"Permission denied\" by the second\nstep.\n\nThe following is an extract of verbose log when I did:\n\n> GIT_SSH_COMMAND=\"ssh -vvvv\" git clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n\ndebug1: Authenticating to gitmaster.postgresql.org:22 as 'git'\ndebug1: Offering public key: /home/horiguti/.ssh/postgresql ECDSA SHA256:zMOonb8...\ndebug3: send packet: type 50\ndebug2: we sent a publickey packet, wait for reply\ndebug3: receive packet: type 51\n\nThe account and host looks correct. The server returns 51\n(SSH_MSG_USERAUTH_FAILURE), which means the server didn't find my\npublic key, but the fingerprint shown above coincides with that of the\nregistered public key. I don't have a clue of the reason from my side.\n\nPlease someone tell me what to do to get over the situation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 May 2022 16:21:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "gitmaster access"
},
{
"msg_contents": "Hi\n\nOn Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> (Sorry in advance if this is off-topic of -hackers, and please head me\n> to the right place if so.)\n>\n> I'm stuck by connection failure to gitmaster.\n>\n> I told that I already have the commit-bit on pgtranslation repository\n> for the community account \"horiguti\".\n>\n> I did the following steps.\n>\n> 1. Add the public key for git-access to \"SSH Key\" field of \"Edit User\n> Profile\" page.(https://www.postgresql.org/account/profile/) I did\n> this more than few months ago.\n>\n> 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n>\n\nThe correct repo is ssh://git@git.postgresql.org/pgtranslation/messages.git.\n\n\n>\n> The problem for me here is I get \"Permission denied\" by the second\n> step.\n>\n> The following is an extract of verbose log when I did:\n>\n> > GIT_SSH_COMMAND=\"ssh -vvvv\" git clone ssh://\n> git@gitmaster.postgresql.org/pgtranslation/messages.git\n>\n> debug1: Authenticating to gitmaster.postgresql.org:22 as 'git'\n> debug1: Offering public key: /home/horiguti/.ssh/postgresql ECDSA\n> SHA256:zMOonb8...\n> debug3: send packet: type 50\n> debug2: we sent a publickey packet, wait for reply\n> debug3: receive packet: type 51\n>\n> The account and host looks correct. The server returns 51\n> (SSH_MSG_USERAUTH_FAILURE), which means the server didn't find my\n> public key, but the fingerprint shown above coincides with that of the\n> registered public key. I don't have a clue of the reason from my side.\n>\n> Please someone tell me what to do to get over the situation.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>\n>\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:(Sorry in advance if this is off-topic of -hackers, and please head me\nto the right place if so.)\n\nI'm stuck by connection failure to gitmaster.\n\nI told that I already have the commit-bit on pgtranslation repository\nfor the community account \"horiguti\".\n\nI did the following steps.\n\n1. Add the public key for git-access to \"SSH Key\" field of \"Edit User\n Profile\" page.(https://www.postgresql.org/account/profile/) I did\n this more than few months ago.\n\n2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.The correct repo is ssh://git@git.postgresql.org/pgtranslation/messages.git. \n\nThe problem for me here is I get \"Permission denied\" by the second\nstep.\n\nThe following is an extract of verbose log when I did:\n\n> GIT_SSH_COMMAND=\"ssh -vvvv\" git clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n\ndebug1: Authenticating to gitmaster.postgresql.org:22 as 'git'\ndebug1: Offering public key: /home/horiguti/.ssh/postgresql ECDSA SHA256:zMOonb8...\ndebug3: send packet: type 50\ndebug2: we sent a publickey packet, wait for reply\ndebug3: receive packet: type 51\n\nThe account and host looks correct. The server returns 51\n(SSH_MSG_USERAUTH_FAILURE), which means the server didn't find my\npublic key, but the fingerprint shown above coincides with that of the\nregistered public key. I don't have a clue of the reason from my side.\n\nPlease someone tell me what to do to get over the situation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 11 May 2022 08:46:40 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Wed, 11 May 2022 08:46:40 +0100, Dave Page <dpage@pgadmin.org> wrote in \n> Hi\n> \n> On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n> >\n> \n> The correct repo is ssh://git@git.postgresql.org/pgtranslation/messages.git.\n\nThanks for the reply. I didn't wrote, but I have tried that and had\nthe same result.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 May 2022 16:55:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "Hi\n\nOn Wed, 11 May 2022 at 08:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Wed, 11 May 2022 08:46:40 +0100, Dave Page <dpage@pgadmin.org> wrote\n> in\n> > Hi\n> >\n> > On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com\n> >\n> > wrote:\n> > > 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n> .\n> > >\n> >\n> > The correct repo is ssh://\n> git@git.postgresql.org/pgtranslation/messages.git.\n>\n> Thanks for the reply. I didn't wrote, but I have tried that and had\n> the same result.\n>\n\nWhat is your community user ID?\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Wed, 11 May 2022 at 08:55, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Wed, 11 May 2022 08:46:40 +0100, Dave Page <dpage@pgadmin.org> wrote in \n> Hi\n> \n> On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n> >\n> \n> The correct repo is ssh://git@git.postgresql.org/pgtranslation/messages.git.\n\nThanks for the reply. I didn't wrote, but I have tried that and had\nthe same result.What is your community user ID? -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 11 May 2022 09:08:26 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Wed, 11 May 2022 09:08:26 +0100, Dave Page <dpage@pgadmin.org> wrote in \n> What is your community user ID?\n\nMy community user name is \"horiguti\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 May 2022 17:25:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "> Hi\n> \n> On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n>> (Sorry in advance if this is off-topic of -hackers, and please head me\n>> to the right place if so.)\n>>\n>> I'm stuck by connection failure to gitmaster.\n>>\n>> I told that I already have the commit-bit on pgtranslation repository\n>> for the community account \"horiguti\".\n>>\n>> I did the following steps.\n>>\n>> 1. Add the public key for git-access to \"SSH Key\" field of \"Edit User\n>> Profile\" page.(https://www.postgresql.org/account/profile/) I did\n>> this more than few months ago.\n>>\n>> 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n>>\n> \n> The correct repo is ssh://git@git.postgresql.org/pgtranslation/messages.git.\n\nThis does not work for me neither. However, in my case following works:\n\nssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n\nAlso Tom Lane said:\nOn Sun, May 1, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> > This is ok:\n> > git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n>\n> That's the thing to use if you're a committer.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 11 May 2022 17:34:37 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "Hi\n\nOn Wed, 11 May 2022 at 09:34, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > Hi\n> >\n> > On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com\n> >\n> > wrote:\n> >\n> >> (Sorry in advance if this is off-topic of -hackers, and please head me\n> >> to the right place if so.)\n> >>\n> >> I'm stuck by connection failure to gitmaster.\n> >>\n> >> I told that I already have the commit-bit on pgtranslation repository\n> >> for the community account \"horiguti\".\n> >>\n> >> I did the following steps.\n> >>\n> >> 1. Add the public key for git-access to \"SSH Key\" field of \"Edit User\n> >> Profile\" page.(https://www.postgresql.org/account/profile/) I did\n> >> this more than few months ago.\n> >>\n> >> 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n> >>\n> >\n> > The correct repo is ssh://\n> git@git.postgresql.org/pgtranslation/messages.git.\n>\n> This does not work for me neither. However, in my case following works:\n>\n> ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n\n\nIf that works, then colour me confused because:\n\ngemulon:~# host gitmaster.postgresql.org\ngitmaster.postgresql.org is an alias for gemulon.postgresql.org.\ngemulon.postgresql.org has address 72.32.157.198\ngemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198\ngemulon:~# find / -name pgtranslation\ngemulon:~# find / -name messages.git\ngemulon:~# ls -al /home/git/repositories/\ntotal 16\ndrwxr-xr-x 4 git git 4096 Jan 4 2020 .\ndrwxr-xr-x 8 git git 4096 May 11 09:03 ..\ndrwxr-xr-x 7 git git 4096 Jan 4 2020 mhatest.git\ndrwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git\ngemulon:~#\n\n\n>\n>\n> Also Tom Lane said:\n> On Sun, May 1, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> > > This is ok:\n> > > git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n> >\n> > That's the thing to use if you're a committer.\n>\n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n>\n\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Wed, 11 May 2022 at 09:34, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:> Hi\n> \n> On Wed, 11 May 2022 at 08:21, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n>> (Sorry in advance if this is off-topic of -hackers, and please head me\n>> to the right place if so.)\n>>\n>> I'm stuck by connection failure to gitmaster.\n>>\n>> I told that I already have the commit-bit on pgtranslation repository\n>> for the community account \"horiguti\".\n>>\n>> I did the following steps.\n>>\n>> 1. Add the public key for git-access to \"SSH Key\" field of \"Edit User\n>> Profile\" page.(https://www.postgresql.org/account/profile/) I did\n>> this more than few months ago.\n>>\n>> 2. Clone ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git.\n>>\n> \n> The correct repo is ssh://git@git.postgresql.org/pgtranslation/messages.git.\n\nThis does not work for me neither. However, in my case following works:\n\nssh://git@gitmaster.postgresql.org/pgtranslation/messages.gitIf that works, then colour me confused because:gemulon:~# host gitmaster.postgresql.orggitmaster.postgresql.org is an alias for gemulon.postgresql.org.gemulon.postgresql.org has address 72.32.157.198gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198gemulon:~# find / -name pgtranslationgemulon:~# find / -name messages.gitgemulon:~# ls -al /home/git/repositories/total 16drwxr-xr-x 4 git git 4096 Jan 4 2020 .drwxr-xr-x 8 git git 4096 May 11 09:03 ..drwxr-xr-x 7 git git 4096 Jan 4 2020 mhatest.gitdrwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.gitgemulon:~# \n\nAlso Tom Lane said:\nOn Sun, May 1, 2022 at 4:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> > This is ok:\n> > git clone ssh://git@gitmaster.postgresql.org/postgresql.git\n>\n> That's the thing to use if you're a committer.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 11 May 2022 10:27:00 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Wed, 11 May 2022 at 09:25, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Wed, 11 May 2022 09:08:26 +0100, Dave Page <dpage@pgadmin.org> wrote\n> in\n> > What is your community user ID?\n>\n> My community user name is \"horiguti\".\n>\n\nOK, so you have write access on the repo on git.postgresql.org, but I can't\nfind an SSH key for your account on the system. Can you check\nhttps://www.postgresql.org/account/profile/ and make sure you've got the\ncorrect SSH key in your profile? If you add one, it might take 10 minutes\nor so to make its way to the git server.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Wed, 11 May 2022 at 09:25, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Wed, 11 May 2022 09:08:26 +0100, Dave Page <dpage@pgadmin.org> wrote in \n> What is your community user ID?\n\nMy community user name is \"horiguti\".OK, so you have write access on the repo on git.postgresql.org, but I can't find an SSH key for your account on the system. Can you check https://www.postgresql.org/account/profile/ and make sure you've got the correct SSH key in your profile? If you add one, it might take 10 minutes or so to make its way to the git server. -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 11 May 2022 10:40:00 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": ">> This does not work for me neither. However, in my case following works:\n>>\n>> ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n> \n> \n> If that works, then colour me confused because:\n> \n> gemulon:~# host gitmaster.postgresql.org\n> gitmaster.postgresql.org is an alias for gemulon.postgresql.org.\n> gemulon.postgresql.org has address 72.32.157.198\n> gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198\n> gemulon:~# find / -name pgtranslation\n> gemulon:~# find / -name messages.git\n> gemulon:~# ls -al /home/git/repositories/\n> total 16\n> drwxr-xr-x 4 git git 4096 Jan 4 2020 .\n> drwxr-xr-x 8 git git 4096 May 11 09:03 ..\n> drwxr-xr-x 7 git git 4096 Jan 4 2020 mhatest.git\n> drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git\n> gemulon:~#\n\nSorry, I meant ssh://git@gitmaster.postgresql.org/postgresql.git\nworks, but ssh://git@git.postgresql.org/postgresql.git does not work\nfor me.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Wed, 11 May 2022 21:55:55 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "Hi\n\nOn Wed, 11 May 2022 at 13:56, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> >> This does not work for me neither. However, in my case following works:\n> >>\n> >> ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n> >\n> >\n> > If that works, then colour me confused because:\n> >\n> > gemulon:~# host gitmaster.postgresql.org\n> > gitmaster.postgresql.org is an alias for gemulon.postgresql.org.\n> > gemulon.postgresql.org has address 72.32.157.198\n> > gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198\n> > gemulon:~# find / -name pgtranslation\n> > gemulon:~# find / -name messages.git\n> > gemulon:~# ls -al /home/git/repositories/\n> > total 16\n> > drwxr-xr-x 4 git git 4096 Jan 4 2020 .\n> > drwxr-xr-x 8 git git 4096 May 11 09:03 ..\n> > drwxr-xr-x 7 git git 4096 Jan 4 2020 mhatest.git\n> > drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git\n> > gemulon:~#\n>\n> Sorry, I meant ssh://git@gitmaster.postgresql.org/postgresql.git\n> works, but ssh://git@git.postgresql.org/postgresql.git does not work\n> for me.\n>\n\nThat is expected; no one has write access to that repo (and we only include\nSSH keys for users with write access).\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Wed, 11 May 2022 at 13:56, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:>> This does not work for me neither. However, in my case following works:\n>>\n>> ssh://git@gitmaster.postgresql.org/pgtranslation/messages.git\n> \n> \n> If that works, then colour me confused because:\n> \n> gemulon:~# host gitmaster.postgresql.org\n> gitmaster.postgresql.org is an alias for gemulon.postgresql.org.\n> gemulon.postgresql.org has address 72.32.157.198\n> gemulon.postgresql.org has IPv6 address 2001:4800:3e1:1::198\n> gemulon:~# find / -name pgtranslation\n> gemulon:~# find / -name messages.git\n> gemulon:~# ls -al /home/git/repositories/\n> total 16\n> drwxr-xr-x 4 git git 4096 Jan 4 2020 .\n> drwxr-xr-x 8 git git 4096 May 11 09:03 ..\n> drwxr-xr-x 7 git git 4096 Jan 4 2020 mhatest.git\n> drwxr-sr-x 7 git git 4096 May 11 06:39 postgresql.git\n> gemulon:~#\n\nSorry, I meant ssh://git@gitmaster.postgresql.org/postgresql.git\nworks, but ssh://git@git.postgresql.org/postgresql.git does not work\nfor me.That is expected; no one has write access to that repo (and we only include SSH keys for users with write access). -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 11 May 2022 14:03:56 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": ">> Sorry, I meant ssh://git@gitmaster.postgresql.org/postgresql.git\n>> works, but ssh://git@git.postgresql.org/postgresql.git does not work\n>> for me.\n>>\n> \n> That is expected; no one has write access to that repo (and we only include\n> SSH keys for users with write access).\n\nThen we need to change this, no?\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n\nURL\tgit://git.postgresql.org/git/postgresql.git\n\thttps://git.postgresql.org/git/postgresql.git\n\tssh://git@git.postgresql.org/postgresql.git\n\nThe last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 May 2022 09:04:38 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Thu, May 12, 2022 at 09:04:38AM +0900, Tatsuo Ishii wrote:\n> Then we need to change this, no?\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n> \n> URL\tgit://git.postgresql.org/git/postgresql.git\n> \thttps://git.postgresql.org/git/postgresql.git\n> \tssh://git@git.postgresql.org/postgresql.git\n> \n> The last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n -------------------------------------------------\n\nThat is the one I use.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 20:59:26 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Wed, May 11, 2022 at 08:59:26PM -0400, Bruce Momjian wrote:\n> On Thu, May 12, 2022 at 09:04:38AM +0900, Tatsuo Ishii wrote:\n> > Then we need to change this, no?\n> > \n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n> > \n> > URL\tgit://git.postgresql.org/git/postgresql.git\n> > \thttps://git.postgresql.org/git/postgresql.git\n> > \tssh://git@git.postgresql.org/postgresql.git\n> > \n> > The last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n> -------------------------------------------------\n> \n> That is the one I use.\n\nI assume the URL list at:\n\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n\nis for non-committers.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 21:12:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Wed, 11 May 2022 10:40:00 +0100, Dave Page <dpage@pgadmin.org> wrote in \n> OK, so you have write access on the repo on git.postgresql.org, but I can't\n> find an SSH key for your account on the system. Can you check\n> https://www.postgresql.org/account/profile/ and make sure you've got the\n> correct SSH key in your profile? If you add one, it might take 10 minutes\n> or so to make its way to the git server.\n\nThanks for the inspection. I understand what the ssh server is facing.\n\nI had already filled the filed with an (I beilive correct) openssh\npublic key, but to make sure, I emptied the field \"SSH key\", waited\nfor 20 minutes, then added a fresh pubkey and waited for 20 minutes.\n\nI tried both git.postgresql.org and gitmaster and don't have better\nluck. The server still says \"I don't know you\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 May 2022 10:20:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "> Thanks for the inspection. I understand what the ssh server is facing.\n> \n> I had already filled the filed with an (I beilive correct) openssh\n> public key, but to make sure, I emptied the field \"SSH key\", waited\n> for 20 minutes, then added a fresh pubkey and waited for 20 minutes.\n> \n> I tried both git.postgresql.org and gitmaster and don't have better\n> luck. The server still says \"I don't know you\".\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\nLast year we faced a similar problem, namely, a new committer for\npgpool.git could not access the git repository (Permission denied\n(publickey)). Magnus kindly advised following and it worked. Hope this\nhelps.\n\n> 1. Log into the git server on https://git.postgresql.org/adm/. It\n> should be an automatic log in and show the repository.\n> 2. *then* go back to the main website and delete the ssh key\n> 3. Now add the ssh key again on the main website\n> 4. Wait 10-15 minutes and then it should work\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 May 2022 10:34:49 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Thu, May 12, 2022 at 10:34:49AM +0900, Tatsuo Ishii wrote:\n> Last year we faced a similar problem, namely, a new committer for\n> pgpool.git could not access the git repository (Permission denied\n> (publickey)). Magnus kindly advised following and it worked. Hope this\n> helps.\n> \n> > 1. Log into the git server on https://git.postgresql.org/adm/. It\n> > should be an automatic log in and show the repository.\n> > 2. *then* go back to the main website and delete the ssh key\n> > 3. Now add the ssh key again on the main website\n> > 4. Wait 10-15 minutes and then it should work\n\nI don't see any repositories listed for my login, so I wonder if\ngitmaster and pgpool are handled differently. When I changed my SSH key\nfor gitmaster recently, I had to phone someone to verify the change ---\nI could not do it via a website.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 11 May 2022 21:42:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "> On Thu, May 12, 2022 at 10:34:49AM +0900, Tatsuo Ishii wrote:\n>> Last year we faced a similar problem, namely, a new committer for\n>> pgpool.git could not access the git repository (Permission denied\n>> (publickey)). Magnus kindly advised following and it worked. Hope this\n>> helps.\n>> \n>> > 1. Log into the git server on https://git.postgresql.org/adm/. It\n>> > should be an automatic log in and show the repository.\n>> > 2. *then* go back to the main website and delete the ssh key\n>> > 3. Now add the ssh key again on the main website\n>> > 4. Wait 10-15 minutes and then it should work\n> \n> I don't see any repositories listed for my login, so I wonder if\n> gitmaster and pgpool are handled differently.\n\nI guess so too. I only see pgpool related repositories but\npostgres.git on https://git.postgresql.org/adm/. According to Magnus,\nthis is necessary to trigger replication of SSH key.\n\n> When I changed my SSH key\n> for gitmaster recently, I had to phone someone to verify the change ---\n> I could not do it via a website.\n\nThank you for the info. I will be careful when I want to change SSH\nkey for gitmaster next time.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 May 2022 10:59:04 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Wed, May 11, 2022 at 08:59:26PM -0400, Bruce Momjian wrote:\n>> On Thu, May 12, 2022 at 09:04:38AM +0900, Tatsuo Ishii wrote:\n>>> The last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n\n> I assume the URL list at:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n> is for non-committers.\n\nYeah, I agree with that. If we advertise the gitmaster address here,\nthe primary result will be that we get a lot of complaints from random\npeople complaining that they can't access it. A secondary result\nis likely to be an increase in attacks against that server.\n\nThe onboarding process for new committers should include explaining\nabout the separate master repo and how they can access it, but that\nis absolutely not something we should advertise widely.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 May 2022 22:40:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Thu, 12 May 2022 10:34:49 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> Last year we faced a similar problem, namely, a new committer for\n> pgpool.git could not access the git repository (Permission denied\n> (publickey)). Magnus kindly advised following and it worked. Hope this\n> helps.\n> \n> > 1. Log into the git server on https://git.postgresql.org/adm/. It\n> > should be an automatic log in and show the repository.\n> > 2. *then* go back to the main website and delete the ssh key\n> > 3. Now add the ssh key again on the main website\n> > 4. Wait 10-15 minutes and then it should work\n\nThank you for the info, but unfortunately it hasn't worked.\nI'm going to try a slightly different steps..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 May 2022 11:44:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": ">> I assume the URL list at:\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n>> is for non-committers.\n> \n> Yeah, I agree with that. If we advertise the gitmaster address here,\n> the primary result will be that we get a lot of complaints from random\n> people complaining that they can't access it. A secondary result\n> is likely to be an increase in attacks against that server.\n> \n> The onboarding process for new committers should include explaining\n> about the separate master repo and how they can access it, but that\n> is absolutely not something we should advertise widely.\n\nAgreed. Probably we should remove\nssh://git@git.postgresql.org/postgresql.git from the page.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 May 2022 12:05:14 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Thu, 12 May 2022 11:44:33 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 12 May 2022 10:34:49 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> > Last year we faced a similar problem, namely, a new committer for\n> > pgpool.git could not access the git repository (Permission denied\n> > (publickey)). Magnus kindly advised following and it worked. Hope this\n> > helps.\n> > \n> > > 1. Log into the git server on https://git.postgresql.org/adm/. It\n> > > should be an automatic log in and show the repository.\n> > > 2. *then* go back to the main website and delete the ssh key\n> > > 3. Now add the ssh key again on the main website\n> > > 4. Wait 10-15 minutes and then it should work\n> \n> Thank you for the info, but unfortunately it hasn't worked.\n> I'm going to try a slightly different steps..\n\nAnd finally I succeeded to clone from git.postgresql.org and to push a\ncommit.\n\nThank you all for the advices!\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 May 2022 13:54:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": ">> Thank you for the info, but unfortunately it hasn't worked.\n>> I'm going to try a slightly different steps..\n> \n> And finally I succeeded to clone from git.postgresql.org and to push a\n> commit.\n\nIs it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 May 2022 14:44:15 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> >> Thank you for the info, but unfortunately it hasn't worked.\n> >> I'm going to try a slightly different steps..\n> > \n> > And finally I succeeded to clone from git.postgresql.org and to push a\n> > commit.\n> \n> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n\ngit.postgresql.org. I still receive \"Permission denied\" from\ngitmaster.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 May 2022 15:11:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Thu, 12 May 2022 at 07:11, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp>\n> wrote in\n> > >> Thank you for the info, but unfortunately it hasn't worked.\n> > >> I'm going to try a slightly different steps..\n> > >\n> > > And finally I succeeded to clone from git.postgresql.org and to push a\n> > > commit.\n\n\n\\o/\n\n\n> >\n> > Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n>\n> git.postgresql.org. I still receive \"Permission denied\" from\n> gitmaster.\n\n\n\nYes, gitmaster is completely irrelevant here. It is *only* used for\nPostgreSQL itself, and only by PostgreSQL Committers.\n\nThe postgresql.git repo on git.postgresql.org is unique in that it is a\nmirror of the real repository on gitmaster, and doesn’t have any committers\nexcept for the account used to push commits from gitmaster. The third party\nbrowser software doesn’t know anything about that which is why it still\nshows the ssh:// URL despite it not being usable by anyone.\n\nIs there some reason you thought gitmaster was relevant here (some webpage\nfor example)? This is the third(?) someone has been confused by gitmaster\nrecently, something both Magnus and I have been surprised by.\n-- \n-- \nDave Page\nhttps://pgsnake.blogspot.com\n\nEDB Postgres\nhttps://www.enterprisedb.com\n\nOn Thu, 12 May 2022 at 07:11, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> >> Thank you for the info, but unfortunately it hasn't worked.\n> >> I'm going to try a slightly different steps..\n> > \n> > And finally I succeeded to clone from git.postgresql.org and to push a\n> > commit.\\o/\n> \n> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n\ngit.postgresql.org. I still receive \"Permission denied\" from\ngitmaster.Yes, gitmaster is completely irrelevant here. 
It is *only* used for PostgreSQL itself, and only by PostgreSQL Committers.The postgresql.git repo on git.postgresql.org is unique in that it is a mirror of the real repository on gitmaster, and doesn’t have any committers except for the account used to push commits from gitmaster. The third party browser software doesn’t know anything about that which is why it still shows the ssh:// URL despite it not being usable by anyone.Is there some reason you thought gitmaster was relevant here (some webpage for example)? This is the third(?) someone has been confused by gitmaster recently, something both Magnus and I have been surprised by.-- -- Dave Pagehttps://pgsnake.blogspot.comEDB Postgreshttps://www.enterprisedb.com",
"msg_date": "Thu, 12 May 2022 07:34:01 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "> At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n>> >> Thank you for the info, but unfortunately it hasn't worked.\n>> >> I'm going to try a slightly different steps..\n>> > \n>> > And finally I succeeded to clone from git.postgresql.org and to push a\n>> > commit.\n>> \n>> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n> \n> git.postgresql.org. I still receive \"Permission denied\" from\n> gitmaster.\n\nOk. I learned that only postgresql.git should be accessed from\ngitmaster.postgresql.org. All other repos should be accessed from\ngit.postgresql.org.\n\nBTW,\n> I'm going to try a slightly different steps..\n\nCan you please tell me What you actually did? I am afraid of facing\nsimilar problem if I want to add another committer to pgpool2 repo.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 12 May 2022 16:03:50 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Thu, 12 May 2022 at 08:03, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n\n> > At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <\n> ishii@sraoss.co.jp> wrote in\n> >> >> Thank you for the info, but unfortunately it hasn't worked.\n> >> >> I'm going to try a slightly different steps..\n> >> >\n> >> > And finally I succeeded to clone from git.postgresql.org and to push\n> a\n> >> > commit.\n> >>\n> >> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n> >\n> > git.postgresql.org. I still receive \"Permission denied\" from\n> > gitmaster.\n>\n> Ok. I learned that only postgresql.git should be accessed from\n> gitmaster.postgresql.org. All other repos should be accessed from\n> git.postgresql.org.\n\n\nThat is correct for PostgreSQL Committers such as yourself. Anyone else can\n*only* use git.postgresql.org\n\n\n>\n> BTW,\n> > I'm going to try a slightly different steps..\n>\n> Can you please tell me What you actually did? I am afraid of facing\n> similar problem if I want to add another committer to pgpool2 repo.\n>\n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n>\n-- \n-- \nDave Page\nhttps://pgsnake.blogspot.com\n\nEDB Postgres\nhttps://www.enterprisedb.com\n\nOn Thu, 12 May 2022 at 08:03, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:> At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n>> >> Thank you for the info, but unfortunately it hasn't worked.\n>> >> I'm going to try a slightly different steps..\n>> > \n>> > And finally I succeeded to clone from git.postgresql.org and to push a\n>> > commit.\n>> \n>> Is it git.postgresql.org, not gitmaster.postgresql.org? Interesting...\n> \n> git.postgresql.org. I still receive \"Permission denied\" from\n> gitmaster.\n\nOk. I learned that only postgresql.git should be accessed from\ngitmaster.postgresql.org. 
All other repos should be accessed from\ngit.postgresql.org.That is correct for PostgreSQL Committers such as yourself. Anyone else can *only* use git.postgresql.org\n\nBTW,\n> I'm going to try a slightly different steps..\n\nCan you please tell me What you actually did? I am afraid of facing\nsimilar problem if I want to add another committer to pgpool2 repo.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n-- -- Dave Pagehttps://pgsnake.blogspot.comEDB Postgreshttps://www.enterprisedb.com",
"msg_date": "Thu, 12 May 2022 08:16:55 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Thu, 12 May 2022 16:03:50 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> > At Thu, 12 May 2022 14:44:15 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> BTW,\n> > I'm going to try a slightly different steps..\n> \n> Can you please tell me What you actually did? I am afraid of facing\n> similar problem if I want to add another committer to pgpool2 repo.\n\n Cleared SSH key in user profile.\n+Reloaded the adm page.\n+Waited for 20 minutes.\n Filled in SSH key in user profile.\n Reloaded the adm page.\n Waited for 20 minutes.\n Run git clone and succeeded \\o/\n\nI'm not sure the additional steps gave substantial differences,\nthough. Most of the whole steps look like bibbid-babbid-boo to me in\nthe first place..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 May 2022 17:45:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "On Thu, May 12, 2022 at 01:54:57PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 12 May 2022 11:44:33 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Thu, 12 May 2022 10:34:49 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> > > Last year we faced a similar problem, namely, a new committer for\n> > > pgpool.git could not access the git repository (Permission denied\n> > > (publickey)). Magnus kindly advised following and it worked. Hope this\n> > > helps.\n> > > \n> > > > 1. Log into the git server on https://git.postgresql.org/adm/. It\n> > > > should be an automatic log in and show the repository.\n> > > > 2. *then* go back to the main website and delete the ssh key\n> > > > 3. Now add the ssh key again on the main website\n> > > > 4. Wait 10-15 minutes and then it should work\n> > \n> > Thank you for the info, but unfortunately it hasn't worked.\n> > I'm going to try a slightly different steps..\n> \n> And finally I succeeded to clone from git.postgresql.org and to push a\n> commit.\n\nSorry, but this has me confused. When I read this, I thought you were\npushing a 'pgsql' core server commit to gitmaster, but that would be\nimpossible for git.postgresql.org, so where are you pushing to? This\nmight be part of the confusion Dave was asking about.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 12 May 2022 10:25:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "At Thu, 12 May 2022 10:25:03 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Thu, May 12, 2022 at 01:54:57PM +0900, Kyotaro Horiguchi wrote:\n> > At Thu, 12 May 2022 11:44:33 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > At Thu, 12 May 2022 10:34:49 +0900 (JST), Tatsuo Ishii <ishii@sraoss.co.jp> wrote in \n> > > > Last year we faced a similar problem, namely, a new committer for\n> > > > pgpool.git could not access the git repository (Permission denied\n> > > > (publickey)). Magnus kindly advised following and it worked. Hope this\n> > > > helps.\n> > > > \n> > > > > 1. Log into the git server on https://git.postgresql.org/adm/. It\n> > > > > should be an automatic log in and show the repository.\n> > > > > 2. *then* go back to the main website and delete the ssh key\n> > > > > 3. Now add the ssh key again on the main website\n> > > > > 4. Wait 10-15 minutes and then it should work\n> > > \n> > > Thank you for the info, but unfortunately it hasn't worked.\n> > > I'm going to try a slightly different steps..\n> > \n> > And finally I succeeded to clone from git.postgresql.org and to push a\n> > commit.\n> \n> Sorry, but this has me confused. When I read this, I thought you were\n> pushing a 'pgsql' core server commit to gitmaster, but that would be\n> impossible for git.postgresql.org, so where are you pushing to? This\n> might be part of the confusion Dave was asking about.\n\nThe repo I mention here is pgtranslate. Since I didn't find a clear\ninstruction about how to push to the repos of other than core, after\nfailing with \"git.postgresql.org\", I tried \"gitmaster.postgresql.org\"\nfollowing the wiki page[1].\n\nI think Dave's first suggestion (use git.postgresql.org) had a point\nin that gitmaster is dedicated to core committers. 
But I got\nclearly understood that from the later conversatinos.\n\n\n[1] https://wiki.postgresql.org/wiki/Committing_with_Git\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 13 May 2022 09:37:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Wed, May 11, 2022 at 08:59:26PM -0400, Bruce Momjian wrote:\n> >> On Thu, May 12, 2022 at 09:04:38AM +0900, Tatsuo Ishii wrote:\n> >>> The last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n> \n> > I assume the URL list at:\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n> > is for non-committers.\n> \n> Yeah, I agree with that. If we advertise the gitmaster address here,\n> the primary result will be that we get a lot of complaints from random\n> people complaining that they can't access it. A secondary result\n> is likely to be an increase in attacks against that server.\n\nI don't think we could change it very easily without some ugly hacking\nof the tool that generates that page too, which is no good...\n\nWe might be able to get rid of the ssh:// URL there though... Will look\ninto that.\n\n> The onboarding process for new committers should include explaining\n> about the separate master repo and how they can access it, but that\n> is absolutely not something we should advertise widely.\n\nIt does.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 15 May 2022 11:56:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Bruce Momjian <bruce@momjian.us> writes:\n>> > On Wed, May 11, 2022 at 08:59:26PM -0400, Bruce Momjian wrote:\n>> >> On Thu, May 12, 2022 at 09:04:38AM +0900, Tatsuo Ishii wrote:\n>> >>> The last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n>> \n>> > I assume the URL list at:\n>> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n>> > is for non-committers.\n>> \n>> Yeah, I agree with that. If we advertise the gitmaster address here,\n>> the primary result will be that we get a lot of complaints from random\n>> people complaining that they can't access it. A secondary result\n>> is likely to be an increase in attacks against that server.\n> \n> I don't think we could change it very easily without some ugly hacking\n> of the tool that generates that page too, which is no good...\n> \n> We might be able to get rid of the ssh:// URL there though... Will look\n> into that.\n\nFor postgresql.git, I agree. But for other repositories, I do not agree.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 16 May 2022 08:46:10 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
},
{
"msg_contents": "Greetings,\n\n* Tatsuo Ishii (ishii@sraoss.co.jp) wrote:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> Bruce Momjian <bruce@momjian.us> writes:\n> >> > On Wed, May 11, 2022 at 08:59:26PM -0400, Bruce Momjian wrote:\n> >> >> On Thu, May 12, 2022 at 09:04:38AM +0900, Tatsuo Ishii wrote:\n> >> >>> The last line should be \"ssh://git@gitmaster.postgresql.org/postgresql.git\"?\n> >> \n> >> > I assume the URL list at:\n> >> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=summary\n> >> > is for non-committers.\n> >> \n> >> Yeah, I agree with that. If we advertise the gitmaster address here,\n> >> the primary result will be that we get a lot of complaints from random\n> >> people complaining that they can't access it. A secondary result\n> >> is likely to be an increase in attacks against that server.\n> > \n> > I don't think we could change it very easily without some ugly hacking\n> > of the tool that generates that page too, which is no good...\n> > \n> > We might be able to get rid of the ssh:// URL there though... Will look\n> > into that.\n> \n> For postgresql.git, I agree. But for other repositories, I do not agree.\n\nRight, I was suggesting it just for postgresql.git.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 May 2022 11:15:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: gitmaster access"
}
] |
[
{
"msg_contents": "Good day.\n\n I've found that the pg_temp schema alias is mentioned in the\ndescription of the search_path variable, but is missing from\nthe schemas documentation section.\n\n I think it would be good to have that there, as that section is\nmentioned as an extended reference for schemas.\n\n This patch adds a small paragraph about the temporary schemas\nto the schemas documentation section.",
"msg_date": "Wed, 11 May 2022 13:18:04 +0300",
"msg_from": "Ilya Anfimov <ilan@tzirechnoy.com>",
"msg_from_op": true,
"msg_subject": "To add pg_temp schema description to schemas documentation"
}
] |
[
{
"msg_contents": "New rmgr stuff looks interesting. I've had a detailed look through it\nand tried to think about how it might be used in practice.\n\nSpotted a minor comment that needs adjustment for new methods...\n[PATCH: rmgr_001.v1.patch]\n\nI notice rm_startup() and rm_cleanup() presume that this only works in\nrecovery. If recovery is \"not needed\", there is no way to run anything\nat all, which seems wrong because how do we know that? I would prefer\nit if rm_startup() and rm_cleanup() were executed in all cases. Only 4\nbuiltin index rmgrs have these anyway, and they are all quick, so I\nsuggest we run them always. This allows a greater range of startup\nbehavior for rmgrs.\n[PATCH: rmgr_002.v1.patch]\n\nIt occurs to me that any use of WAL presumes that Checkpoints exist\nand do something useful. However, the custom rmgr interface doesn't\nallow you to specify any actions on checkpoint, so ends up being\nlimited in scope. So I think we also need an rm_checkpoint() call -\nwhich would be a no-op for existing rmgrs.\n[PATCH: rmgr_003.v1.patch]\n\nThe above turns out to be fairly simple, but extends the API to\nsomething truly flexible.\n\nPlease let me know what you think?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 11 May 2022 15:24:51 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Comments on Custom RMGRs"
},
{
"msg_contents": "On Wed, 2022-05-11 at 15:24 +0100, Simon Riggs wrote:\n> [PATCH: rmgr_001.v1.patch]\n> \n> [PATCH: rmgr_002.v1.patch]\n\nThank you. Both of these look like good ideas, and I will commit them\nin a few days assuming that nobody else sees a problem.\n\n> It occurs to me that any use of WAL presumes that Checkpoints exist\n> and do something useful. However, the custom rmgr interface doesn't\n> allow you to specify any actions on checkpoint, so ends up being\n> limited in scope. So I think we also need an rm_checkpoint() call -\n> which would be a no-op for existing rmgrs.\n> [PATCH: rmgr_003.v1.patch]\n\nI also like this idea, but can you describe the intended use case? I\nlooked through CheckPointGuts() and I'm not sure what else a custom AM\nmight want to do. Maybe sync special files in a way that's not handled\nwith RegisterSyncRequest()?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 11 May 2022 09:39:48 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-11 09:39:48 -0700, Jeff Davis wrote:\n> On Wed, 2022-05-11 at 15:24 +0100, Simon Riggs wrote:\n> > [PATCH: rmgr_001.v1.patch]\n> > \n> > [PATCH: rmgr_002.v1.patch]\n> \n> Thank you. Both of these look like good ideas, and I will commit them\n> in a few days assuming that nobody else sees a problem.\n\nWhat exactly is the use case here? Without passing in information about\nwhether recovery will be performed etc, it's not at all clear how callbacks\ncould something useful?\n\nI don't think we should allocate a bunch of memory contexts to just free them\nimmediately after?\n\n\n> > It occurs to me that any use of WAL presumes that Checkpoints exist\n> > and do something useful. However, the custom rmgr interface doesn't\n> > allow you to specify any actions on checkpoint, so ends up being\n> > limited in scope. So I think we also need an rm_checkpoint() call -\n> > which would be a no-op for existing rmgrs.\n> > [PATCH: rmgr_003.v1.patch]\n> \n> I also like this idea, but can you describe the intended use case? I\n> looked through CheckPointGuts() and I'm not sure what else a custom AM\n> might want to do. Maybe sync special files in a way that's not handled\n> with RegisterSyncRequest()?\n\nI'm not happy with the idea of random code being executed in the middle of\nCheckPointGuts(), without any documentation of what is legal to do at that\npoint. To actually be useful we'd likely need multiple calls to such an rmgr\ncallback, with a parameter where in CheckPointGuts() we are. Right now the\nsequencing is explicit in CheckPointGuts(), but with the proposed callback,\nthat'd not be the case anymore.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 May 2022 20:40:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-05-11 09:39:48 -0700, Jeff Davis wrote:\n> > On Wed, 2022-05-11 at 15:24 +0100, Simon Riggs wrote:\n> > > [PATCH: rmgr_001.v1.patch]\n> > >\n> > > [PATCH: rmgr_002.v1.patch]\n> >\n> > Thank you. Both of these look like good ideas, and I will commit them\n> > in a few days assuming that nobody else sees a problem.\n>\n> What exactly is the use case here? Without passing in information about\n> whether recovery will be performed etc, it's not at all clear how callbacks\n> could something useful?\n\nSure, happy to do it that way.\n[PATCH: rmgr_002.v2.patch]\n\n> I don't think we should allocate a bunch of memory contexts to just free them\n> immediately after?\n\nDidn't seem a problem, but I've added code to use the flag requested above.\n\n> > > It occurs to me that any use of WAL presumes that Checkpoints exist\n> > > and do something useful. However, the custom rmgr interface doesn't\n> > > allow you to specify any actions on checkpoint, so ends up being\n> > > limited in scope. So I think we also need an rm_checkpoint() call -\n> > > which would be a no-op for existing rmgrs.\n> > > [PATCH: rmgr_003.v1.patch]\n> >\n> > I also like this idea, but can you describe the intended use case? I\n> > looked through CheckPointGuts() and I'm not sure what else a custom AM\n> > might want to do. 
Maybe sync special files in a way that's not handled\n> > with RegisterSyncRequest()?\n>\n> I'm not happy with the idea of random code being executed in the middle of\n> CheckPointGuts(), without any documentation of what is legal to do at that\n> point.\n\nThe \"I'm not happy..\" ship has already sailed with pluggable rmgrs.\n\nCheckpoints exist for one purpose - as the starting place for recovery.\n\nWhy would we allow pluggable recovery without *also* allowing\npluggable checkpoints?\n\n>To actually be useful we'd likely need multiple calls to such an rmgr\n> callback, with a parameter where in CheckPointGuts() we are. Right now the\n> sequencing is explicit in CheckPointGuts(), but with the proposed callback,\n> that'd not be the case anymore.\n\nIt is useful without the extra complexity you mention.\n\nI see multiple uses for the rm_checkpoint() point proposed and I've\nbeen asked multiple times for a checkpoint hook. Any rmgr that\nservices crash recovery for a non-smgr based storage system would need\nthis because the current checkpoint code only handles flushing to disk\nfor smgr-based approaches. That is orthogonal to other code during\ncheckpoint, so it stands alone quite well.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 12 May 2022 22:26:51 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-12 22:26:51 +0100, Simon Riggs wrote:\n> On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de> wrote:\n> > I'm not happy with the idea of random code being executed in the middle of\n> > CheckPointGuts(), without any documentation of what is legal to do at that\n> > point.\n> \n> The \"I'm not happy..\" ship has already sailed with pluggable rmgrs.\n\nI don't agree. The ordering within a checkpoint is a lot more fragile than\nwhat you do in an individual redo routine.\n\n\n> Checkpoints exist for one purpose - as the starting place for recovery.\n> \n> Why would we allow pluggable recovery without *also* allowing\n> pluggable checkpoints?\n\nBecause one can do a lot of stuff with just pluggable WAL records, without\nintegrating into checkpoints?\n\nNote that I'm *not* against making checkpoint extensible - I just think it\nneeds a good bit of design work around when the hook is called etc.\n\n\nI definitely think it's too late in the cycle to add checkpoint extensibility\nnow.\n\n\n> > To actually be useful we'd likely need multiple calls to such an rmgr\n> > callback, with a parameter where in CheckPointGuts() we are. Right now the\n> > sequencing is explicit in CheckPointGuts(), but with the proposed callback,\n> > that'd not be the case anymore.\n> \n> It is useful without the extra complexity you mention.\n\nShrug. The documentation work definitely is needed. Perhaps we can get away\nwithout multiple callbacks within a checkpoint, I think it'll become more\napparent when writing information about the precise point in time the\ncheckpoint callback is called.\n\n\n> I see multiple uses for the rm_checkpoint() point proposed and I've\n> been asked multiple times for a checkpoint hook. Any rmgr that\n> services crash recovery for a non-smgr based storage system would need\n> this because the current checkpoint code only handles flushing to disk\n> for smgr-based approaches. 
That is orthogonal to other code during\n> checkpoint, so it stands alone quite well.\n\nFWIW, for that there are much bigger problems than checkpoint\nextensibility. Most importantly there's currently no good way to integrate\nrelation creation / drop with the commit / abort infrastructure...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 16:42:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Thu, 2022-05-12 at 22:26 +0100, Simon Riggs wrote:\n> I see multiple uses for the rm_checkpoint() point proposed and I've\n> been asked multiple times for a checkpoint hook.\n\nCan you elaborate and/or link to a discussion?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 12 May 2022 21:13:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, 13 May 2022 at 05:13, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2022-05-12 at 22:26 +0100, Simon Riggs wrote:\n> > I see multiple uses for the rm_checkpoint() point proposed and I've\n> > been asked multiple times for a checkpoint hook.\n>\n> Can you elaborate and/or link to a discussion?\n\nThose were conversations away from Hackers, but I'm happy to share.\n\nThe first was a discussion about a data structure needed by BDR about\n4 years ago. In the absence of a pluggable checkpoint, the solution\nwas forced to use a normal table, which wasn't very satisfactory.\n\nThe second was a more recent conversation with Mike Stonebraker, at\nthe end of 2021.. He was very keen to remove the buffer manager\nentirely, which requires that we have a new smgr, which then needs new\ncode to allow it to be written to disk at checkpoint time, which then\nrequires some kind of pluggable code at checkpoint time. (Mike was\nalso keen to remove WAL, but that's another story entirely!).\n\nThe last use case was unlogged indexes, which need to be read from\ndisk at startup or rebuilt after crash, which requires RmgrStartup to\nwork both with and without InRedo=true.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 13 May 2022 13:31:13 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, 13 May 2022 at 00:42, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-05-12 22:26:51 +0100, Simon Riggs wrote:\n> > On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de> wrote:\n> > > I'm not happy with the idea of random code being executed in the middle of\n> > > CheckPointGuts(), without any documentation of what is legal to do at that\n> > > point.\n> >\n> > The \"I'm not happy..\" ship has already sailed with pluggable rmgrs.\n>\n> I don't agree. The ordering within a checkpoint is a lot more fragile than\n> what you do in an individual redo routine.\n\nExample?\n\n\n> > Checkpoints exist for one purpose - as the starting place for recovery.\n> >\n> > Why would we allow pluggable recovery without *also* allowing\n> > pluggable checkpoints?\n>\n> Because one can do a lot of stuff with just pluggable WAL records, without\n> integrating into checkpoints?\n>\n> Note that I'm *not* against making checkpoint extensible - I just think it\n> needs a good bit of design work around when the hook is called etc.\n\nWhen was any such work done previously for any other hook?? That isn't needed.\n\nCheckpoints aren't complete until all data structures have\ncheckpointed, so there are no problems from a partial checkpoint being\nwritten.\n\nAs a result, the order of actions in CheckpointGuts() is mostly\nindependent of each other. The SLRUs are all independent of each\nother, as is CheckPointBuffers().\n\nThe use cases I'm trying to support aren't tricksy modifications of\nexisting code, they are just entirely new data structures which are\ncompletely independent of other Postgres objects.\n\n\n> I definitely think it's too late in the cycle to add checkpoint extensibility\n> now.\n>\n>\n> > > To actually be useful we'd likely need multiple calls to such an rmgr\n> > > callback, with a parameter where in CheckPointGuts() we are. 
Right now the\n> > > sequencing is explicit in CheckPointGuts(), but with the proposed callback,\n> > > that'd not be the case anymore.\n> >\n> > It is useful without the extra complexity you mention.\n>\n> Shrug. The documentation work definitely is needed. Perhaps we can get away\n> without multiple callbacks within a checkpoint, I think it'll become more\n> apparent when writing information about the precise point in time the\n> checkpoint callback is called.\n\nYou seem to be thinking in terms of modifying the existing actions in\nCheckpointGuts(). I don't care about that. Anybody that wishes to do\nthat can work out the details of their actions.\n\nThere is nothing to document, other than \"don't do things that won't\nwork\". How can anyone enumerate all the things that wouldn't work??\n\nThere is no list of caveats for any other hook. Why is it needed here?\n\n> > I see multiple uses for the rm_checkpoint() point proposed and I've\n> > been asked multiple times for a checkpoint hook. Any rmgr that\n> > services crash recovery for a non-smgr based storage system would need\n> > this because the current checkpoint code only handles flushing to disk\n> > for smgr-based approaches. That is orthogonal to other code during\n> > checkpoint, so it stands alone quite well.\n>\n> FWIW, for that there are much bigger problems than checkpoint\n> extensibility. Most importantly there's currently no good way to integrate\n> relation creation / drop with the commit / abort infrastructure...\n\nOne at a time...\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 13 May 2022 13:46:58 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, 2022-05-13 at 13:31 +0100, Simon Riggs wrote:\n> The first was a discussion about a data structure needed by BDR about\n> 4 years ago. In the absence of a pluggable checkpoint, the solution\n> was forced to use a normal table, which wasn't very satisfactory.\n\nI'm interested to hear more about this case. Are you developing it into\na full table AM? In my experience with columnar, there's still a long\ntail of things I wish I had in the backend to better support complete\ntable AMs.\n\n> The second was a more recent conversation with Mike Stonebraker, at\n> the end of 2021.. He was very keen to remove the buffer manager\n> entirely, which requires that we have a new smgr, which then needs\n> new\n> code to allow it to be written to disk at checkpoint time, which then\n> requires some kind of pluggable code at checkpoint time. (Mike was\n> also keen to remove WAL, but that's another story entirely!).\n\nI'm guessing that would be more of an experimental/ambitious project,\nand based on modified postgres anyway.\n\n> The last use case was unlogged indexes, which need to be read from\n> disk at startup or rebuilt after crash, which requires RmgrStartup to\n> work both with and without InRedo=true.\n\nThat sounds like a core feature, in which case we can just refactor\nthat for v16. It might be a nice cleanup for unlogged tables, too. I\ndon't think your 002-v2 patch is particularly risky, but any reluctance\nat all probably pushes it to v16 given that it's so late in the cycle.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 13 May 2022 08:46:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, May 13, 2022 at 8:47 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> > Note that I'm *not* against making checkpoint extensible - I just think it\n> > needs a good bit of design work around when the hook is called etc.\n>\n> When was any such work done previously for any other hook?? That isn't needed.\n\nI think almost every proposal to add a hook results in some discussion\nabout how usable the hook will be and whether it's being put in the\ncorrect place and called with the correct arguments.\n\nI think that's a good thing, too. Otherwise the code would be\ncluttered with a bunch of hooks that seemed to someone like a good\nidea at the time but are actually just a maintenance headache.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 May 2022 12:25:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "Hi,\n\nThe checkpoint hook looks very useful, especially for extensions that have\ntheir own storage, like pg_stat_statements.\nFor example, we can keep work data in shared memory and save it only during\ncheckpoints.\nWhen recovering, we need to read all the data from the disk and then repeat\nthe latest changes from the WAL.\n\nOn Mon, Feb 26, 2024 at 2:42 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n> On Fri, 13 May 2022 at 00:42, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-05-12 22:26:51 +0100, Simon Riggs wrote:\n> > > On Thu, 12 May 2022 at 04:40, Andres Freund <andres@anarazel.de>\nwrote:\n> > > > I'm not happy with the idea of random code being executed in the\nmiddle of\n> > > > CheckPointGuts(), without any documentation of what is legal to do\nat that\n> > > > point.\n> > >\n> > > The \"I'm not happy..\" ship has already sailed with pluggable rmgrs.\n> >\n> > I don't agree. The ordering within a checkpoint is a lot more fragile\nthan\n> > what you do in an individual redo routine.\n>\n> Example?\n>\n>\n> > > Checkpoints exist for one purpose - as the starting place for\nrecovery.\n> > >\n> > > Why would we allow pluggable recovery without *also* allowing\n> > > pluggable checkpoints?\n> >\n> > Because one can do a lot of stuff with just pluggable WAL records,\nwithout\n> > integrating into checkpoints?\n> >\n> > Note that I'm *not* against making checkpoint extensible - I just think\nit\n> > needs a good bit of design work around when the hook is called etc.\n>\n> When was any such work done previously for any other hook?? That isn't\nneeded.\n>\n> Checkpoints aren't complete until all data structures have\n> checkpointed, so there are no problems from a partial checkpoint being\n> written.\n>\n> As a result, the order of actions in CheckpointGuts() is mostly\n> independent of each other. 
The SLRUs are all independent of each\n> other, as is CheckPointBuffers().\n>\n> The use cases I'm trying to support aren't tricksy modifications of\n> existing code, they are just entirely new data structures which are\n> completely independent of other Postgres objects.\n>\n>\n> > I definitely think it's too late in the cycle to add checkpoint\nextensibility\n> > now.\n> >\n> >\n> > > > To actually be useful we'd likely need multiple calls to such an\nrmgr\n> > > > callback, with a parameter where in CheckPointGuts() we are. Right\nnow the\n> > > > sequencing is explicit in CheckPointGuts(), but with the proposed\ncallback,\n> > > > that'd not be the case anymore.\n> > >\n> > > It is useful without the extra complexity you mention.\n> >\n> > Shrug. The documentation work definitely is needed. Perhaps we can get\naway\n> > without multiple callbacks within a checkpoint, I think it'll become\nmore\n> > apparent when writing information about the precise point in time the\n> > checkpoint callback is called.\n>\n> You seem to be thinking in terms of modifying the existing actions in\n> CheckpointGuts(). I don't care about that. Anybody that wishes to do\n> that can work out the details of their actions.\n>\n> There is nothing to document, other than \"don't do things that won't\n> work\". How can anyone enumerate all the things that wouldn't work??\n>\n> There is no list of caveats for any other hook. 
Why is it needed here?\n\nThere are easily reproducible issues where rm_checkpoint() throws an ERROR.\nWhen it occurs at the end-of-recovery checkpoint, the server fails to start\nwith a message like this:\nERROR: Test error\nFATAL: checkpoint request failed\nHINT: Consult recent messages in the server log for details.\n\nEven if we remove the broken extension from shared_preload_libraries, we\nget the following message in the server log:\nFATAL: resource manager with ID 128 not registered\nHINT: Include the extension module that implements this resource manager\nin shared_preload_libraries.\n\nIn both cases, with or without the extension in shared_preload_libraries,\nthe server cannot start.\n\nThis seems like a programmer's problem, but what should the user do after\nreceiving such messages?\n\nMaybe it would be safer to use something like after_checkpoint_hook, which\nwould be called after the checkpoint is completed?\nThis is enough for some cases when we only need to save shared memory to\ndisk.\n\n--\nRegards,\nDaniil Anisimov\nPostgres Professional: http://postgrespro.com",
"msg_date": "Mon, 26 Feb 2024 23:29:26 +0700",
"msg_from": "Danil Anisimow <anisimow.d@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Mon, 2024-02-26 at 23:29 +0700, Danil Anisimow wrote:\n> Hi,\n> \n> The checkpoint hook looks very useful, especially for extensions that\n> have their own storage, like pg_stat_statements.\n> For example, we can keep work data in shared memory and save it only\n> during checkpoints.\n> When recovering, we need to read all the data from the disk and then\n> repeat the latest changes from the WAL.\n\nLet's pick this discussion back up, then. Where should the hook go?\nDoes it need to be broken into phases like resource owners? What\nguidance can we provide to extension authors to use it correctly?\n\nSimon's right that these things don't need to be 100% answered for\nevery hook we add; but I agree with Andres and Robert that this could\nbenefit from some more discussion about the details.\n\nThe proposal calls the hook right after CheckPointPredicate() and\nbefore CheckPointBuffers(). Is that the right place for the use case\nyou have in mind with pg_stat_statements?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 26 Feb 2024 11:55:58 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Tue, Feb 27, 2024 at 2:56 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Let's pick this discussion back up, then. Where should the hook go?\n> Does it need to be broken into phases like resource owners? What\n> guidance can we provide to extension authors to use it correctly?\n>\n> Simon's right that these things don't need to be 100% answered for\n> every hook we add; but I agree with Andres and Robert that this could\n> benefit from some more discussion about the details.\n>\n> The proposal calls the hook right after CheckPointPredicate() and\n> before CheckPointBuffers(). Is that the right place for the use case\n> you have in mind with pg_stat_statements?\n\nHello!\n\nAnswering your questions might take some time as I want to write a sample\npatch for pg_stat_statements and make some tests.\nWhat do you think about putting the patch to commitfest as it closing in a\nfew hours?\n\n--\nRegards,\nDaniil Anisimov\nPostgres Professional: http://postgrespro.com\n\nOn Tue, Feb 27, 2024 at 2:56 AM Jeff Davis <pgsql@j-davis.com> wrote:> Let's pick this discussion back up, then. Where should the hook go?> Does it need to be broken into phases like resource owners? What> guidance can we provide to extension authors to use it correctly?>> Simon's right that these things don't need to be 100% answered for> every hook we add; but I agree with Andres and Robert that this could> benefit from some more discussion about the details.>> The proposal calls the hook right after CheckPointPredicate() and> before CheckPointBuffers(). Is that the right place for the use case> you have in mind with pg_stat_statements?Hello!Answering your questions might take some time as I want to write a sample patch for pg_stat_statements and make some tests.What do you think about putting the patch to commitfest as it closing in a few hours?--Regards,Daniil AnisimovPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 29 Feb 2024 21:47:57 +0700",
"msg_from": "Danil Anisimow <anisimow.d@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Thu, 2024-02-29 at 21:47 +0700, Danil Anisimow wrote:\n> Answering your questions might take some time as I want to write a\n> sample patch for pg_stat_statements and make some tests.\n> What do you think about putting the patch to commitfest as it closing\n> in a few hours?\n\nAdded to March CF.\n\nI don't have an immediate use case in mind for this, so please drive\nthat part of the discussion. I can't promise this for 17, but if the\npatch is simple enough and a quick consensus develops, then it's\npossible.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 29 Feb 2024 11:06:22 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "\n\n> On 29 Feb 2024, at 19:47, Danil Anisimow <anisimow.d@gmail.com> wrote:\n> \n> Answering your questions might take some time as I want to write a sample patch for pg_stat_statements and make some tests.\n> What do you think about putting the patch to commitfest as it closing in a few hours?\n\nI’ve switched the patch to “Waiting on Author” to indicate that currently patch is not available yet. Please, flip it back when it’s available for review.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 4 Mar 2024 13:31:37 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, Mar 1, 2024 at 2:06 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Added to March CF.\n>\n> I don't have an immediate use case in mind for this, so please drive\n> that part of the discussion. I can't promise this for 17, but if the\n> patch is simple enough and a quick consensus develops, then it's\n> possible.\n\n[pgss_001.v1.patch] adds a custom resource manager to the\npg_stat_statements extension. The proposed patch is not a complete solution\nfor pgss and may not work correctly with replication.\n\nThe 020_crash.pl test demonstrates server interruption by killing a\nbackend. Without rm_checkpoint hook, the server restores pgss stats only\nafter last CHECKPOINT. Data added to WAL before the checkpoint is not\nrestored.\n\nThe rm_checkpoint hook allows saving shared memory data to disk at each\ncheckpoint. However, for pg_stat_statements, it matters when the checkpoint\noccurred. When the server shuts down, pgss deletes the temporary file of\nquery texts. In other cases, this is unacceptable.\nTo provide this capability, a flags parameter was added to the\nrm_checkpoint hook. The changes are presented in [rmgr_003.v2.patch].\n\n--\nRegards,\nDaniil Anisimov\nPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 21 Mar 2024 19:47:10 +0700",
"msg_from": "Danil Anisimow <anisimow.d@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Thu, 2024-03-21 at 19:47 +0700, Danil Anisimow wrote:\n> [pgss_001.v1.patch] adds a custom resource manager to the\n> pg_stat_statements extension.\n\nDid you consider moving the logic for loading the initial contents from\ndisk from pgss_shmem_startup to .rmgr_startup?\n\n> The rm_checkpoint hook allows saving shared memory data to disk at\n> each checkpoint. However, for pg_stat_statements, it matters when the\n> checkpoint occurred. When the server shuts down, pgss deletes the\n> temporary file of query texts. In other cases, this is unacceptable.\n> To provide this capability, a flags parameter was added to the\n> rm_checkpoint hook. The changes are presented in [rmgr_003.v2.patch].\n\nOverall this seems fairly reasonable to me. I think this will work for\nsimilar extensions, where the data being stored is independent from the\nbuffers.\n\nMy biggest concern is that it might not be quite right for a table AM\nthat has complex state that needs action to be taken at a slightly\ndifferent time, e.g. right after CheckPointBuffers().\n\nThen again, the rmgr is a low-level API, and any extension using it\nshould be prepared to adapt to changes. If it works for pgss, then we\nknow it works for at least one thing, and we can always improve it\nlater. For instance, we might call the hook several times and pass it a\n\"phase\" argument.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 12:02:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Thu, 2024-03-21 at 19:47 +0700, Danil Anisimow wrote:\n> The proposed patch is not a complete solution for pgss and may not\n> work correctly with replication.\n\nAlso, what is the desired behavior during replication? Should queries\non the primary be represented in pgss on the replica? If the answer is\nyes, should they be differentiated somehow so that you can know where\nthe slow queries are running?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 12:07:38 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 2:02 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2024-03-21 at 19:47 +0700, Danil Anisimow wrote:\n> > [pgss_001.v1.patch] adds a custom resource manager to the\n> > pg_stat_statements extension.\n>\n> Did you consider moving the logic for loading the initial contents from\n> disk from pgss_shmem_startup to .rmgr_startup?\n\nI tried it, but .rmgr_startup is not called if the system was shut down\ncleanly.\n\n> My biggest concern is that it might not be quite right for a table AM\n> that has complex state that needs action to be taken at a slightly\n> different time, e.g. right after CheckPointBuffers().\n\n> Then again, the rmgr is a low-level API, and any extension using it\n> should be prepared to adapt to changes. If it works for pgss, then we\n> know it works for at least one thing, and we can always improve it\n> later. For instance, we might call the hook several times and pass it a\n> \"phase\" argument.\n\nIn [rmgr_003.v3.patch] I added a phase argument to RmgrCheckpoint().\nCurrently it is only called in two places: before and after\nCheckPointBuffers().\n\n--\nRegards,\nDaniil Anisimov\nPostgres Professional: http://postgrespro.com",
"msg_date": "Fri, 29 Mar 2024 18:20:11 +0700",
"msg_from": "Danil Anisimow <anisimow.d@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, 2024-03-29 at 18:20 +0700, Danil Anisimow wrote:\n> \n> In [rmgr_003.v3.patch] I added a phase argument to RmgrCheckpoint().\n> Currently it is only called in two places: before and after\n> CheckPointBuffers().\n\nI am fine with this.\n\nYou've moved the discussion forward in two ways:\n\n 1. Changes to pg_stat_statements to actually use the API; and\n 2. The hook is called at multiple points.\n\nThose at least partially address the concerns raised by Andres and\nRobert. But given that there was pushback from multiple people on the\nfeature, I'd like to hear from at least one of them. It's very late in\nthe cycle so I'm not sure we'll get more feedback in time, though.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 29 Mar 2024 10:09:38 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 1:09 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I am fine with this.\n>\n> You've moved the discussion forward in two ways:\n>\n> 1. Changes to pg_stat_statements to actually use the API; and\n> 2. The hook is called at multiple points.\n>\n> Those at least partially address the concerns raised by Andres and\n> Robert. But given that there was pushback from multiple people on the\n> feature, I'd like to hear from at least one of them. It's very late in\n> the cycle so I'm not sure we'll get more feedback in time, though.\n\nIn my seemingly-neverending pass through the July CommitFest, I\nreached this patch. My comment is: it's possible that\nrmgr_003.v3.patch is enough to be useful, but does anyone in the world\nthink they know that for a fact?\n\nI mean, pgss_001.v1.patch purports to demonstrate that it can be used,\nbut that's based on rmgr_003.v2.patch, not the v3 patch, and the\nemails seem to indicate that it may not actually work. I also think,\nlooking at it, that it looks much more like a POC than something we'd\nconsider ready for commit. It also seems very unclear that we'd want\npg_stat_statements to behave this way, and indeed \"this way\" isn't\nreally spelled out anywhere.\n\nI think it would be nice if we had an example that uses the proposed\nhook that we could actually commit. Maybe that's asking too much,\nthough. I think the minimum thing we need is a compelling rationale\nfor why this particular hook design is going to be good enough. That\ncould be demonstrated by means of (1) a well-commented example that\naccomplishes some understandable goal and/or (2) a detailed\ndescription of how a non-core table AM or index AM is expected to be\nable to make use of this. 
Bonus points if the person providing that\nrationale can say credibly that they've actually implemented this and\nit works great with 100TB of production data.\n\nThe problem here is not only that we don't want to commit a hook that\ndoes nothing useful. We also don't want to commit a hook that works\nwonderfully for someone but we have no idea why. If we do that, then\nwe don't know whether it's OK to modify the hook in the future as the\ncode evolves, or more to the point, which kinds of modifications will\nbe acceptable. And also, the next person who wants to use it is likely\nto have to figure out all on their own how to do so, instead of being\nable to refer to comments or documentation or the commit message or at\nleast a mailing list post.\n\nMy basic position is not that this patch is a bad idea, but that it\nisn't really finished. The idea is probably a pretty good one, but\nwhether this is a reasonable implementation of the idea doesn't seem\nclear, at least not to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 14:56:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, 2024-05-17 at 14:56 -0400, Robert Haas wrote:\n> (2) a detailed\n> description of how a non-core table AM or index AM is expected to be\n> able to make use of this. Bonus points if the person providing that\n> rationale can say credibly that they've actually implemented this and\n> it works great with 100TB of production data.\n\nThat's a chicken-and-egg problem and we should be careful about setting\nthe bar too high for table AM improvements. Ultimately, AM authors will\nbenefit more from a steady stream of improvements that sometimes miss\nthe mark than complete stagnation, as long as we use reasonable\njudgement.\n\nThere aren't a lot of table AMs, and to create a good one you need a\nlot of internals knowledge. If it's an important AM, the developers are\nsurely going to try it out on mainline occasionally, and expect API\nbreaks. If the API breaks for them in some fundamental way, they can\ncomplain and we still have time to fix it.\n\n> The problem here is not only that we don't want to commit a hook that\n> does nothing useful. We also don't want to commit a hook that works\n> wonderfully for someone but we have no idea why. 
If we do that, then\n> we don't know whether it's OK to modify the hook in the future as the\n> code evolves, or more to the point, which kinds of modifications will\n> be acceptable.\n\nWe have to have some kind of understanding between us and AM authors\nthat they need to participate in discussions when using these APIs, try\nchanges during development, be adaptable when they change from release\nto release, and come back and tell us when something is wrong.\n\n> And also, the next person who wants to use it is likely\n> to have to figure out all on their own how to do so, instead of being\n> able to refer to comments or documentation or the commit message or\n> at\n> least a mailing list post.\n\nObviously it would be better to have a nice example table AM in\n/contrib, different enough from heap, but nobody has done that yet. You\ncould argue that we never should have exposed the API without something\nlike this in the first place, but that's also a big ask and we'd\nprobably still not have it.\n\n\nRegarding this particular change: the checkpointing hook seems more\nlike a table AM feature, so I agree with you that we should have a good\nidea how a real table AM might use this, rather than only\npg_stat_statements.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 17 May 2024 13:20:19 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, May 17, 2024 at 4:20 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Regarding this particular change: the checkpointing hook seems more\n> like a table AM feature, so I agree with you that we should have a good\n> idea how a real table AM might use this, rather than only\n> pg_stat_statements.\n\nI would even be OK with a pg_stat_statements example that is fully\nworking and fully explained. I just don't want to have no example at\nall. The original proposal has been changed twice because of\ncomplaints that the hook wasn't quite useful enough, but I think that\nonly proves that v3 is closer to being useful than v1. If v1 is 40% of\nthe way to useful and v3 is 120% of the way to useful, wonderful! But\nif v1 is 20% of the way to being useful and v3 is 60% of the way to\nbeing useful, it's not time to commit anything yet. I don't know which\nis the case, and I think if someone wants this to be committed, they\nneed to explain clearly why it's the first and not the second.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 May 2024 16:25:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri, May 17, 2024 at 04:25:15PM -0400, Robert Haas wrote:\n> On Fri, May 17, 2024 at 4:20 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> Regarding this particular change: the checkpointing hook seems more\n>> like a table AM feature, so I agree with you that we should have a good\n>> idea how a real table AM might use this, rather than only\n>> pg_stat_statements.\n> \n> I would even be OK with a pg_stat_statements example that is fully\n> working and fully explained. I just don't want to have no example at\n> all. The original proposal has been changed twice because of\n> complaints that the hook wasn't quite useful enough, but I think that\n> only proves that v3 is closer to being useful than v1. If v1 is 40% of\n> the way to useful and v3 is 120% of the way to useful, wonderful! But\n> if v1 is 20% of the way to being useful and v3 is 60% of the way to\n> being useful, it's not time to commit anything yet. I don't know which\n> is the case, and I think if someone wants this to be committed, they\n> need to explain clearly why it's the first and not the second.\n\nPlease note that I've been studying ways to have pg_stat_statements\nbeing plugged in directly with the shared pgstat APIs to get it backed\nby a dshash to give more flexibility and scaling, giving a way for\nextensions to register their own stats kind. In this case, the flush\nof the stats would be controlled with a callback in the stats\nregistered by the extensions, conflicting with what's proposed here.\npg_stat_statements is all about stats, at the end. I don't want this\nargument to act as a barrier if a checkpoint hook is an accepted\nconsensus here, but a checkpoint hook used for this code path is not\nthe most intuitive solution I can think of in the long-term.\n--\nMichael",
"msg_date": "Mon, 27 May 2024 11:20:52 -0700",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Fri May 17, 2024 at 3:20 PM CDT, Jeff Davis wrote:\n> ...\n>\n> Obviously it would be better to have a nice example table AM in\n> /contrib, different enough from heap, but nobody has done that yet. You\n> could argue that we never should have exposed the API without something\n> like this in the first place, but that's also a big ask and we'd\n> probably still not have it.\n\nNot sure how useful it would be as an example, but MariaDB has \na blackhole storage engine[0], which has helped serve as a guide for me \npreviously.\n\n[0]: https://mariadb.com/kb/en/blackhole/\n\n-- \nTristan Partin\nhttps://tristan.partin.io\n\n\n",
"msg_date": "Mon, 27 May 2024 16:32:46 -0500",
"msg_from": "\"Tristan Partin\" <tristan@partin.io>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On 27/05/2024 21:20, Michael Paquier wrote:\n> On Fri, May 17, 2024 at 04:25:15PM -0400, Robert Haas wrote:\n>> On Fri, May 17, 2024 at 4:20 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>>> Regarding this particular change: the checkpointing hook seems more\n>>> like a table AM feature, so I agree with you that we should have a good\n>>> idea how a real table AM might use this, rather than only\n>>> pg_stat_statements.\n>>\n>> I would even be OK with a pg_stat_statements example that is fully\n>> working and fully explained. I just don't want to have no example at\n>> all. The original proposal has been changed twice because of\n>> complaints that the hook wasn't quite useful enough, but I think that\n>> only proves that v3 is closer to being useful than v1. If v1 is 40% of\n>> the way to useful and v3 is 120% of the way to useful, wonderful! But\n>> if v1 is 20% of the way to being useful and v3 is 60% of the way to\n>> being useful, it's not time to commit anything yet. I don't know which\n>> is the case, and I think if someone wants this to be committed, they\n>> need to explain clearly why it's the first and not the second.\n> \n> Please note that I've been studying ways to have pg_stat_statements\n> being plugged in directly with the shared pgstat APIs to get it backed\n> by a dshash to give more flexibility and scaling, giving a way for\n> extensions to register their own stats kind. In this case, the flush\n> of the stats would be controlled with a callback in the stats\n> registered by the extensions, conflicting with what's proposed here.\n> pg_stat_statements is all about stats, at the end. 
I don't want this\n> argument to act as a barrier if a checkpoint hook is an accepted\n> consensus here, but a checkpoint hook used for this code path is not\n> the most intuitive solution I can think of in the long-term.\n\nOn the topic of concrete uses for this API: We have a bunch of built-in \nresource managers that could be refactored to use this API. \nCheckPointGuts currently looks like this:\n\n> /*\n> * Flush all data in shared memory to disk, and fsync\n> *\n> * This is the common code shared between regular checkpoints and\n> * recovery restartpoints.\n> */\n> static void\n> CheckPointGuts(XLogRecPtr checkPointRedo, int flags)\n> {\n> \tCheckPointRelationMap();\n> \tCheckPointReplicationSlots(flags & CHECKPOINT_IS_SHUTDOWN);\n> \tCheckPointSnapBuild();\n> \tCheckPointLogicalRewriteHeap();\n> \tCheckPointReplicationOrigin();\n> \n> \t/* Write out all dirty data in SLRUs and the main buffer pool */\n> \tTRACE_POSTGRESQL_BUFFER_CHECKPOINT_START(flags);\n> \tCheckpointStats.ckpt_write_t = GetCurrentTimestamp();\n> \tCheckPointCLOG();\n> \tCheckPointCommitTs();\n> \tCheckPointSUBTRANS();\n> \tCheckPointMultiXact();\n> \tCheckPointPredicate();\n> \n> \tRmgrCheckpoint(flags, RMGR_CHECKPOINT_BEFORE_BUFFERS);\n> \n> \tCheckPointBuffers(flags);\n> \n> \tRmgrCheckpoint(flags, RMGR_CHECKPOINT_AFTER_BUFFERS);\n> \n> \t/* Perform all queued up fsyncs */\n> \tTRACE_POSTGRESQL_BUFFER_CHECKPOINT_SYNC_START();\n> \tCheckpointStats.ckpt_sync_t = GetCurrentTimestamp();\n> \tProcessSyncRequests();\n> \tCheckpointStats.ckpt_sync_end_t = GetCurrentTimestamp();\n> \tTRACE_POSTGRESQL_BUFFER_CHECKPOINT_DONE();\n> \n> \t/* We deliberately delay 2PC checkpointing as long as possible */\n> \tCheckPointTwoPhase(checkPointRedo);\n> }\n\nOf these calls, CheckPointCLOG would be natural as the rmgr_callback of \nthe clog rmgr. 
Similarly for CheckPointMultiXact and maybe a few others.\n\n\nHowever, let's look at the pg_stat_statements patch (pgss_001.v1.patch):\n\nIt's now writing a new WAL record for every call to pgss_store(), \nturning even simple queries into WAL-logged operations. That's a \nnon-starter. It will also not work on a standby. This needs to be \nredesigned so that the data is updated in memory, and written to disk \nand/or WAL-logged only periodically. Perhaps at checkpoints, but you \ncould do it more frequently too.\n\nI'm not convinced that the stats should be WAL-logged. Do you want them \nto be replicated and included in backups? Maybe, but maybe not. It's \ncertainly a change to how it currently works.\n\nIf we don't WAL-log the stats, we don't really need a custom RMGR for \nit. We just need a checkpoint hook to flush the stats to disk, but we \ndon't need a registered RMGR ID for it.\n\nSo, I got a feeling that adding this to the rmgr interface is not quite \nright. The rmgr callbacks are for things that run when WAL is \n*replayed*, while checkpoints are related to how WAL is generated. Let's \ndesign this as an independent hook, separate from rmgrs.\n\n\nAnother data point: In Neon, we actually had to add a little code to \ncheckpoints, to WAL-log some exta data. That was a quick hack and might \nnot be the right design in the first place, but these hooks would not \nhave been useful for our purposes. We wanted to write a new WAL record \nat shutdown, and in CheckPointGuts(), it's already too late for that. It \nneeds to be done earlier, before starting to the shutdown checkpoint. \nSimilar to LogStandbySnapshot(), except that LogStandbySnapshot() is not \ncalled at shutdown like we wanted to. 
For a table AM, the point of a \ncheckpoint hook is probably to fsync() data that is managed outside of \nthe normal buffer manager and CheckPointGuts() is the right place for \nthat, but other extensions might want to hook into checkpoints for other \nreasons.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 23 Jul 2024 16:21:53 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
},
{
"msg_contents": "On Tue, 2024-07-23 at 16:21 +0300, Heikki Linnakangas wrote:\n> So, I got a feeling that adding this to the rmgr interface is not\n> quite \n> right. The rmgr callbacks are for things that run when WAL is \n> *replayed*, while checkpoints are related to how WAL is generated.\n> Let's \n> design this as an independent hook, separate from rmgrs.\n\nThat's a good way to look at it, agreed.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sun, 04 Aug 2024 08:36:28 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Comments on Custom RMGRs"
}
] 
[
{
"msg_contents": "With beta1 planned for next week, we're running out of time for $SUBJECT.\nI propose to do that tomorrow, or possibly Friday if anyone needs a\nlittle more time to get bugfix patches in. Comments?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 May 2022 15:36:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "It's about time to run pgindent, renumber_oids, etc"
}
] 
[
{
"msg_contents": "Hello,\n\nI've noticed that \n\njsonb_hash_extended(jsonb_v,0) = jsonb_hash_extended(jsonb_build_array(jsonb_v),0)\n\nfor any jsonb value jsonb_v. \n\nAFAICT it happens because when iterating over a jsonb the hash function makes no distinction between raw scalars and arrays (it doesn't inspect v.val.array.rawScalar)\nhttps://github.com/postgres/postgres/blob/27b77ecf9f4d5be211900eda54d8155ada50d696/src/backend/utils/adt/jsonb_op.c#L326\n\nIs this an intended behaviour or a bug?\n\nCheers,\nValeriy\n\n\n\n\n\n\n",
"msg_date": "Thu, 12 May 2022 13:02:12 +0200",
"msg_from": "Valeriy Meleshkin <valeriy@meleshk.in>",
"msg_from_op": true,
"msg_subject": "Reproducible coliisions in jsonb_hash"
},
{
"msg_contents": "\nOn 2022-05-12 Th 07:02, Valeriy Meleshkin wrote:\n> Hello,\n>\n> I've noticed that \n>\n> jsonb_hash_extended(jsonb_v,0) = jsonb_hash_extended(jsonb_build_array(jsonb_v),0)\n>\n> for any jsonb value jsonb_v. \n>\n> AFAICT it happens because when iterating over a jsonb the hash function makes no distinction between raw scalars and arrays (it doesn't inspect v.val.array.rawScalar)\n> https://github.com/postgres/postgres/blob/27b77ecf9f4d5be211900eda54d8155ada50d696/src/backend/utils/adt/jsonb_op.c#L326\n>\n> Is this an intended behaviour or a bug?\n>\n\nIt does look rather like a bug, but I'm unclear about the implications\nof fixing it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 12 May 2022 09:51:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Reproducible coliisions in jsonb_hash"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-05-12 Th 07:02, Valeriy Meleshkin wrote:\n>> AFAICT it happens because when iterating over a jsonb the hash function makes no distinction between raw scalars and arrays (it doesn't inspect v.val.array.rawScalar)\n\n> It does look rather like a bug, but I'm unclear about the implications\n> of fixing it.\n\nChanging this hash algorithm would break existing hash indexes on jsonb\ncolumns. Maybe there aren't any, but even if so I can't get very excited\nabout changing this. Hash algorithms always have collisions, and we have\nnever made any promise that ours are cryptographically strong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 May 2022 09:57:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reproducible coliisions in jsonb_hash"
},
{
"msg_contents": "On Thu, May 12, 2022 at 9:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-05-12 Th 07:02, Valeriy Meleshkin wrote:\n> >> AFAICT it happens because when iterating over a jsonb the hash function makes no distinction between raw scalars and arrays (it doesn't inspect v.val.array.rawScalar)\n>\n> > It does look rather like a bug, but I'm unclear about the implications\n> > of fixing it.\n>\n> Changing this hash algorithm would break existing hash indexes on jsonb\n> columns. Maybe there aren't any, but even if so I can't get very excited\n> about changing this. Hash algorithms always have collisions, and we have\n> never made any promise that ours are cryptographically strong.\n\nI might be missing something, but I don't know why \"cryptographically\nstrong\" is the relevant concept here. It seems like the question is\nhow likely it is that this would lead to queries having crappy\nperformance. In general we want hash functions to deliver different\nvalues for different inputs so that we give ourselves the best\npossible chance of spreading keys evenly across buckets. If for\nexample we hashed every possible JSON object to the same constant\nvalue, everything we tried to do with this hash function would suck,\nand the problem would be so severe that I think we'd have to fix it,\neven though it would mean a compatibility break. Or even if we hashed\nevery JSON integer to the same value, that would be horrible, because\nit's quite likely that you could have a column full of json objects\nwhere ignoring the difference between one integer and another results\nin a ton of duplicate hash values.\n\nHere, that doesn't seem too likely. You could have a column that\ncontains 'tom' and ['tom'] and [['tom']] and [[['tom']]] and so forth\nand they all get mapped onto the same bucket and you're sad. But\nprobably not. 
So I'd judge that while it was probably a mistake to\nmake the hash function work this way, it's not likely to cause serious\nproblems, and therefore we ought to maybe leave it alone for now, but\nadd a comment so that if we ever break backward-compatibility for any\nother reason, we remember to fix this too.\n\nIOW, I think I mostly agree with your conclusion, but perhaps not\nentirely with the reasoning.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 May 2022 10:25:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reproducible coliisions in jsonb_hash"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Here, that doesn't seem too likely. You could have a column that\n> contains 'tom' and ['tom'] and [['tom']] and [[['tom']]] and so forth\n> and they all get mapped onto the same bucket and you're sad. But\n> probably not.\n\nYeah, that might be a more useful way to think about it: is this likely\nto cause performance-critical collisions in practice? I agree that\nthat doesn't seem like a very likely situation, even given that you\nmight be using json for erratically-structured data.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 May 2022 10:55:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reproducible coliisions in jsonb_hash"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Here, that doesn't seem too likely. You could have a column that\n> > contains 'tom' and ['tom'] and [['tom']] and [[['tom']]] and so forth\n> > and they all get mapped onto the same bucket and you're sad. But\n> > probably not.\n> \n> Yeah, that might be a more useful way to think about it: is this likely\n> to cause performance-critical collisions in practice? I agree that\n> that doesn't seem like a very likely situation, even given that you\n> might be using json for erratically-structured data.\n\nParticularly for something like jsonb (but maybe other things?) having a\nhash function that could be user-defined or at least have some options\nseems like it would be quite nice (similar to compression...). If we\nwere to go in the direction of changing this, I'd suggest that we try to\nmake it something where the existing function could still be used while\nalso allowing a new one to be used. More flexibility would be even\nbetter, of course (column-specific hash functions comes to mind...).\n\nAgreed with the general conclusion here also, just wanted to share some\nthoughts on possible future directions to go in.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 15 May 2022 12:03:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Reproducible coliisions in jsonb_hash"
}
] 
[
{
"msg_contents": "Hi,\n\nI was experimenting with specifying symbol visiblity for functions explicitly,\ni.e. adding PGDLLIMPORT markers for them, with the goal of getting rid of\nsrc/tools/msvc/gendef.pl (and similar AIX stuff). While doing that I compared\nthe set of exported symbols before / after, leading me to find a few\npre-existing \"issues\".\n\nI think the attached patches are all a good idea and trivial enought that I\nthink we should apply them now.\n\nThe changes are sufficiently obvious and/or explained in the commit messages.\n\nComments?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 12 May 2022 09:45:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Declaration fixes"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I was experimenting with specifying symbol visiblity for functions explicitly,\n> i.e. adding PGDLLIMPORT markers for them, with the goal of getting rid of\n> src/tools/msvc/gendef.pl (and similar AIX stuff). While doing that I compared\n> the set of exported symbols before / after, leading me to find a few\n> pre-existing \"issues\".\n\n> I think the attached patches are all a good idea and trivial enought that I\n> think we should apply them now.\n\n+1 to all of that. Would the changes you're working on result in getting\nwarnings for these sorts of oversights on mainstream compilers?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 May 2022 13:18:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Declaration fixes"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-12 13:18:05 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I was experimenting with specifying symbol visiblity for functions explicitly,\n> > i.e. adding PGDLLIMPORT markers for them, with the goal of getting rid of\n> > src/tools/msvc/gendef.pl (and similar AIX stuff). While doing that I compared\n> > the set of exported symbols before / after, leading me to find a few\n> > pre-existing \"issues\".\n> \n> > I think the attached patches are all a good idea and trivial enought that I\n> > think we should apply them now.\n> \n> +1 to all of that.\n\nCool.\n\n\n> Would the changes you're working on result in getting\n> warnings for these sorts of oversights on mainstream compilers?\n\nI assume with \"mainstream compiler\" you basically mean gcc and clang? If so,\nsome oversights would be hard errors, some warnings, some runtime (i.e. symbol\nnot found errors when loading extension library) but some others unfortunately\nwould continue to only be visible in msvc :(.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 11:38:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Declaration fixes"
},
{
"msg_contents": "On 2022-05-12 11:38:39 -0700, Andres Freund wrote:\n> On 2022-05-12 13:18:05 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I think the attached patches are all a good idea and trivial enought that I\n> > > think we should apply them now.\n> > \n> > +1 to all of that.\n> \n> Cool.\n\nPushed them.\n\n\n",
"msg_date": "Thu, 12 May 2022 12:44:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Declaration fixes"
}
] 
[
{
"msg_contents": "I just completed the v15 pre-beta pgindent run. It went reasonably\nsmoothly, but I had to hack up typedefs.list a little bit compared\nto the version downloaded from the buildfarm.\n\n* The buildfarm's list is missing\n pg_md5_ctx\n pg_sha1_ctx\n pg_sha224_ctx\n pg_sha256_ctx\n pg_sha384_ctx\n pg_sha512_ctx\nwhich are certainly used, but only in some src/common files\nthat are built only in non-OpenSSL builds. So evidently,\nevery buildfarm member that's contributing to the typedefs list\nbuilds with OpenSSL. That wouldn't surprise me, except that\nmy own animal sifaka should be filling that gap. Looking at\nits latest attempt[1], it seems to be generating an empty list,\nwhich I guess means that our recipe for extracting typedefs\ndoesn't work on macOS/arm64. I shall investigate.\n\n* The buildfarm's list includes \"value_type\", which is surely\nnot typedef'd anywhere in our code, and that is messing up\nsome formatting involving JsonIsPredicate.value_type.\nI suppose that is coming from some system header where it is\na typedef on some machines (komodoensis and lorikeet report it,\nwhich seems like an odd pairing). I think the best thing to\ndo here is rename that field while we still can, perhaps to\nitem_type. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sifaka&dt=2022-05-11%2020%3A21%3A15\n\n\n",
"msg_date": "Thu, 12 May 2022 16:00:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "typedefs.list glitches"
},
{
"msg_contents": "I wrote:\n> every buildfarm member that's contributing to the typedefs list\n> builds with OpenSSL. That wouldn't surprise me, except that\n> my own animal sifaka should be filling that gap. Looking at\n> its latest attempt[1], it seems to be generating an empty list,\n> which I guess means that our recipe for extracting typedefs\n> doesn't work on macOS/arm64. I shall investigate.\n\nFound it. Current macOS produces\n\n$ objdump -W\n/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/objdump: error: unknown argument '-W'\n\nwhere last year's vintage produced\n\n$ objdump -W\nobjdump: Unknown command line argument '-W'. Try: '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/objdump --help'\nobjdump: Did you mean '-C'?\n\nThis confuses run_build.pl into taking the \"Linux and sometimes windows\"\ncode path instead of the $using_osx one. I think simplest fix is to\nmove the $using_osx branch ahead of the heuristic ones, as attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 12 May 2022 17:21:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: typedefs.list glitches"
},
{
"msg_contents": "On Thu May 12, 2022 at 4:21 PM CDT, Tom Lane wrote:\n> I wrote:\n> > every buildfarm member that's contributing to the typedefs list\n> > builds with OpenSSL. That wouldn't surprise me, except that\n> > my own animal sifaka should be filling that gap. Looking at\n> > its latest attempt[1], it seems to be generating an empty list,\n> > which I guess means that our recipe for extracting typedefs\n> > doesn't work on macOS/arm64. I shall investigate.\n>\n> Found it. Current macOS produces\n>\n> $ objdump -W\n> /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/objdump: error: unknown argument '-W'\n>\n> where last year's vintage produced\n>\n> $ objdump -W\n> objdump: Unknown command line argument '-W'. Try: '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/objdump --help'\n> objdump: Did you mean '-C'?\n>\n> This confuses run_build.pl into taking the \"Linux and sometimes windows\"\n> code path instead of the $using_osx one. I think simplest fix is to\n> move the $using_osx branch ahead of the heuristic ones, as attached.\n\nHey Tom,\n\nWas this patch ever committed?\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Tue, 12 Dec 2023 09:37:29 -0600",
"msg_from": "\"Tristan Partin\" <tristan@neon.tech>",
"msg_from_op": false,
"msg_subject": "Re: typedefs.list glitches"
},
{
"msg_contents": "\"Tristan Partin\" <tristan@neon.tech> writes:\n> Was this patch ever committed?\n\nYes, though not till\n\ncommit dcca861554e90d6395c3c153317b0b0e3841f103\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Sun Jan 15 07:32:50 2023 -0500\n\n Improve typedef logic for MacOS\n\nsifaka is currently generating typedefs, and I'm pretty certain\nit's using unpatched REL_17 BF code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 12 Dec 2023 10:48:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: typedefs.list glitches"
}
] 
[
{
"msg_contents": "Andres drew my attention to this [1] build farm failure.\n\nLooks like a test I wrote resetting pg_stat_replication_slots is\nfailing:\n\n # Failed test 'Check that reset timestamp is later after resetting\nstats for slot 'test_slot' again.'\n # at t/006_logical_decoding.pl line 261.\n # got: 'f'\n # expected: 't'\n # Looks like you failed 1 test of 20.\n [19:59:58] t/006_logical_decoding.pl ............\n\nThis is the test code itself:\n\n is( $node_primary->safe_psql(\n 'postgres',\n qq(SELECT stats_reset > '$reset1'::timestamptz FROM\npg_stat_replication_slots WHERE slot_name = '$stats_test_slot1')\n ),\n qq(t),\n qq(Check that reset timestamp is later after resetting stats for\nslot '$stats_test_slot1' again.)\n );\n\nThis is the relevant SQL statement:\n\n SELECT stats_reset > '$reset1'::timestamptz FROM\npg_stat_replication_slots WHERE slot_name = '$stats_test_slot1'\n\nWhen this statement is executed, reset1 is as shown:\n\n 2022-05-12 19:59:58.342 CEST [88829:3] 006_logical_decoding.pl LOG:\nstatement: SELECT stats_reset > '2022-05-12\n19:59:58.402808+02'::timestamptz FROM pg_stat_replication_slots WHERE\nslot_name = 'test_slot'\n\nNote the timestamp of this execution. 
The stats reset occurred in the\npast, and as such *must* have come before '2022-05-12\n19:59:58.402808+02'::timestamptz.\n\nThe starred line is where `reset1` is fetched:\n\n 2022-05-12 19:59:58.305 CEST [86784:2] [unknown] LOG: connection\nauthorized: user=pgbf database=postgres\napplication_name=006_logical_decoding.pl\n* 2022-05-12 19:59:58.306 CEST [86784:3] 006_logical_decoding.pl LOG:\nstatement: SELECT stats_reset FROM pg_stat_replication_slots WHERE\nslot_name = 'test_slot'\n 2022-05-12 19:59:58.308 CEST [86784:4] 006_logical_decoding.pl LOG:\ndisconnection: session time: 0:00:00.003 user=pgbf database=postgres\nhost=[local]\n 2022-05-12 19:59:58.315 CEST [18214:1] [unknown] LOG: connection\nreceived: host=[local]\n 2022-05-12 19:59:58.316 CEST [18214:2] [unknown] LOG: connection\nauthorized: user=pgbf database=postgres\napplication_name=006_logical_decoding.pl\n 2022-05-12 19:59:58.317 CEST [18214:3] 006_logical_decoding.pl LOG:\nstatement: SELECT pg_stat_reset_replication_slot(NULL)\n 2022-05-12 19:59:58.322 CEST [18214:4] 006_logical_decoding.pl LOG:\ndisconnection: session time: 0:00:00.007 user=pgbf database=postgres\nhost=[local]\n 2022-05-12 19:59:58.329 CEST [45967:1] [unknown] LOG: connection\nreceived: host=[local]\n 2022-05-12 19:59:58.330 CEST [45967:2] [unknown] LOG: connection\nauthorized: user=pgbf database=postgres\napplication_name=006_logical_decoding.pl\n 2022-05-12 19:59:58.331 CEST [45967:3] 006_logical_decoding.pl LOG:\nstatement: SELECT stats_reset IS NOT NULL FROM\npg_stat_replication_slots WHERE slot_name = 'logical_slot'\n 2022-05-12 19:59:58.333 CEST [45967:4] 006_logical_decoding.pl LOG:\ndisconnection: session time: 0:00:00.003 user=pgbf database=postgres\nhost=[local]\n 2022-05-12 19:59:58.341 CEST [88829:1] [unknown] LOG: connection\nreceived: host=[local]\n 2022-05-12 19:59:58.341 CEST [88829:2] [unknown] LOG: connection\nauthorized: user=pgbf database=postgres\napplication_name=006_logical_decoding.pl\n 2022-05-12 
19:59:58.342 CEST [88829:3] 006_logical_decoding.pl LOG:\nstatement: SELECT stats_reset > '2022-05-12\n19:59:58.402808+02'::timestamptz FROM pg_stat_replication_slots WHERE\nslot_name = 'test_slot'\n 2022-05-12 19:59:58.344 CEST [88829:4] 006_logical_decoding.pl LOG:\ndisconnection: session time: 0:00:00.003 user=pgbf database=postgres\nhost=[local]\n 2022-05-12 19:59:58.350 CEST [50055:4] LOG: received fast shutdown request\n 2022-05-12 19:59:58.350 CEST [50055:5] LOG: aborting any active transactions\n 2022-05-12 19:59:58.352 CEST [50055:6] LOG: background worker\n\"logical replication launcher\" (PID 89924) exited with exit code 1\n 2022-05-12 19:59:58.352 CEST [56213:1] LOG: shutting down\n 2022-05-12 19:59:58.352 CEST [56213:2] LOG: checkpoint starting:\nshutdown immediate\n 2022-05-12 19:59:58.353 CEST [56213:3] LOG: checkpoint complete:\nwrote 4 buffers (3.1%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.001 s, sync=0.001 s, total=0.001 s; sync files=0,\nlongest=0.000 s, average=0.000 s; distance=0 kB, estimate=0 kB\n 2022-05-12 19:59:58.355 CEST [50055:7] LOG: database system is shut down\n\nstats_reset was set in the past, so `reset1` shouldn't be after\n'2022-05-12 19:59:58.306 CEST'. It looks like the timestamp appearing in\nthe test query would correspond to a time after the database is shut\ndown.\n\n- melanie\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=morepork&dt=2022-05-12%2017%3A50%3A47\n\n\n",
"msg_date": "Thu, 12 May 2022 21:42:43 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "On Fri, May 13, 2022 at 11:43 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Andres drew my attention to this [1] build farm failure.\n>\n> Looks like a test I wrote resetting pg_stat_replication_slots is\n> failing:\n>\n> # Failed test 'Check that reset timestamp is later after resetting\n> stats for slot 'test_slot' again.'\n> # at t/006_logical_decoding.pl line 261.\n> # got: 'f'\n> # expected: 't'\n> # Looks like you failed 1 test of 20.\n> [19:59:58] t/006_logical_decoding.pl ............\n>\n> This is the test code itself:\n>\n> is( $node_primary->safe_psql(\n> 'postgres',\n> qq(SELECT stats_reset > '$reset1'::timestamptz FROM\n> pg_stat_replication_slots WHERE slot_name = '$stats_test_slot1')\n> ),\n> qq(t),\n> qq(Check that reset timestamp is later after resetting stats for\n> slot '$stats_test_slot1' again.)\n> );\n>\n> This is the relevant SQL statement:\n>\n> SELECT stats_reset > '$reset1'::timestamptz FROM\n> pg_stat_replication_slots WHERE slot_name = '$stats_test_slot1'\n>\n> When this statement is executed, reset1 is as shown:\n>\n> 2022-05-12 19:59:58.342 CEST [88829:3] 006_logical_decoding.pl LOG:\n> statement: SELECT stats_reset > '2022-05-12\n> 19:59:58.402808+02'::timestamptz FROM pg_stat_replication_slots WHERE\n> slot_name = 'test_slot'\n>\n\nI don't know if this is related, but I noticed that the log timestamp\n(19:59:58.342) is reporting the $reset1 value (19:59:58.402808).\n\nI did not understand how a timestamp saved from the past could be\nahead of the timestamp of the log.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 13 May 2022 12:01:09 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "On Fri, May 13, 2022 at 12:01:09PM +1000, Peter Smith wrote:\n> I don't know if this is related, but I noticed that the log timestamp\n> (19:59:58.342) is reporting the $reset1 value (19:59:58.402808).\n> \n> I did not understand how a timestamp saved from the past could be\n> ahead of the timestamp of the log.\n\nmorepork is not completely in the white in this area. See the\nfollowing thread:\nhttps://www.postgresql.org/message-id/X+r2VUFkZdKcF29A@paquier.xyz\n--\nMichael",
"msg_date": "Fri, 13 May 2022 11:13:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-12 21:42:43 -0400, Melanie Plageman wrote:\n> Andres drew my attention to this [1] build farm failure.\n> \n> Looks like a test I wrote resetting pg_stat_replication_slots is\n> failing:\n> \n> # Failed test 'Check that reset timestamp is later after resetting\n> stats for slot 'test_slot' again.'\n> # at t/006_logical_decoding.pl line 261.\n> # got: 'f'\n> # expected: 't'\n> # Looks like you failed 1 test of 20.\n> [19:59:58] t/006_logical_decoding.pl ............\n> \n> This is the test code itself:\n> \n> is( $node_primary->safe_psql(\n> 'postgres',\n> qq(SELECT stats_reset > '$reset1'::timestamptz FROM\n> pg_stat_replication_slots WHERE slot_name = '$stats_test_slot1')\n> ),\n> qq(t),\n> qq(Check that reset timestamp is later after resetting stats for\n> slot '$stats_test_slot1' again.)\n> );\n> \n> This is the relevant SQL statement:\n> \n> SELECT stats_reset > '$reset1'::timestamptz FROM\n> pg_stat_replication_slots WHERE slot_name = '$stats_test_slot1'\n> \n> When this statement is executed, reset1 is as shown:\n> \n> 2022-05-12 19:59:58.342 CEST [88829:3] 006_logical_decoding.pl LOG:\n> statement: SELECT stats_reset > '2022-05-12\n> 19:59:58.402808+02'::timestamptz FROM pg_stat_replication_slots WHERE\n> slot_name = 'test_slot'\n> \n> Note the timestamp of this execution. The stats reset occurred in the\n> past, and as such *must* have come before '2022-05-12\n> 19:59:58.402808+02'::timestamptz.\n\nThe timestamp is computed during:\n\n> 2022-05-12 19:59:58.317 CEST [18214:3] 006_logical_decoding.pl LOG:\n> statement: SELECT pg_stat_reset_replication_slot(NULL)\n\nOne interesting tidbit is that the log timestamps are computed differently\n(with elog.c:get_formatted_log_time()) than the reset timestamp (with\nGetCurrentTimestamp()). Both use gettimeofday() internally.\n\nI wonder if there's a chance that somehow openbsd ends up with more usecs than\n\"fit\" in a second in the result of gettimeofday()? 
The elog.c case would\ntruncate everything above a second away afaics:\n\t/* 'paste' milliseconds into place... */\n\tsprintf(msbuf, \".%03d\", (int) (saved_timeval.tv_usec / 1000));\n\tmemcpy(formatted_log_time + 19, msbuf, 4);\n\nwhereas GetCurrentTimestamp() would add them to the timestamp:\n\tresult = (TimestampTz) tp.tv_sec -\n\t\t((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n\tresult = (result * USECS_PER_SEC) + tp.tv_usec;\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 19:14:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
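The truncate-versus-add point above can be illustrated with a small standalone sketch (hypothetical values; this only models the two quoted code paths, it is not the actual elog.c or GetCurrentTimestamp() code). If gettimeofday() ever handed back tv_usec >= 1000000, the seconds shown in the log line would stay at tv_sec, while the timestamp arithmetic would carry the excess into the next second:

```c
#include <assert.h>
#include <stdint.h>

/* Seconds a log line is stamped with: elog.c derives them from tv_sec
 * alone and pastes tv_usec-derived milliseconds in afterwards, so any
 * excess microseconds are effectively discarded. */
int64_t log_seconds(int64_t tv_sec, int64_t tv_usec)
{
    (void) tv_usec;             /* never carries into the seconds field */
    return tv_sec;
}

/* Microsecond timestamp built the GetCurrentTimestamp() way (the
 * Postgres-epoch offset is omitted here): excess microseconds are
 * added, rolling over into the next second. */
int64_t timestamp_usecs(int64_t tv_sec, int64_t tv_usec)
{
    return tv_sec * 1000000 + tv_usec;
}
```

With a hypothetical reading of tv_sec = 58, tv_usec = 1402808, the log line is still stamped in second 58 while the computed timestamp falls at 59.402808 — ahead of every log line written during that second, which is the shape of the discrepancy reported here.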
{
"msg_contents": "Hi,\n\nOn 2022-05-12 19:14:13 -0700, Andres Freund wrote:\n> On 2022-05-12 21:42:43 -0400, Melanie Plageman wrote:\n> > Andres drew my attention to this [1] build farm failure.\n\nI just saw that there's another recent timestamp related failure:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gombessa&dt=2022-05-13%2002%3A58%3A52\n\nIt's pretty odd that we have two timestamp related failures in stats code that\nhasn't changed in >30 days, both only on openbsd within the last ~10h. There's\nnot been a similar isolationtest failure before.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 May 2022 21:02:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "On 2022-05-13 04:14, Andres Freund wrote:\n\n> One interesting tidbit is that the log timestamps are computed differently\n> (with elog.c:get_formatted_log_time()) than the reset timestamp (with\n> GetCurrentTimestamp()). Both use gettimeofday() internally.\n> \n> I wonder if there's a chance that somehow openbsd ends up with more usecs than\n> \"fit\" in a second in the result of gettimeofday()? The elog.c case would\n> truncate everything above a second away afaics:\n> \t/* 'paste' milliseconds into place... */\n> \tsprintf(msbuf, \".%03d\", (int) (saved_timeval.tv_usec / 1000));\n> \tmemcpy(formatted_log_time + 19, msbuf, 4);\n> \n> whereas GetCurrentTimestamp() would add them to the timestamp:\n> \tresult = (TimestampTz) tp.tv_sec -\n> \t\t((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * SECS_PER_DAY);\n> \tresult = (result * USECS_PER_SEC) + tp.tv_usec;\n> \n\nWell, I don't know if you remember but there was a thread a while back \nand a test program (monotime.c) to test the clock if it could go \nbackwards and openbsd showed the following result when running the \nattached testprogram:\n\nopenbsd 5.9:\n\n$ ./monotime\n1021006 Starting\n1017367 Starting\n1003415 Starting\n1007598 Starting\n1021006 Stopped\n1007598 Stopped\n1017367 Stopped\n1003415 Stopped\n\nopenbsd 6.9:\n\n$ ./monotime\n410310 Starting\n547727 Starting\n410310 Back 262032.372314102 => 262032.242045208\n410310 Stopped\n465180 Starting\n255646 Starting\n547727 Stopped\n465180 Stopped\n255646 Stopped\n\ncould that have something to do with it?\n\n/Mikael",
"msg_date": "Fri, 13 May 2022 09:00:20 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-13 09:00:20 +0200, Mikael Kjellstr�m wrote:\n> Well, I don't know if you remember but there was a thread a while back and a\n> test program (monotime.c) to test the clock if it could go backwards and\n> openbsd showed the following result when running the attached testprogram:\n\nNope, didn't remember...\n\n\n> $ ./monotime\n> 410310 Starting\n> 547727 Starting\n> 410310 Back 262032.372314102 => 262032.242045208\n> 410310 Stopped\n> 465180 Starting\n> 255646 Starting\n> 547727 Stopped\n> 465180 Stopped\n> 255646 Stopped\n> \n> could that have something to do with it?\n\nYes!\n\n\n> printf(\"%d Back %lld.%09lu => %lld.%09lu\\n\",\n> (int)getthrid(), ts0.tv_sec, ts0.tv_nsec, ts1.tv_sec,\n> ts1.tv_nsec);\n> break;\n\nI wonder whether the %09lu potentially is truncating ts1.tv_nsec.\n\n\nI can't reproduce the problem trivially in an openbsd VM I had around. But\nit's 7.1, so maybe that's the reason?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 May 2022 10:22:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-13 10:22:32 -0700, Andres Freund wrote:\n> On 2022-05-13 09:00:20 +0200, Mikael Kjellstr�m wrote:\n> > Well, I don't know if you remember but there was a thread a while back and a\n> > test program (monotime.c) to test the clock if it could go backwards and\n> > openbsd showed the following result when running the attached testprogram:\n> \n> Nope, didn't remember...\n> \n> \n> > $ ./monotime\n> > 410310 Starting\n> > 547727 Starting\n> > 410310 Back 262032.372314102 => 262032.242045208\n> > 410310 Stopped\n> > 465180 Starting\n> > 255646 Starting\n> > 547727 Stopped\n> > 465180 Stopped\n> > 255646 Stopped\n> > \n> > could that have something to do with it?\n> \n> Yes!\n> \n> \n> > printf(\"%d Back %lld.%09lu => %lld.%09lu\\n\",\n> > (int)getthrid(), ts0.tv_sec, ts0.tv_nsec, ts1.tv_sec,\n> > ts1.tv_nsec);\n> > break;\n> \n> I wonder whether the %09lu potentially is truncating ts1.tv_nsec.\n> \n> \n> I can't reproduce the problem trivially in an openbsd VM I had around. But\n> it's 7.1, so maybe that's the reason?\n\nWhat does\nsysctl kern.timecounter\nreturn? Does the problem continue if you switch kern.timecounter.hardware to\nsomething else?\n\nIn https://postgr.es/m/32aaeb66-71b2-4af0-91ef-1a992ac4d58b%40mksoft.nu you\nsaid it was using acpitimer0 and that it's a vmware VM. It might also be a\nvmware bug, not an openbsd one...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 May 2022 13:09:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
},
{
"msg_contents": "\n\nOn 2022-05-13 22:09, Andres Freund wrote:\n\n> What does\n> sysctl kern.timecounter\n> return? Does the problem continue if you switch kern.timecounter.hardware to\n> something else?\n\n\nopenbsd 5.9:\n\n$ sysctl kern.timecounter\nkern.timecounter.tick=1\nkern.timecounter.timestepwarnings=0\nkern.timecounter.hardware=acpihpet0\nkern.timecounter.choice=i8254(0) acpihpet0(1000) acpitimer0(1000) \ndummy(-1000000)\n\n$ ./monotime \n\n1024736 Starting\n1013674 Starting\n1028468 Starting\n1014641 Starting\n1013674 Stopped\n1024736 Stopped\n1014641 Stopped\n1028468 Stopped\n\nno problem\n\n\nopenbsd 6.9:\n\n$ sysctl kern.timecounter\nkern.timecounter.tick=1\nkern.timecounter.timestepwarnings=0\nkern.timecounter.hardware=tsc\nkern.timecounter.choice=i8254(0) acpihpet0(1000) tsc(2000) acpitimer0(1000)\n\nHm, here it's using the tsc timer. So that is a difference from 5.9\n\n$ ./monotime\n133998 Starting\n408137 Starting\n578042 Starting\n326139 Starting\n133998 Back 310670.000931851668 => 310670.000801582864\n133998 Stopped\n326139 Stopped\n578042 Stopped\n408137 Stopped\n\nit's only in 6.9 the problem with the timer going backwards shows up.\n\nIf I switch timer to acpihpet0 this is the result:\n\n$ ./monotime\n101622 Starting\n480782 Starting\n219318 Starting\n316650 Starting\n101622 Stopped\n480782 Stopped\n219318 Stopped\n316650 Stopped\n\nso that seems to solve the problem with the timer going backwards.\n\n\n> In https://postgr.es/m/32aaeb66-71b2-4af0-91ef-1a992ac4d58b%40mksoft.nu you\n> said it was using acpitimer0 and that it's a vmware VM. It might also be a\n> vmware bug, not an openbsd one...\n\nMight be a bug in vmware but it's running the latest patchlevel for 6.7 \nand no other VM have this problem so seems to only happen in openbsd 6.9 \nfor some reason. Maybe that is the only VM that is using tsc as a timer \nsource though?\n\n/Mikael\n\n\n",
"msg_date": "Fri, 13 May 2022 22:35:22 +0200",
"msg_from": "=?UTF-8?Q?Mikael_Kjellstr=c3=b6m?= <mikael.kjellstrom@mksoft.nu>",
"msg_from_op": false,
"msg_subject": "Re: recovery test failure on morepork with timestamp mystery"
}
] |
[
{
"msg_contents": "Hi,\n\nPFA, attached patch to $SUBJECT.\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 May 2022 10:20:52 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Correct comment in ProcedureCreate() for pgstat_create_function()\n call."
},
{
"msg_contents": "Sorry, hit the send button too early :|\n\nAttached here !!\n\nOn Fri, May 13, 2022 at 10:20 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Hi,\n>\n> PFA, attached patch to $SUBJECT.\n>\n> --\n> Regards,\n> Amul Sul\n> EDB: http://www.enterprisedb.com",
"msg_date": "Fri, 13 May 2022 10:22:57 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Correct comment in ProcedureCreate() for pgstat_create_function()\n call."
},
{
"msg_contents": "On Fri, May 13, 2022 at 10:22:57AM +0530, Amul Sul wrote:\n> Sorry, hit the send button too early :|\n\n- /* ensure that stats are dropped if transaction commits */\n+ /* ensure that stats are dropped if transaction aborts */\n if (!is_update)\n pgstat_create_function(retval);\n\nAs of what pgstat_create_function() does to create the stats of a new\nfunction in a transactional way, it looks like you are right. Will\nfix if there are no objections.\n--\nMichael",
"msg_date": "Fri, 13 May 2022 16:09:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Correct comment in ProcedureCreate() for\n pgstat_create_function() call."
},
{
"msg_contents": "On Fri, May 13, 2022 at 04:09:00PM +0900, Michael Paquier wrote:\n> As of what pgstat_create_function() does to create the stats of a new\n> function in a transactional way, it looks like you are right. Will\n> fix if there are no objections.\n\nAnd done with fcab82a. Thanks, Amul.\n--\nMichael",
"msg_date": "Sat, 14 May 2022 08:29:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Correct comment in ProcedureCreate() for\n pgstat_create_function() call."
}
] |
[
{
"msg_contents": "We didn't have any use of TransactionId as members of List, until\nRelationSyncEntry->streamed_txns was introduced (464824323e57, pg14).\nIt's currently implemented as a list of int. This is not wrong at\npresent, but it may soon be, and I'm sure it rubs some people the wrong\nway.\n\nBut is the rubbing way wrong enough to add support for TransactionId in\npg_list.h, including, say, T_XidList?\n\nThe minimal patch (attached) is quite small AFAICS.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 13 May 2022 10:30:12 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "list of TransactionIds"
},
{
"msg_contents": "On Sat, May 14, 2022 at 1:57 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> We didn't have any use of TransactionId as members of List, until\n> RelationSyncEntry->streamed_txns was introduced (464824323e57, pg14).\n> It's currently implemented as a list of int. This is not wrong at\n> present, but it may soon be, and I'm sure it rubs some people the wrong\n> way.\n>\n> But is the rubbing way wrong enough to add support for TransactionId in\n> pg_list.h, including, say, T_XidList?\n>\n\n+1. I don't know if we have a need for this at other places but I feel\nit is a good idea to make its current use better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 14 May 2022 15:15:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of TransactionIds"
},
{
"msg_contents": "On 2022-May-14, Amit Kapila wrote:\n\n> On Sat, May 14, 2022 at 1:57 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > We didn't have any use of TransactionId as members of List, until\n> > RelationSyncEntry->streamed_txns was introduced (464824323e57, pg14).\n> > It's currently implemented as a list of int. This is not wrong at\n> > present, but it may soon be, and I'm sure it rubs some people the wrong\n> > way.\n> >\n> > But is the rubbing way wrong enough to add support for TransactionId in\n> > pg_list.h, including, say, T_XidList?\n> \n> +1. I don't know if we have a need for this at other places but I feel\n> it is a good idea to make its current use better.\n\nI hesitate to add this the day just before beta. This is already in\npg14, so maybe it's not a big deal if pg15 remains the same for the time\nbeing. Or we can change it for beta2. Or we could just punt until\npg16. Any preferences?\n\n(Adding this to pg14 seems out of the question. It's probably okay\nABI-wise to add a new node tag at the end of the list, but I'm not sure\nit's warranted.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)\n\n\n",
"msg_date": "Sun, 15 May 2022 13:35:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: list of TransactionIds"
},
{
"msg_contents": "On Sun, May 15, 2022 at 5:05 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-May-14, Amit Kapila wrote:\n>\n> > On Sat, May 14, 2022 at 1:57 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > We didn't have any use of TransactionId as members of List, until\n> > > RelationSyncEntry->streamed_txns was introduced (464824323e57, pg14).\n> > > It's currently implemented as a list of int. This is not wrong at\n> > > present, but it may soon be, and I'm sure it rubs some people the wrong\n> > > way.\n> > >\n> > > But is the rubbing way wrong enough to add support for TransactionId in\n> > > pg_list.h, including, say, T_XidList?\n> >\n> > +1. I don't know if we have a need for this at other places but I feel\n> > it is a good idea to make its current use better.\n>\n> I hesitate to add this the day just before beta. This is already in\n> pg14, so maybe it's not a big deal if pg15 remains the same for the time\n> being. Or we can change it for beta2. Or we could just punt until\n> pg16. Any preferences?\n>\n\nI prefer to do this for pg16 unless we see some bug due to this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 16 May 2022 07:58:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: list of TransactionIds"
},
{
"msg_contents": "On Mon, May 16, 2022 at 07:58:37AM +0530, Amit Kapila wrote:\n> I prefer to do this for pg16 unless we see some bug due to this.\n\nAgreed. This does not seem worth taking any risk with after beta1,\nand v14 got released this way.\n--\nMichael",
"msg_date": "Mon, 16 May 2022 14:16:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: list of TransactionIds"
},
{
"msg_contents": "On 2022-May-16, Amit Kapila wrote:\n\n> On Sun, May 15, 2022 at 5:05 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I hesitate to add this the day just before beta. This is already in\n> > pg14, so maybe it's not a big deal if pg15 remains the same for the time\n> > being. Or we can change it for beta2. Or we could just punt until\n> > pg16. Any preferences?\n> \n> I prefer to do this for pg16 unless we see some bug due to this.\n\nPushed now, to master only.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 4 Jul 2022 15:27:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: list of TransactionIds"
},
{
"msg_contents": "On Monday, July 4, 2022 9:27 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> Pushed now, to master only.\r\n\r\nThanks for introducing these APIs!\r\n\r\nWhile trying to use the newly introduced list_member_xid(), I noticed that it\r\ninternally use lfirst_oid instead of lfirst_xid. It works ok for now. Just in\r\ncase we change xid to 64 bits in the future, I think we’d better use lfirst_xid\r\nhere.\r\n\r\nHere is a tiny patch to fix that.\r\n\r\nBest regards,\r\nHou Zhijie",
"msg_date": "Thu, 20 Oct 2022 07:34:31 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: list of TransactionIds"
},
{
"msg_contents": "Hello\n\nOn 2022-Oct-20, houzj.fnst@fujitsu.com wrote:\n\n> While trying to use the newly introduced list_member_xid(), I noticed that it\n> internally use lfirst_oid instead of lfirst_xid. It works ok for now. Just in\n> case we change xid to 64 bits in the future, I think we’d better use lfirst_xid\n> here.\n\nEgad.\n\n> Here is a tiny patch to fix that.\n\nPushed, thanks.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n",
"msg_date": "Thu, 20 Oct 2022 09:43:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: list of TransactionIds"
}
] |
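The hazard Hou spotted can be illustrated with a toy stand-in for the ListCell union (deliberately not the real pg_list.h definition, and PostgreSQL's TransactionId is currently 32 bits — the 64-bit typedef below is the hypothetical future case): reading a cell through an accessor of the wrong width compiles fine today but would silently truncate if the underlying type were ever widened.

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t Oid;
typedef uint64_t TransactionId;   /* pretend xids were widened to 64 bits */

/* Toy cell: each list element holds one of these members. */
typedef union ToyListCell
{
    void         *ptr_value;
    int           int_value;
    Oid           oid_value;
    TransactionId xid_value;
} ToyListCell;

#define lfirst_oid(c)  ((c)->oid_value)
#define lfirst_xid(c)  ((c)->xid_value)

/* Membership test over an array of cells, reading each one through the
 * correctly-typed accessor, as the fixed list_member_xid() does. */
int list_member_xid_toy(const ToyListCell *cells, int n, TransactionId datum)
{
    for (int i = 0; i < n; i++)
        if (lfirst_xid(&cells[i]) == datum)
            return 1;
    return 0;
}
```

With a 32-bit TransactionId both accessors read the same bytes, which is why the mistake was invisible; only a future widening of the type would expose it.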
[
{
"msg_contents": "\nHi, hackers\n\nI had an incident on my Postgres 14 that queries hung in wait event\nIPC / MessageQueueInternal, MessageQueueReceive. It likes [1],\nhowever, it doesn't have any discussions.\n\nThe process cannot be terminated by pg_terminate_backend(), although\nit returns true.\n\nHere is the call stack comes from pstack:\n\n485073: /opt/local/pgsql/14/bin/postgres\n fffffb7fef216f4a ioctl (d, d001, fffffb7fffdfa0e0)\n 00000000008b8ec2 WaitEventSetWait () + 112\n 00000000008b920f WaitLatch () + 6f\n 00000000008bf434 shm_mq_wait_internal () + 64\n 00000000008bff74 shm_mq_receive () + 2b4\n 000000000079fdc8 TupleQueueReaderNext () + 28\n 000000000077d8ca gather_merge_readnext () + 13a\n 000000000077db25 ExecGatherMerge () + 215\n 0000000000790675 ExecNextLoop () + 175\n 0000000000790675 ExecNextLoop () + 175\n 000000000076267d standard_ExecutorRun () + fd\n fffffb7fe3965fbd pgss_executorRun () + fd\n 00000000008df99b PortalRunSelect () + 1cb\n 00000000008e0dcf PortalRun () + 17f\n 00000000008ddacd PostgresMain () + 100d\n 0000000000857f62 ServerLoop () + cd2\n 0000000000858cee main () + 453\n 00000000005ab777 _start_crt () + 87\n 00000000005ab6d8 _start () + 18\n\n\nAny suggestions? Thanks in advance!\n\n[1] https://www.postgresql.org/message-id/flat/E9FA92C2921F31408041863B74EE4C2001A479E590%40CCPMAILDAG03.cantab.local\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 13 May 2022 18:16:23 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Fri, May 13, 2022 at 06:16:23PM +0800, Japin Li wrote:\n> I had an incident on my Postgres 14 that queries hung in wait event\n> IPC / MessageQueueInternal, MessageQueueReceive. It likes [1],\n> however, it doesn't have any discussions.\n\nIf the process is still running, or if the problem recurs, I suggest to create\na corefile with gcore, aka gdb generate-core-file. Then, we can look at the\nbacktrace at our leisure, even if the cluster needed to be restarted right\naway.\n\nWhat minor version of postgres is this, and what OS ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 13 May 2022 06:41:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "\nOn Fri, 13 May 2022 at 19:41, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, May 13, 2022 at 06:16:23PM +0800, Japin Li wrote:\n>> I had an incident on my Postgres 14 that queries hung in wait event\n>> IPC / MessageQueueInternal, MessageQueueReceive. It likes [1],\n>> however, it doesn't have any discussions.\n>\n> If the process is still running, or if the problem recurs, I suggest to create\n> a corefile with gcore, aka gdb generate-core-file. Then, we can look at the\n> backtrace at our leisure, even if the cluster needed to be restarted right\n> away.\n>\n\nThanks for your advice, I will try it later.\n\n> What minor version of postgres is this, and what OS ?\n\nPostgreSQL 14.2 and Solaris.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 13 May 2022 21:13:59 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Fri, May 13, 2022 at 6:16 AM Japin Li <japinli@hotmail.com> wrote:\n> The process cannot be terminated by pg_terminate_backend(), although\n> it returns true.\n\npg_terminate_backend() just sends SIGINT. What I'm wondering is what\nhappens when the stuck process receives SIGINT. It would be useful, I\nthink, to check the value of the global variable InterruptHoldoffCount\nin the stuck process by attaching to it with gdb. I would also try\nrunning \"strace -p $PID\" on the stuck process and then try terminating\nit again with pg_terminate_backend(). Either the system call in which\nit's currently stuck returns and then it makes the same system call\nagain and hangs again ... or the signal doesn't dislodge it from the\nsystem call in which it's stuck in the first place. It would be useful\nto know which of those two things is happening.\n\nOne thing I find a bit curious is that the top of the stack in your\ncase is ioctl(). And there are no calls to ioctl() anywhere in\nlatch.c, nor have there ever been. What operating system is this? We\nhave 4 different versions of WaitEventSetWaitBlock() that call\nepoll_wait(), kevent(), poll(), and WaitForMultipleObjects()\nrespectively. I wonder which of those we're using, and whether one of\nthose calls is showing up as ioctl() in the stacktrace, or whether\nthere's some other function being called in here that is somehow\nresulting in ioctl() getting called.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 13 May 2022 10:08:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Sat, May 14, 2022 at 2:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, May 13, 2022 at 6:16 AM Japin Li <japinli@hotmail.com> wrote:\n> > The process cannot be terminated by pg_terminate_backend(), although\n> > it returns true.\n\n> One thing I find a bit curious is that the top of the stack in your\n> case is ioctl(). And there are no calls to ioctl() anywhere in\n> latch.c, nor have there ever been. What operating system is this? We\n> have 4 different versions of WaitEventSetWaitBlock() that call\n> epoll_wait(), kevent(), poll(), and WaitForMultipleObjects()\n> respectively. I wonder which of those we're using, and whether one of\n> those calls is showing up as ioctl() in the stacktrace, or whether\n> there's some other function being called in here that is somehow\n> resulting in ioctl() getting called.\n\nI guess this is really illumos (née OpenSolaris), not Solaris, using\nour epoll build mode, with illumos's emulation of epoll, which maps\nepoll onto Sun's /dev/poll driver:\n\nhttps://github.com/illumos/illumos-gate/blob/master/usr/src/lib/libc/port/sys/epoll.c#L230\n\nThat'd explain:\n\n fffffb7fef216f4a ioctl (d, d001, fffffb7fffdfa0e0)\n\nThat matches the value DP_POLL from:\n\nhttps://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/sys/devpoll.h#L44\n\nOr if it's really Solaris, huh, are people moving illumos code back\ninto closed Solaris these days?\n\nAs for why it's hanging, I don't know, but one thing that we changed\nin 14 was that we started using signalfd() to receive latch signals on\nsystems that have it, and illumos also has an emulation of signalfd()\nthat our configure script finds:\n\nhttps://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/signalfd.c\n\nThere were in fact a couple of unexplained hangs on the illumos build\nfarm animals, and then they were changed to use -DWAIT_USE_POLL so\nthat they wouldn't automatically choose epoll()/signalfd(). 
That is\nnot very satisfactory, but as far as I know there is a bug in either\nepoll() or signalfd(), or at least some difference compared to the\nLinux implementation they are emulating. I spent quite a bit of time\nping ponging emails back and forth with the owner of a hanging BF\nanimal trying to get a minimal repro for a bug report, without\nsuccess. I mean, it's possible that the bug is in PostgreSQL (though\nno complaint has ever reached me about this stuff on Linux), but while\ntrying to investigate it a kernel panic happened[1], which I think\ncounts as a point against that theory...\n\n(For what it's worth, WSL1 also emulates these two Linux interfaces\nand also apparently doesn't do so well enough for our purposes, also\nfor reasons not understood by us.)\n\nIn short, I'd recommend -DWAIT_USE_POLL for now. It's possible that\nwe could do something to prevent the selection of WAIT_USE_EPOLL on\nthat platform, or that we should have a halfway option epoll() but not\nsignalfd() (= go back to using the self-pipe trick), patches welcome,\nbut that feels kinda strange and would be a very niche combination that\nisn't fun to maintain... the real solution is to fix the bug.\n\n[1] https://www.illumos.org/issues/13700\n\n\n",
"msg_date": "Sat, 14 May 2022 09:25:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Sat, May 14, 2022 at 9:25 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> In short, I'd recommend -DWAIT_USE_POLL for now. It's possible that\n> we could do something to prevent the selection of WAIT_USE_EPOLL on\n> that platform, or that we should have a halfway option epoll() but not\n> signalfd() (= go back to using the self-pipe trick), patches welcome,\n> but that feels kinda strange and would be very niche combination that\n> isn't fun to maintain... the real solution is to fix the bug.\n\nI felt a bit sad about writing that, so I took a crack at trying to\nwrite a patch that separates the signalfd/self-pipe choice from the\nepoll/poll choice. Maybe it's not too bad.\n\nJapin, are you able to reproduce the problem reliably? Did I guess\nright, that you're on illumos? Does this help? I used\ndefined(__sun__) to select the option, but I don't remember if that's\nthe right way to detect that OS family, could you confirm that, or\nadjust as required?",
"msg_date": "Sat, 14 May 2022 10:25:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Sat, May 14, 2022 at 10:25 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Japin, are you able to reproduce the problem reliably? Did I guess\n> right, that you're on illumos? Does this help? I used\n> defined(__sun__) to select the option, but I don't remember if that's\n> the right way to detect that OS family, could you confirm that, or\n> adjust as required?\n\nBetter version. Now you can independently set -DWAIT_USE_{POLL,EPOLL}\nand -DWAIT_USE_{SELF_PIPE,SIGNALFD} for testing, and it picks a\nsensible default.",
"msg_date": "Sat, 14 May 2022 15:01:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "\nOn Sat, 14 May 2022 at 11:01, Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, May 14, 2022 at 10:25 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Japin, are you able to reproduce the problem reliably? Did I guess\n>> right, that you're on illumos? Does this help? I used\n>> defined(__sun__) to select the option, but I don't remember if that's\n>> the right way to detect that OS family, could you confirm that, or\n>> adjust as required?\n>\n> Better version. Now you can independently set -DWAIT_USE_{POLL,EPOLL}\n> and -DWAIT_USE_{SELF_PIPE,SIGNALFD} for testing, and it picks a\n> sensible default.\n\nSorry for the late reply. My bad! It is actually SmartOS, which is based on illumos.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 14 May 2022 16:42:51 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "\nOn Fri, 13 May 2022 at 22:08, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, May 13, 2022 at 6:16 AM Japin Li <japinli@hotmail.com> wrote:\n>> The process cannot be terminated by pg_terminate_backend(), although\n>> it returns true.\n>\n> pg_terminate_backend() just sends SIGINT. What I'm wondering is what\n> happens when the stuck process receives SIGINT. It would be useful, I\n> think, to check the value of the global variable InterruptHoldoffCount\n> in the stuck process by attaching to it with gdb. I would also try\n> running \"strace -p $PID\" on the stuck process and then try terminating\n> it again with pg_terminate_backend(). Either the system call in which\n> it's currently stuck returns and then it makes the same system call\n> again and hangs again ... or the signal doesn't dislodge it from the\n> system call in which it's stuck in the first place. It would be useful\n> to know which of those two things is happening.\n>\n> One thing I find a bit curious is that the top of the stack in your\n> case is ioctl(). And there are no calls to ioctl() anywhere in\n> latch.c, nor have there ever been. What operating system is this? We\n> have 4 different versions of WaitEventSetWaitBlock() that call\n> epoll_wait(), kevent(), poll(), and WaitForMultipleObjects()\n> respectively. I wonder which of those we're using, and whether one of\n> those calls is showing up as ioctl() in the stacktrace, or whether\n> there's some other function being called in here that is somehow\n> resulting in ioctl() getting called.\n\nThanks for your advice. I will try this on Monday.\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sat, 14 May 2022 16:49:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "\nOn Sat, 14 May 2022 at 11:01, Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, May 14, 2022 at 10:25 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Japin, are you able to reproduce the problem reliably? Did I guess\n>> right, that you're on illumos? Does this help? I used\n>> defined(__sun__) to select the option, but I don't remember if that's\n>> the right way to detect that OS family, could you confirm that, or\n>> adjust as required?\n>\n> Better version. Now you can independently set -DWAIT_USE_{POLL,EPOLL}\n> and -DWAIT_USE_{SELF_PIPE,SIGNALFD} for testing, and it picks a\n> sensible default.\n\nThanks for your patch! The illumos already defined the following macros.\n\n$ gcc -dM -E - </dev/null | grep -e 'illumos' -e 'sun'\n#define __sun 1\n#define __illumos__ 1\n#define sun 1\n#define __sun__ 1\n\nMaybe use the __illumos__ macro more accurity.\n\n+#elif defined(WAIT_USE_EPOLL) && defined(HAVE_SYS_SIGNALFD_H) && \\\n+ !defined(__sun__)\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Mon, 16 May 2022 11:45:35 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Mon, May 16, 2022 at 3:45 PM Japin Li <japinli@hotmail.com> wrote:\n> Maybe use the __illumos__ macro more accurity.\n>\n> +#elif defined(WAIT_USE_EPOLL) && defined(HAVE_SYS_SIGNALFD_H) && \\\n> + !defined(__sun__)\n\nThanks, updated, and with a new commit message.\n\nI don't know much about these OSes (though I used lots of Sun machines\nduring the Jurassic period). I know that there are three\ndistributions of illumos: OmniOS, SmartOS and OpenIndiana, and they\nshare the same kernel and base system. The off-list reports I\nreceived about hangs and kernel panics were from OpenIndiana animals\nhake and haddock, which are not currently reporting (I'll ask why),\nand then their owner defined -DWAIT_USE_POLL to clear that up while we\nwaited for progress on his kernel panic bug report. I see that OmniOS\nanimal pollock is currently reporting and also uses -DWAIT_USE_POLL,\nbut I couldn't find any discussion about that.\n\nOf course, you might be hitting some completely different problem,\ngiven the lack of information. I'd be interested in the output of \"p\n*MyLatch\" (= to see if the latch has already been set), and whether\n\"kill -URG PID\" dislodges the stuck process. But given the open\nkernel bug report that I've now been reminded of, I'm thinking about\npushing this anyway. Then we could ask the animal owners to remove\n-DWAIT_USE_POLL so that they'd effectively be running with\n-DWAIT_USE_EPOLL and -DWAIT_USE_SELF_PIPE, which would be more like\nPostgreSQL 13, but people who want to reproduce the problem on the\nillumos side could build with -DWAIT_USE_SIGNALFD.",
"msg_date": "Tue, 17 May 2022 15:31:24 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Tue, May 17, 2022 at 3:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, May 16, 2022 at 3:45 PM Japin Li <japinli@hotmail.com> wrote:\n> > Maybe use the __illumos__ macro more accurity.\n> >\n> > +#elif defined(WAIT_USE_EPOLL) && defined(HAVE_SYS_SIGNALFD_H) && \\\n> > + !defined(__sun__)\n>\n> Thanks, updated, and with a new commit message.\n\nPushed to master and REL_14_STABLE.\n\nI'll email the illumos build farm animal owners to say that they\nshould be able to remove -DWAIT_USE_POLL.\n\nTheoretically, it might be useful that we've separated the\nWAIT_USE_SELF_PIPE code from WAIT_USE_POLL if someone eventually wants\nto complete the set of possible WaitEventSet implementations by adding\n/dev/poll (Solaris, HPUX) and pollset (AIX) support. I don't think\nthose have a nicer way to receive race-free signal wakeups.\nRealistically no one's likely to show up with a patch for those old\nproprietary Unixen at this point on the timeline, I just think it's\ninteresting that every OS had something better than poll(), we just\nneed that fallback for lack of patches, not lack of kernel features.\nIronically the typical monster AIX systems I've run into in the wild\nare probably much more capable of suffering from poll() contention\nthan all these puny x86 systems, with oodles of CPUs and NUMA nodes.\nIf someone *is* still interested in scalability on AIX, I'd recommend\nlooking at pollset for latch.c, and also the stalled huge pages\nthing[1].\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJE4dq%2BNZHrm%3DpNSNCYwDCH%2BT6HtaWm5Lm8vZzygknPpA%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 26 Jun 2022 11:18:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Sun, Jun 26, 2022 at 11:18 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, May 17, 2022 at 3:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Mon, May 16, 2022 at 3:45 PM Japin Li <japinli@hotmail.com> wrote:\n> > > Maybe use the __illumos__ macro more accurity.\n> > >\n> > > +#elif defined(WAIT_USE_EPOLL) && defined(HAVE_SYS_SIGNALFD_H) && \\\n> > > + !defined(__sun__)\n> >\n> > Thanks, updated, and with a new commit message.\n>\n> Pushed to master and REL_14_STABLE.\n\nFTR: I noticed that https://www.illumos.org/issues/13700 had been\nmarked fixed, so I asked if we should remove our check[1]. Nope,\nanother issue was opened at https://www.illumos.org/issues/14892,\nwhich I'll keep an eye on. It seems we're pretty good at hitting\npoll/event-related kernel bugs in various OSes.\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ab4fc5dcf30ebc90a23ad878342dc528e2d25ce\n\n\n",
"msg_date": "Sun, 28 Aug 2022 11:03:04 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 11:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jun 26, 2022 at 11:18 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Tue, May 17, 2022 at 3:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > On Mon, May 16, 2022 at 3:45 PM Japin Li <japinli@hotmail.com> wrote:\n> > > > Maybe use the __illumos__ macro more accurity.\n> > > >\n> > > > +#elif defined(WAIT_USE_EPOLL) && defined(HAVE_SYS_SIGNALFD_H) && \\\n> > > > + !defined(__sun__)\n> > >\n> > > Thanks, updated, and with a new commit message.\n> >\n> > Pushed to master and REL_14_STABLE.\n>\n> FTR: I noticed that https://www.illumos.org/issues/13700 had been\n> marked fixed, so I asked if we should remove our check[1]. Nope,\n> another issue was opened at https://www.illumos.org/issues/14892,\n> which I'll keep an eye on. It seems we're pretty good at hitting\n> poll/event-related kernel bugs in various OSes.\n\nI happened to notice in the release notes for OmniOS that Stephen\nposted in the nearby GSSAPI thread that this has now been fixed. I\nthink there's no point in changing the back branches (hard to\nsynchronise with kernel upgrades), but I also don't want to leave this\nweird wart in the code forever. Shall we remove it in 16? I don't\npersonally care if it's 16 or 17, but I wanted to make a note about\nthe cleanup opportunity either way, and will add this to the open\ncommitfest.",
"msg_date": "Fri, 14 Apr 2023 09:49:14 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Backends stunk in wait event IPC/MessageQueueInternal"
}
] |
[
{
"msg_contents": "Hi,\nw.r.t. v33-0001-Remove-self-joins.patch :\n\nremoves inner join of plane table -> removes inner join of plain table\n\nin an query plan -> in a query plan\n\n+ * Used as the relation_has_unique_index_for,\n\nSince relation_has_unique_index_for() becomes a wrapper of\nrelation_has_unique_index_ext, the above sentence doesn't make much sense.\nI think you can drop this part.\n\nbut if extra_clauses doesn't NULL -> If extra_clauses isn't NULL\n\n+ is_req_equal =\n+ (rinfo->required_relids == rinfo->clause_relids) ? true : false;\n\nThe above can be simplified to:\n is_req_equal = rinfo->required_relids == rinfo->clause_relids;\n\n+ ListCell *otherCell;\notherCell should be initialized to NULL.\n\n+ if (bms_is_member(k, info->syn_lefthand) &&\n+ !bms_is_member(r, info->syn_lefthand))\n+ jinfo_check = false;\n+ else if (bms_is_member(k, info->syn_righthand) &&\n+ !bms_is_member(r, info->syn_righthand))\n+ jinfo_check = false;\n+ else if (bms_is_member(r, info->syn_lefthand) &&\n+ !bms_is_member(k, info->syn_lefthand))\n+ jinfo_check = false;\n\nI think the above code can be simplified:\n\nIf bms_is_member(k, info->syn_lefthand) ^ bms_is_member(r,\ninfo->syn_lefthand) is true, jinfo_check is false.\nIf bms_is_member(k, info->syn_righthand) ^ bms_is_member(r,\ninfo->syn_righthand) is true, jinfo_check is false.\nOtherwise jinfo_check is true.\n\nCheers",
"msg_date": "Fri, 13 May 2022 15:42:18 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: Removing unneeded self joins"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nAttached is a draft of the release announcement for the PostgreSQL 15 \r\nBeta 1 release. The goal of this announcement is to raise awareness \r\naround many of the new features appearing in PostgreSQL 15 and to \r\nencourage people to test. The success of the PostgreSQL 15 GA depends \r\nheavily on people testing during the Beta period!\r\n\r\nPlease review this announcement for feature description accuracy or if \r\nthere is something omitted that should be highlighted. Note that we \r\ncannot highlight everything that is coming in PostgreSQL 15 (that is why \r\nwe have the release notes), but are aiming to showcase features that are \r\nimpactful and inspirational.\r\n\r\nPlease provide feedback no later than 2022-05-19 0:00 AoE[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Sat, 14 May 2022 14:52:35 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Sat, May 14, 2022 at 02:52:35PM -0400, Jonathan S. Katz wrote:\n> PostgreSQL 15 is made generally available, thouh some details of the release can\n\nthough\n\n> a SQL standard command for conditionally perform write operations (`INSERT`,\n\nperforming\n\n> he [`range_agg`](https://www.postgresql.org/docs/15/functions-aggregate.html)\n\nThe\n\n> PostgreSQL system and [TOAST](https://www.postgresql.org/docs/15/storage-toast.html)\n> tables, used for storing data that is larger than a single page (8kB), can now\n> utilize\n> [index deduplication](https://www.postgresql.org/docs/15/btree-implementation.html#BTREE-DEDUPLICATION)\n> and benefit from smaller indexes and faster lookups.\n\nIMO this doesn't need to be listed.\n\n> `pg_basebackup` client can now also decompress backups that use LZ4 an Zstandard\n\nand\n\n> Write-ahead log (WAL) files can now be compressed using both LZ4 an Zstandard\n\nand\n\n> configuration parameter. Additionally, PostgreSQL 15 also adds the\n> [`recovery_prefetch`](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n\nremove \"the\" or add \"option\" ?\n\n> PostgreSQL 15 makes it possible to skip applying changes using the\n> [`ALTER SUBSCRIPTION ... SKIP`](https://www.postgresql.org/docs/15/sql-altersubscription.html).\n\nadd \"command\".\n\n> PostgreSQL 15 introduces the\n> [`jsonlog` format for logging](https://www.postgresql.org/docs/15/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-JSONLOG). This allows PostgreSQL logs to be consumed by many programs\n> that perform structured logging aggregation and analysis. PostgreSQL 15 now by\n\nlog aggregation?\n\n> default logs checkpoints and slow autovacuum operations.\n\n> PostgreSQL 15 adds support for\n> \"[security invoker views](https://www.postgresql.org/docs/15/sql-createview.html)\",\n> which users the privileges of the user executing the query instead of the user\n\nuses\n\n\n",
"msg_date": "Sat, 14 May 2022 14:26:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Sun, May 15, 2022 at 12:22 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> Please provide feedback no later than 2022-05-19 0:00 AoE[1].\n>\n\n> [`recovery_prefetch`](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n> that can help speed up all recovery operations by prefetching data blocks.\n\nIs it okay to say that this feature speeds up *all* recovery\noperations? See the discussion between Simon and Tomas [1] related to\nthis.\n\n[1] - https://www.postgresql.org/message-id/3f4d65c8-ad61-9f57-d5ad-6c1ea841e471%40enterprisedb.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 16 May 2022 08:28:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On 5/15/22 10:58 PM, Amit Kapila wrote:\r\n> On Sun, May 15, 2022 at 12:22 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>>\r\n>> Please provide feedback no later than 2022-05-19 0:00 AoE[1].\r\n>>\r\n> \r\n>> [`recovery_prefetch`](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\r\n>> that can help speed up all recovery operations by prefetching data blocks.\r\n> \r\n> Is it okay to say that this feature speeds up *all* recovery\r\n> operations? See the discussion between Simon and Tomas [1] related to\r\n> this.\r\n\r\nI'll <strike>all</strike> to hedge.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 17 May 2022 08:50:10 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "Thanks everyone for the feedback. As per usual, I did a `MERGE` based on \r\nthe suggestions.\r\n\r\nI provided credits in press.git. Here is v2 of the draft.\r\n\r\nPlease provide any feedback no later than 2022-05-19 0:00 AoE.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 17 May 2022 08:58:14 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Tue, May 17, 2022 at 08:58:14AM -0500, Jonathan S. Katz wrote:\n> PostgreSQL 15 adds [more regular expression functions](https://www.postgresql.org/docs/15/functions-matching.html#FUNCTIONS-POSIX-REGEXP),\n> including `regexp_count` , `regexp_instr`, `regexp_like`, and `regexp_substr`.\n> the [`range_agg`](https://www.postgresql.org/docs/15/functions-aggregate.html)\n> function, introduced in PostgreSQL 15 for aggregating\n\nCapital The\n\n> the `work_mem` parameter. Early benchmarks show that these sorts may see on\n> average an 2x speedup for these workloads on PostgreSQL 15.\n\nMaybe remove \"for these workloads\".\n\n> Write-ahead log (WAL) files can now be compressed using both LZ4 and Zstandard\n\nNow that I re-read it, I suppose this should say \"either .. or\" ...\n\n\n",
"msg_date": "Tue, 17 May 2022 09:49:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Wed, May 18, 2022 at 1:50 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 5/15/22 10:58 PM, Amit Kapila wrote:\n> > On Sun, May 15, 2022 at 12:22 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> >> Please provide feedback no later than 2022-05-19 0:00 AoE[1].\n> >\n> >> [`recovery_prefetch`](https://www.postgresql.org/docs/15/runtime-config-wal.html#GUC-RECOVERY-PREFETCH)\n> >> that can help speed up all recovery operations by prefetching data blocks.\n> >\n> > Is it okay to say that this feature speeds up *all* recovery\n> > operations? See the discussion between Simon and Tomas [1] related to\n> > this.\n>\n> I'll <strike>all</strike> to hedge.\n\n+1, thanks.\n\n\n",
"msg_date": "Thu, 19 May 2022 12:32:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Sat, May 14, 2022 at 2:52 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> Hi,\n>\n> Attached is a draft of the release announcement for the PostgreSQL 15\n> Beta 1 release. The goal of this announcement is to raise awareness\n> around many of the new features appearing in PostgreSQL 15 and to\n> encourage people to test. The success of the PostgreSQL 15 GA depends\n> heavily on people testing during the Beta period!\n>\n\nI have some belated feedback. I was excited to try this on Windows (I\ndon't have a build system for that) and so followed the first link in the\nmessage, to https://www.postgresql.org/download/. At first glance there is\nnothing about beta there, but there is a prominent Windows icon so I\nclick on that. And then to EDB, but there is no apparent way to download\nbeta, just the released versions. I poked around EDB a bit but didn't find\nanything promising, then backed out of all that poking around and\neventually all the way back to /download, where I scrolled down and finally\nfound the link to https://www.postgresql.org/download/snapshots/ which\ntells me what I need to know. But at this point I was more annoyed than\nexcited.\n\nAn invitation to download the beta should take me directly to the page\nrelevant to doing that. I shouldn't have to read the page backwards, or do\na breadth-first traversal, to get to the right place efficiently. People\nwill click on the first link which seems relevant, and \"Windows\" on the\ngeneric download page certainly seems relevant to Beta for Windows, until\nafter you have scrolled down to find the beta/RC specific link instead. 
(I\nnow recall being annoyed by this in a prior year as well, I guess I have a\nbad memory for avoiding mistakes but a good memory for recalling them).\nAlso, the download page should probably say \"binary packages and\ninstallers\" where it currently says \"There are source code and binary\npackages of beta and release candidates\", although I guess that is not\nabout the announcement itself.\n\nCheers,\n\nJeff",
"msg_date": "Tue, 24 May 2022 00:57:02 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On 5/24/22 12:57 AM, Jeff Janes wrote:\r\n> On Sat, May 14, 2022 at 2:52 PM Jonathan S. Katz <jkatz@postgresql.org \r\n> <mailto:jkatz@postgresql.org>> wrote:\r\n> \r\n> Hi,\r\n> \r\n> Attached is a draft of the release announcement for the PostgreSQL 15\r\n> Beta 1 release. The goal of this announcement is to raise awareness\r\n> around many of the new features appearing in PostgreSQL 15 and to\r\n> encourage people to test. The success of the PostgreSQL 15 GA depends\r\n> heavily on people testing during the Beta period!\r\n> \r\n> \r\n> I have some belated feedback. I was excited to try this on Windows (I \r\n> don't have a build system for that) and so followed the first link in \r\n> the message, to https://www.postgresql.org/download/ \r\n> <https://www.postgresql.org/download/>. At first glance there is \r\n> nothing about beta there, but there is a prominent Windows icon so I \r\n> click on that. And then to EDB, but there is no apparent way to \r\n> download beta, just the released versions. I poked around EDB a bit but \r\n> didn't find anything promising, then backed out of all that poking \r\n> around and eventually all the way back to /download, where I scrolled \r\n> down and finally found the link to \r\n> https://www.postgresql.org/download/snapshots/ \r\n> <https://www.postgresql.org/download/snapshots/> which tells me what I \r\n> need to know. But at this point I was more annoyed than excited.\r\n> \r\n> An invitation to download the beta should take me directly to the page \r\n> relevant to doing that. I shouldn't have to read the page backwards, or \r\n> do a breadth-first traversal, to get to the right place efficiently. \r\n> People will click on the first link which seems relevant, and \"Windows\" \r\n> on the generic download page certainly seems relevant to Beta for \r\n> Windows, until after you have scrolled down to find the beta/RC specific \r\n> link instead. 
(I now recall being annoyed by this in a prior year as \r\n> well, I guess I have a bad memory for avoiding mistakes but a \r\n> good memory for recalling them). Also, the download page should \r\n> probably say \"binary packages and installers\" where it currently \r\n> says \"There are source code and binary packages of beta and release \r\n> candidates\", although I guess that is not about the announcement itself.\r\n> Cheers,\r\n\r\nSome of the community installers are outside the platform (e.g. \r\nWindows/Mac) and the timing of the releases of this around the beta \r\ncannot be fully accounted for in the announcement.\r\n\r\nHowever, I can relay your feedback to the packagers of these installers, \r\nand I suggest you do the same.\r\n\r\nWith regards to the additional feedback, we recently did some measured \r\nexperiments around changes to the flow of the download page that, first \r\nand foremost, brought people to their desired installers for stable \r\nversions. The results from those experiments showed an appropriate \r\nuptick on all accounts (which should be in a thread on -www).\r\n\r\nWe could perhaps tidy up the downloads page to make downloading \r\nBetas/RCs more clear, but that's a discussion for -www.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 24 May 2022 09:33:14 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Tue, May 24, 2022 at 9:33 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> However, I can relay your feedback to the packagers of these installers,\n> and I suggest you do the same.\n\nIsn't the issue that https://www.postgresql.org/download/windows/\ndoesn't have the link to the right place on the EDB site?\n\nhttps://www.postgresql.org/download/snapshots/ has a link to\nhttps://www.enterprisedb.com/products-services-training/pgdevdownload\nbut https://www.postgresql.org/download/windows/ does not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 10:38:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft"
},
{
"msg_contents": "On Tue, May 24, 2022 at 12:57:02AM -0400, Jeff Janes wrote:\n> I have some belated feedback. I was excited to try this on Windows (I don't\n> have a build system for that)\n\nThis is unrelated to beta1, but someone (Thomas Munro?) had the idea to allow\nretrieving the windows binaries built by cirrus ci. This is untested; if you\ntry it, I'd be interested to know how it works.\n\nhttps://github.com/justinpryzby/postgres/commit/a014c1ed59b\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 26 May 2022 20:54:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 15 Beta 1 release announcement draft (windows)"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAt function load_relcache_init_file, there is an unnecessary function call,\nto initialize pgstat_info pointer to NULL.\n\nMemSet(&rel->pgstat_info, 0, sizeof(rel->pgstat_info));\n\nI think that intention with use of MemSet was:\nMemSet(&rel->pgstat_info, 0, sizeof(*rel->pgstat_info));\n\nInitialize with sizeof of Struct size, not with sizeof pointer size.\nBut so it breaks.\n\nAttached a tiny patch.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 14 May 2022 18:46:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Sun, 15 May 2022 at 09:47, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> At function load_relcache_init_file, there is an unnecessary function call,\n> to initialize pgstat_info pointer to NULL.\n>\n> MemSet(&rel->pgstat_info, 0, sizeof(rel->pgstat_info));\n\nWhat seems to have happened here is the field was changed to become a\npointer in 77947c51c. It's not incorrect to use MemSet() to zero out\nthe pointer field. What it does probably do is confuse the casual\nreader into thinking the field is a struct rather than a pointer to\none. It's probably worth making that consistent with the other\nfields so nobody gets confused.\n\nCan you add a CF entry for PG16 for this so we come back to it after we branch?\n\nDavid\n\n\n",
"msg_date": "Tue, 17 May 2022 11:26:04 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em seg., 16 de mai. de 2022 às 20:26, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Sun, 15 May 2022 at 09:47, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > At function load_relcache_init_file, there is an unnecessary function\n> call,\n> > to initialize pgstat_info pointer to NULL.\n> >\n> > MemSet(&rel->pgstat_info, 0, sizeof(rel->pgstat_info));\n>\n> What seems to have happened here is the field was changed to become a\n> pointer in 77947c51c. It's not incorrect to use MemSet() to zero out\n> the pointer field. What it does probably do is confuse the casual\n> reader into thinking the field is a struct rather than a pointer to\n> one. It's probably worth making that consistent with the other\n> fields so nobody gets confused.\n>\n> Can you add a CF entry for PG16 for this so we come back to it after we\n> branch?\n>\nOf course.\nI will add it.\n\nregards,\nRanier Vilela\n\nEm seg., 16 de mai. de 2022 às 20:26, David Rowley <dgrowleyml@gmail.com> escreveu:On Sun, 15 May 2022 at 09:47, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> At function load_relcache_init_file, there is an unnecessary function call,\n> to initialize pgstat_info pointer to NULL.\n>\n> MemSet(&rel->pgstat_info, 0, sizeof(rel->pgstat_info));\n\nWhat seems to have happened here is the field was changed to become a\npointer in 77947c51c. It's not incorrect to use MemSet() to zero out\nthe pointer field. What it does probably do is confuse the casual\nreader into thinking the field is a struct rather than a pointer to\none. It's probably worth making that consistent with the other\nfields so nobody gets confused.\n\nCan you add a CF entry for PG16 for this so we come back to it after we branch?Of course.\nI will add it. regards,Ranier Vilela",
"msg_date": "Tue, 17 May 2022 10:33:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em ter., 17 de mai. de 2022 às 10:33, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em seg., 16 de mai. de 2022 às 20:26, David Rowley <dgrowleyml@gmail.com>\n> escreveu:\n>\n>> On Sun, 15 May 2022 at 09:47, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> > At function load_relcache_init_file, there is an unnecessary function\n>> call,\n>> > to initialize pgstat_info pointer to NULL.\n>> >\n>> > MemSet(&rel->pgstat_info, 0, sizeof(rel->pgstat_info));\n>>\n>> What seems to have happened here is the field was changed to become a\n>> pointer in 77947c51c. It's not incorrect to use MemSet() to zero out\n>> the pointer field. What it does probably do is confuse the casual\n>> reader into thinking the field is a struct rather than a pointer to\n>> one. It's probably worth making that consistent with the other\n>> fields so nobody gets confused.\n>>\n>> Can you add a CF entry for PG16 for this so we come back to it after we\n>> branch?\n>>\n> Of course.\n> I will add it.\n>\nCreated https://commitfest.postgresql.org/38/3640/\nHowever, I would like to add more.\nI found, I believe, a serious problem of incorrect usage of the memset api.\nHistorically, people have relied on using memset or MemSet, using the\nvariable name as an argument for the sizeof.\nWhile it works correctly, for arrays, when it comes to pointers to\nstructures, things go awry.\n\n#include <stdio.h>\n\nstruct test_t\n{\n double b;\n int a;\n char c;\n};\n\ntypedef struct test_t Test;\n\nint main()\n{\n Test * my_test;\n\n printf(\"Sizeof pointer=%u\\n\", sizeof(my_test));\n printf(\"Sizeof struct=%u\\n\", sizeof(Test));\n}\n\nOutput:\nSizeof pointer=8\nSizeof struct=16\n\nSo throughout the code there are these misuses.\n\nSo, taking advantage of this CF I'm going to add one more big patch, with\nsuggestions to fix the calls.\nThis pass vcregress check.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 17 May 2022 19:52:30 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Tue, May 17, 2022 at 07:52:30PM -0300, Ranier Vilela wrote:\n> I found, I believe, a serious problem of incorrect usage of the memset api.\n> Historically, people have relied on using memset or MemSet, using the\n> variable name as an argument for the sizeof.\n> While it works correctly, for arrays, when it comes to pointers to\n> structures, things go awry.\n\nKnowing how sizeof() works is required before using it - the same is true for\npointers.\n\n> So throughout the code there are these misuses.\n\nWhy do you think it's a misuse ?\n\nTake the first one as an example. It says:\n\n GenericCosts costs;\n MemSet(&costs, 0, sizeof(costs));\n\nYou sent a patch to change it to sizeof(GenericCosts).\n\nBut it's not a pointer, so they are the same.\n\nIs that true for every change in your patch ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 May 2022 18:18:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> I found, I believe, a serious problem of incorrect usage of the memset api.\n> Historically, people have relied on using memset or MemSet, using the\n> variable name as an argument for the sizeof.\n> While it works correctly, for arrays, when it comes to pointers to\n> structures, things go awry.\n\nYou'll have to convince people that any of these places are in\nfact incorrect. Everyone who's used C for any length of time\nis well aware of the possibility of getting sizeof() wrong in\nthis sort of context, and I think we've been careful about it.\n\nAlso, as a stylistic matter I think it's best to write\n\"memset(&x, 0, sizeof(x))\" where we can. Replacing sizeof(x)\nwith sizeof(some type name) has its own risks of error, and\ntherefore is not automatically an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 19:22:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em ter., 17 de mai. de 2022 às 20:18, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Tue, May 17, 2022 at 07:52:30PM -0300, Ranier Vilela wrote:\n> > I found, I believe, a serious problem of incorrect usage of the memset\n> api.\n> > Historically, people have relied on using memset or MemSet, using the\n> > variable name as an argument for the sizeof.\n> > While it works correctly, for arrays, when it comes to pointers to\n> > structures, things go awry.\n>\n> Knowing how sizeof() works is required before using it - the same is true\n> for\n> pointers.\n>\n> > So throughout the code there are these misuses.\n>\n> Why do you think it's a misuse ?\n>\n> Take the first one as an example. It says:\n>\n> GenericCosts costs;\n> MemSet(&costs, 0, sizeof(costs));\n>\n> You sent a patch to change it to sizeof(GenericCosts).\n>\n> But it's not a pointer, so they are the same.\n>\n> Is that true for every change in your patch ?\n>\nIt seems true, sorry.\nThanks Justin for pointing out my big mistake.\n\nI hope this isn't all wasted work, but should I remove the 002 patch.\n\nregards,\nRanier Vilela\n\nEm ter., 17 de mai. de 2022 às 20:18, Justin Pryzby <pryzby@telsasoft.com> escreveu:On Tue, May 17, 2022 at 07:52:30PM -0300, Ranier Vilela wrote:\n> I found, I believe, a serious problem of incorrect usage of the memset api.\n> Historically, people have relied on using memset or MemSet, using the\n> variable name as an argument for the sizeof.\n> While it works correctly, for arrays, when it comes to pointers to\n> structures, things go awry.\n\nKnowing how sizeof() works is required before using it - the same is true for\npointers.\n\n> So throughout the code there are these misuses.\n\nWhy do you think it's a misuse ?\n\nTake the first one as an example. 
It says:\n\n GenericCosts costs;\n MemSet(&costs, 0, sizeof(costs));\n\nYou sent a patch to change it to sizeof(GenericCosts).\n\nBut it's not a pointer, so they are the same.\n\nIs that true for every change in your patch ?It seems true, sorry.Thanks Justin for pointing out my big mistake.I hope this isn't all wasted work, but should I remove the 002 patch.regards,Ranier Vilela",
"msg_date": "Tue, 17 May 2022 21:08:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "This one caught my attention:\n\ndiff --git a/contrib/pgcrypto/crypt-blowfish.c b/contrib/pgcrypto/crypt-blowfish.c\nindex a663852ccf..63fcef562d 100644\n--- a/contrib/pgcrypto/crypt-blowfish.c\n+++ b/contrib/pgcrypto/crypt-blowfish.c\n@@ -750,7 +750,7 @@ _crypt_blowfish_rn(const char *key, const char *setting,\n /* Overwrite the most obvious sensitive data we have on the stack. Note\n * that this does not guarantee there's no sensitive data left on the\n * stack and/or in registers; I'm not aware of portable code that does. */\n-\tpx_memset(&data, 0, sizeof(data));\n+\tpx_memset(&data, 0, sizeof(struct data));\n \n \treturn output;\n }\n\nThe curious thing here is that sizeof(data) is correct, because it\nrefers to a variable defined earlier in that function, whose type is an\nanonymous struct declared there. But I don't know what \"struct data\"\nrefers to, precisely because that struct is unnamed. Am I misreading it?\n\n\nAlso:\n\ndiff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c\nindex e1048e47ff..87be62f023 100644\n--- a/contrib/pgstattuple/pgstatindex.c\n+++ b/contrib/pgstattuple/pgstatindex.c\n@@ -601,7 +601,7 @@ pgstathashindex(PG_FUNCTION_ARGS)\n \t\t\t\t errmsg(\"cannot access temporary indexes of other sessions\")));\n \n \t/* Get the information we need from the metapage. */\n-\tmemset(&stats, 0, sizeof(stats));\n+\tmemset(&stats, 0, sizeof(HashIndexStat));\n \tmetabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);\n \tmetap = HashPageGetMeta(BufferGetPage(metabuf));\n \tstats.version = metap->hashm_version;\n\nI think the working theory here is that the original line is correct\nnow, and it continues to be correct if somebody edits the function and\nmakes variable 'stats' be of a different type. 
But if you change the\nsizeof() to use the type name, then there are two places that you need\nto edit, and they are not necessarily close together; so it is correct\nnow and could become a bug in the future. I don't think we're fully\nconsistent about this, but I think you're proposing to change it in the\nopposite direction that we'd prefer.\n\nFor the case where the variable is a pointer, the developer could write\n'sizeof(*variable)' instead of being forced to specify the type name,\nfor example (just a random one):\n\ndiff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c\nindex a434cf93ef..e92c03686f 100644\n--- a/contrib/bloom/blutils.c\n+++ b/contrib/bloom/blutils.c\n@@ -438,7 +438,7 @@ BloomFillMetapage(Relation index, Page metaPage)\n \t */\n \tBloomInitPage(metaPage, BLOOM_META);\n \tmetadata = BloomPageGetMeta(metaPage);\n-\tmemset(metadata, 0, sizeof(BloomMetaPageData));\n+\tmemset(metadata, 0, sizeof(*metadata));\n \tmetadata->magickNumber = BLOOM_MAGICK_NUMBER;\n \tmetadata->opts = *opts;\n \t((PageHeader) metaPage)->pd_lower += sizeof(BloomMetaPageData);\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 18 May 2022 10:54:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qua., 18 de mai. de 2022 às 05:54, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> This one caught my attention:\n>\n> diff --git a/contrib/pgcrypto/crypt-blowfish.c\n> b/contrib/pgcrypto/crypt-blowfish.c\n> index a663852ccf..63fcef562d 100644\n> --- a/contrib/pgcrypto/crypt-blowfish.c\n> +++ b/contrib/pgcrypto/crypt-blowfish.c\n> @@ -750,7 +750,7 @@ _crypt_blowfish_rn(const char *key, const char\n> *setting,\n> /* Overwrite the most obvious sensitive data we have on the stack. Note\n> * that this does not guarantee there's no sensitive data left on the\n> * stack and/or in registers; I'm not aware of portable code that does. */\n> - px_memset(&data, 0, sizeof(data));\n> + px_memset(&data, 0, sizeof(struct data));\n>\n> return output;\n> }\n>\n> The curious thing here is that sizeof(data) is correct, because it\n> refers to a variable defined earlier in that function, whose type is an\n> anonymous struct declared there. But I don't know what \"struct data\"\n> refers to, precisely because that struct is unnamed. Am I misreading it?\n>\n No, you are right.\nThis is definitely wrong.\n\n\n>\n> Also:\n>\n> diff --git a/contrib/pgstattuple/pgstatindex.c\n> b/contrib/pgstattuple/pgstatindex.c\n> index e1048e47ff..87be62f023 100644\n> --- a/contrib/pgstattuple/pgstatindex.c\n> +++ b/contrib/pgstattuple/pgstatindex.c\n> @@ -601,7 +601,7 @@ pgstathashindex(PG_FUNCTION_ARGS)\n> errmsg(\"cannot access temporary indexes\n> of other sessions\")));\n>\n> /* Get the information we need from the metapage. 
*/\n> - memset(&stats, 0, sizeof(stats));\n> + memset(&stats, 0, sizeof(HashIndexStat));\n> metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ,\n> LH_META_PAGE);\n> metap = HashPageGetMeta(BufferGetPage(metabuf));\n> stats.version = metap->hashm_version;\n>\n> I think the working theory here is that the original line is correct\n> now, and it continues to be correct if somebody edits the function and\n> makes variable 'stats' be of a different type. But if you change the\n> sizeof() to use the type name, then there are two places that you need\n> to edit, and they are not necessarily close together; so it is correct\n> now and could become a bug in the future. I don't think we're fully\n> consistent about this, but I think you're proposing to change it in the\n> opposite direction that we'd prefer.\n>\nYes. I think that only advantage using the name of structure is\nwhen you read the line of MemSet, you know what kind type\nis filled.\n\n\n> For the case where the variable is a pointer, the developer could write\n> 'sizeof(*variable)' instead of being forced to specify the type name,\n> for example (just a random one):\n>\nCould have used this style to make the patch.\nBut the intention was to correct a possible misinterpretation,\nwhich in this case, showed that I was totally wrong.\n\nSorry by the noise.\n\nregards,\nRanier Vilela\n\nEm qua., 18 de mai. de 2022 às 05:54, Alvaro Herrera <alvherre@alvh.no-ip.org> escreveu:This one caught my attention:\n\ndiff --git a/contrib/pgcrypto/crypt-blowfish.c b/contrib/pgcrypto/crypt-blowfish.c\nindex a663852ccf..63fcef562d 100644\n--- a/contrib/pgcrypto/crypt-blowfish.c\n+++ b/contrib/pgcrypto/crypt-blowfish.c\n@@ -750,7 +750,7 @@ _crypt_blowfish_rn(const char *key, const char *setting,\n /* Overwrite the most obvious sensitive data we have on the stack. Note\n * that this does not guarantee there's no sensitive data left on the\n * stack and/or in registers; I'm not aware of portable code that does. 
*/\n- px_memset(&data, 0, sizeof(data));\n+ px_memset(&data, 0, sizeof(struct data));\n\n return output;\n }\n\nThe curious thing here is that sizeof(data) is correct, because it\nrefers to a variable defined earlier in that function, whose type is an\nanonymous struct declared there. But I don't know what \"struct data\"\nrefers to, precisely because that struct is unnamed. Am I misreading it? No, you are right.This is definitely wrong.\n\n\nAlso:\n\ndiff --git a/contrib/pgstattuple/pgstatindex.c b/contrib/pgstattuple/pgstatindex.c\nindex e1048e47ff..87be62f023 100644\n--- a/contrib/pgstattuple/pgstatindex.c\n+++ b/contrib/pgstattuple/pgstatindex.c\n@@ -601,7 +601,7 @@ pgstathashindex(PG_FUNCTION_ARGS)\n errmsg(\"cannot access temporary indexes of other sessions\")));\n\n /* Get the information we need from the metapage. */\n- memset(&stats, 0, sizeof(stats));\n+ memset(&stats, 0, sizeof(HashIndexStat));\n metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);\n metap = HashPageGetMeta(BufferGetPage(metabuf));\n stats.version = metap->hashm_version;\n\nI think the working theory here is that the original line is correct\nnow, and it continues to be correct if somebody edits the function and\nmakes variable 'stats' be of a different type. But if you change the\nsizeof() to use the type name, then there are two places that you need\nto edit, and they are not necessarily close together; so it is correct\nnow and could become a bug in the future. I don't think we're fully\nconsistent about this, but I think you're proposing to change it in the\nopposite direction that we'd prefer.Yes. I think that only advantage using the name of structure iswhen you read the line of MemSet, you know what kind typeis filled. 
\n\nFor the case where the variable is a pointer, the developer could write\n'sizeof(*variable)' instead of being forced to specify the type name,\nfor example (just a random one):Could have used this style to make the patch.But the intention was to correct a possible misinterpretation, which in this case, showed that I was totally wrong.Sorry by the noise.regards,Ranier Vilela",
"msg_date": "Wed, 18 May 2022 08:36:29 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 18.05.22 01:18, Justin Pryzby wrote:\n> Take the first one as an example. It says:\n> \n> GenericCosts costs;\n> MemSet(&costs, 0, sizeof(costs));\n> \n> You sent a patch to change it to sizeof(GenericCosts).\n> \n> But it's not a pointer, so they are the same.\n\nThis instance can more easily be written as\n\n costs = {0};\n\n\n",
"msg_date": "Wed, 18 May 2022 15:52:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qua., 18 de mai. de 2022 às 10:52, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 18.05.22 01:18, Justin Pryzby wrote:\n> > Take the first one as an example. It says:\n> >\n> > GenericCosts costs;\n> > MemSet(&costs, 0, sizeof(costs));\n> >\n> > You sent a patch to change it to sizeof(GenericCosts).\n> >\n> > But it's not a pointer, so they are the same.\n>\n> This instance can more easily be written as\n>\n> costs = {0};\n>\nThat would initialize the content at compilation and not at runtime,\ncorrect?\nAnd we would avoid MemSet/memset altogether.\n\nThere are a lot of cases using MemSet (with struct variables) and at\nWindows 64 bits, long are 4 (four) bytes.\nSo I believe that MemSet is less efficient on Windows than on Linux.\n\"The size of the '_vstart' buffer is not a multiple of the element size of\nthe type 'long'.\"\nmessage from PVS-Studio static analysis tool.\n\nregards,\nRanier Vilela\n\nEm qua., 18 de mai. de 2022 às 10:52, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:On 18.05.22 01:18, Justin Pryzby wrote:\n> Take the first one as an example. It says:\n> \n> GenericCosts costs;\n> MemSet(&costs, 0, sizeof(costs));\n> \n> You sent a patch to change it to sizeof(GenericCosts).\n> \n> But it's not a pointer, so they are the same.\n\nThis instance can more easily be written as\n\n costs = {0};That would initialize the content at compilation and not at runtime, correct?And we would avoid MemSet/memset altogether. There are a lot of cases using MemSet (with struct variables) and at Windows 64 bits, long are 4 (four) bytes.So I believe that MemSet is less efficient on Windows than on Linux.\"The size of the '_vstart' buffer is not a multiple of the element size of the type 'long'.\"message from PVS-Studio static analysis tool.regards,Ranier Vilela",
"msg_date": "Wed, 18 May 2022 11:08:08 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Thu, 19 May 2022 at 02:08, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> That would initialize the content at compilation and not at runtime, correct?\n\nYour mental model of compilation and run-time might be flawed here.\nHere's no such thing as zeroing memory at compile time. There's only\nemitting instructions that perform those tasks at run-time.\nhttps://godbolt.org/ might help your understanding.\n\n> There are a lot of cases using MemSet (with struct variables) and at Windows 64 bits, long are 4 (four) bytes.\n> So I believe that MemSet is less efficient on Windows than on Linux.\n> \"The size of the '_vstart' buffer is not a multiple of the element size of the type 'long'.\"\n> message from PVS-Studio static analysis tool.\n\nI've been wondering for a while if we really need to have the MemSet()\nmacro. I see it was added in 8cb415449 (1997). I think compilers have\nevolved quite a bit in the past 25 years, so it could be time to\nrevisit that.\n\nYour comment on the sizeof(long) on win64 is certainly true. I wrote\nthe attached C program to test the performance difference.\n\n(windows 64-bit)\n>cl memset.c /Ox\n>memset 200000000\nRunning 200000000 loops\nMemSet: size 8: 1.833000 seconds\nMemSet: size 16: 1.841000 seconds\nMemSet: size 32: 1.838000 seconds\nMemSet: size 64: 1.851000 seconds\nMemSet: size 128: 3.228000 seconds\nMemSet: size 256: 5.278000 seconds\nMemSet: size 512: 3.943000 seconds\nmemset: size 8: 0.065000 seconds\nmemset: size 16: 0.131000 seconds\nmemset: size 32: 0.262000 seconds\nmemset: size 64: 0.530000 seconds\nmemset: size 128: 1.169000 seconds\nmemset: size 256: 2.950000 seconds\nmemset: size 512: 3.191000 seconds\n\nIt seems like there's no cases there where MemSet is faster than\nmemset. I was careful to only provide MemSet() with inputs that\nresult in it not using the memset fallback. 
I also provided constants\nso that the decision about which method to use was known at compile\ntime.\n\nIt's not clear to me why 512 is faster than 256. I saw the same on a repeat run.\n\nChanging \"long\" to \"long long\" it looks like:\n\n>memset 200000000\nRunning 200000000 loops\nMemSet: size 8: 0.066000 seconds\nMemSet: size 16: 1.978000 seconds\nMemSet: size 32: 1.982000 seconds\nMemSet: size 64: 1.973000 seconds\nMemSet: size 128: 1.970000 seconds\nMemSet: size 256: 3.225000 seconds\nMemSet: size 512: 5.366000 seconds\nmemset: size 8: 0.069000 seconds\nmemset: size 16: 0.132000 seconds\nmemset: size 32: 0.265000 seconds\nmemset: size 64: 0.527000 seconds\nmemset: size 128: 1.161000 seconds\nmemset: size 256: 2.976000 seconds\nmemset: size 512: 3.179000 seconds\n\nThe situation is a little different on my Linux machine:\n\n$ gcc memset.c -o memset -O2\n$ ./memset 200000000\nRunning 200000000 loops\nMemSet: size 8: 0.000002 seconds\nMemSet: size 16: 0.000000 seconds\nMemSet: size 32: 0.094041 seconds\nMemSet: size 64: 0.184618 seconds\nMemSet: size 128: 1.781503 seconds\nMemSet: size 256: 2.547910 seconds\nMemSet: size 512: 4.005173 seconds\nmemset: size 8: 0.046156 seconds\nmemset: size 16: 0.046123 seconds\nmemset: size 32: 0.092291 seconds\nmemset: size 64: 0.184509 seconds\nmemset: size 128: 1.781518 seconds\nmemset: size 256: 2.577104 seconds\nmemset: size 512: 4.004757 seconds\n\nIt looks like part of the work might be getting optimised away in the\n8-16 MemSet() calls.\n\nclang seems to have the opposite for size 8.\n\n$ clang memset.c -o memset -O2\n$ ./memset 200000000\nRunning 200000000 loops\nMemSet: size 8: 0.007653 seconds\nMemSet: size 16: 0.005771 seconds\nMemSet: size 32: 0.011539 seconds\nMemSet: size 64: 0.023095 seconds\nMemSet: size 128: 0.046130 seconds\nMemSet: size 256: 0.092269 seconds\nMemSet: size 512: 0.968564 seconds\nmemset: size 8: 0.000000 seconds\nmemset: size 16: 0.005776 seconds\nmemset: size 32: 0.011559 seconds\nmemset: 
size 64: 0.023069 seconds\nmemset: size 128: 0.046129 seconds\nmemset: size 256: 0.092243 seconds\nmemset: size 512: 0.968534 seconds\n\nThere does not seem to be any significant reduction in the size of the\nbinary from changing the MemSet macro to directly use memset. It went\nfrom 9865008 bytes down to 9860800 bytes (4208 bytes less).\n\nDavid",
"msg_date": "Thu, 19 May 2022 10:57:02 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've been wondering for a while if we really need to have the MemSet()\n> macro. I see it was added in 8cb415449 (1997). I think compilers have\n> evolved quite a bit in the past 25 years, so it could be time to\n> revisit that.\n\nYeah, I've thought for awhile that technology has moved on from that.\nNobody's really taken the trouble to measure it though. (And no,\nresults from one compiler on one machine are not terribly convincing.)\n\nThe thing that makes this a bit more difficult than it might be is\nthe special cases we have for known-aligned and so on targets, which\nare particularly critical for palloc0 and makeNode etc. So there's\nmore than one case to look into. But I'd argue that those special\ncases are actually what we want to worry about the most: zeroing\nrelatively small, known-aligned node structs is THE use case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 May 2022 19:20:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qua., 18 de mai. de 2022 às 19:57, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Thu, 19 May 2022 at 02:08, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > That would initialize the content at compilation and not at runtime,\n> correct?\n>\n> Your mental model of compilation and run-time might be flawed here.\n> Here's no such thing as zeroing memory at compile time. There's only\n> emitting instructions that perform those tasks at run-time.\n> https://godbolt.org/ might help your understanding.\n>\n> > There are a lot of cases using MemSet (with struct variables) and at\n> Windows 64 bits, long are 4 (four) bytes.\n> > So I believe that MemSet is less efficient on Windows than on Linux.\n> > \"The size of the '_vstart' buffer is not a multiple of the element size\n> of the type 'long'.\"\n> > message from PVS-Studio static analysis tool.\n>\n> I've been wondering for a while if we really need to have the MemSet()\n> macro. I see it was added in 8cb415449 (1997). I think compilers have\n> evolved quite a bit in the past 25 years, so it could be time to\n> revisit that.\n>\n+1\nAll compilers currently have memset optimized.\n\n\n> Your comment on the sizeof(long) on win64 is certainly true. 
I wrote\n> the attached C program to test the performance difference.\n>\n> (windows 64-bit)\n> >cl memset.c /Ox\n> >memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 1.833000 seconds\n> MemSet: size 16: 1.841000 seconds\n> MemSet: size 32: 1.838000 seconds\n> MemSet: size 64: 1.851000 seconds\n> MemSet: size 128: 3.228000 seconds\n> MemSet: size 256: 5.278000 seconds\n> MemSet: size 512: 3.943000 seconds\n> memset: size 8: 0.065000 seconds\n> memset: size 16: 0.131000 seconds\n> memset: size 32: 0.262000 seconds\n> memset: size 64: 0.530000 seconds\n> memset: size 128: 1.169000 seconds\n> memset: size 256: 2.950000 seconds\n> memset: size 512: 3.191000 seconds\n>\n> It seems like there's no cases there where MemSet is faster than\n> memset. I was careful to only provide MemSet() with inputs that\n> result in it not using the memset fallback. I also provided constants\n> so that the decision about which method to use was known at compile\n> time.\n>\n> It's not clear to me why 512 is faster than 256.\n\nProbably broken alignment with 256?\nAnother warning from PVS-Studio:\n[1] \"The pointer '_start' is cast to a more strictly aligned pointer type.\"\n\nsrc/contrib/postgres_fdw/connection.c (Line 1690)\nMemSet(values, 0, sizeof(values));\n\n\n\n> I saw the same on a repeat run.\n>\n> Changing \"long\" to \"long long\" it looks like:\n>\n> >memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 0.066000 seconds\n> MemSet: size 16: 1.978000 seconds\n> MemSet: size 32: 1.982000 seconds\n> MemSet: size 64: 1.973000 seconds\n> MemSet: size 128: 1.970000 seconds\n> MemSet: size 256: 3.225000 seconds\n> MemSet: size 512: 5.366000 seconds\n> memset: size 8: 0.069000 seconds\n> memset: size 16: 0.132000 seconds\n> memset: size 32: 0.265000 seconds\n> memset: size 64: 0.527000 seconds\n> memset: size 128: 1.161000 seconds\n> memset: size 256: 2.976000 seconds\n> memset: size 512: 3.179000 seconds\n>\n> The situation is a little different on my Linux 
machine:\n>\n> $ gcc memset.c -o memset -O2\n> $ ./memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 0.000002 seconds\n> MemSet: size 16: 0.000000 seconds\n> MemSet: size 32: 0.094041 seconds\n> MemSet: size 64: 0.184618 seconds\n> MemSet: size 128: 1.781503 seconds\n> MemSet: size 256: 2.547910 seconds\n> MemSet: size 512: 4.005173 seconds\n> memset: size 8: 0.046156 seconds\n> memset: size 16: 0.046123 seconds\n> memset: size 32: 0.092291 seconds\n> memset: size 64: 0.184509 seconds\n> memset: size 128: 1.781518 seconds\n> memset: size 256: 2.577104 seconds\n> memset: size 512: 4.004757 seconds\n>\n> It looks like part of the work might be getting optimised away in the\n> 8-16 MemSet() calls.\n>\nOn linux (long) have 8 bytes.\nI'm still surprised that MemSet (8/16) is faster.\n\n\n> clang seems to have the opposite for size 8.\n>\n> $ clang memset.c -o memset -O2\n> $ ./memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 0.007653 seconds\n> MemSet: size 16: 0.005771 seconds\n> MemSet: size 32: 0.011539 seconds\n> MemSet: size 64: 0.023095 seconds\n> MemSet: size 128: 0.046130 seconds\n> MemSet: size 256: 0.092269 seconds\n> MemSet: size 512: 0.968564 seconds\n> memset: size 8: 0.000000 seconds\n> memset: size 16: 0.005776 seconds\n> memset: size 32: 0.011559 seconds\n> memset: size 64: 0.023069 seconds\n> memset: size 128: 0.046129 seconds\n> memset: size 256: 0.092243 seconds\n> memset: size 512: 0.968534 seconds\n>\n> There does not seem to be any significant reduction in the size of the\n> binary from changing the MemSet macro to directly use memset. It went\n> from 9865008 bytes down to 9860800 bytes (4208 bytes less).\n>\nAnyway I think on Windows 64 bits,\nit is very worthwhile to remove the MemSet macro.\n\nregards,\nRanier Vilela\n\n[1] https://pvs-studio.com/en/docs/warnings/v1032/",
"msg_date": "Wed, 18 May 2022 20:47:14 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
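The MemSet() macro benchmarked in the message above can be sketched as follows. This is a simplified approximation of the real macro in src/include/c.h, written as a runtime function for clarity; the names, the loop limit, and the runtime dispatch are mine, not PostgreSQL's exact code (the real macro folds the test away at compile time for constant arguments).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical loop limit; the real threshold lives in c.h. */
#define MEMSET_LOOP_LIMIT 1024

/*
 * Zero word-at-a-time when the start is long-aligned, the length is a
 * small multiple of sizeof(long), and the fill value is 0; otherwise
 * fall back to libc memset().
 */
static void
memset_sketch(void *start, int val, size_t len)
{
	if (val == 0 &&
		((uintptr_t) start & (sizeof(long) - 1)) == 0 &&
		len % sizeof(long) == 0 &&
		len <= MEMSET_LOOP_LIMIT)
	{
		long	   *p = (long *) start;
		long	   *stop = (long *) ((char *) start + len);

		while (p < stop)
			*p++ = 0;			/* one long-sized store per iteration */
	}
	else
		memset(start, val, len);
}
```

The timings quoted in the thread compare exactly these two paths: the explicit long-store loop against the libc memset() that the fallback branch calls.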
{
"msg_contents": "Em qua., 18 de mai. de 2022 às 20:20, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n\n> zeroing\n> relatively small, known-aligned node structs is THE use case.\n>\nCurrently, especially on 64-bit Windows, MemSet can break alignment.\n\nregards,\nRanier Vilela\n\nEm qua., 18 de mai. de 2022 às 20:20, Tom Lane <tgl@sss.pgh.pa.us> escreveu: zeroing\nrelatively small, known-aligned node structs is THE use case.Currently, especially on 64-bit Windows, MemSet can break alignment.regards,Ranier Vilela",
"msg_date": "Wed, 18 May 2022 20:51:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
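The alignment concern raised in the message above can be made concrete. The word-wise path is only valid when both the pointer and the length line up with sizeof(long); the predicate below is a hypothetical helper of mine expressing that eligibility test, not code from c.h.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * True when a buffer may be zeroed with long-sized stores: the pointer
 * is long-aligned and the length is a multiple of sizeof(long).
 * On 64-bit Windows sizeof(long) is 4, so the check passes more often
 * there, but each store then moves only half as many bytes as on an
 * LP64 Linux build -- one reason the win64 MemSet numbers look worse.
 */
static int
word_path_ok(const void *start, size_t len)
{
	return ((uintptr_t) start & (sizeof(long) - 1)) == 0 &&
		len % sizeof(long) == 0;
}
```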
{
"msg_contents": "Em qua., 18 de mai. de 2022 às 19:57, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Thu, 19 May 2022 at 02:08, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > That would initialize the content at compilation and not at runtime,\n> correct?\n>\n> Your mental model of compilation and run-time might be flawed here.\n> Here's no such thing as zeroing memory at compile time. There's only\n> emitting instructions that perform those tasks at run-time.\n> https://godbolt.org/ might help your understanding.\n>\n> > There are a lot of cases using MemSet (with struct variables) and at\n> Windows 64 bits, long are 4 (four) bytes.\n> > So I believe that MemSet is less efficient on Windows than on Linux.\n> > \"The size of the '_vstart' buffer is not a multiple of the element size\n> of the type 'long'.\"\n> > message from PVS-Studio static analysis tool.\n>\n> I've been wondering for a while if we really need to have the MemSet()\n> macro. I see it was added in 8cb415449 (1997). I think compilers have\n> evolved quite a bit in the past 25 years, so it could be time to\n> revisit that.\n>\n> Your comment on the sizeof(long) on win64 is certainly true. I wrote\n> the attached C program to test the performance difference.\n>\n> (windows 64-bit)\n> >cl memset.c /Ox\n> >memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 1.833000 seconds\n> MemSet: size 16: 1.841000 seconds\n> MemSet: size 32: 1.838000 seconds\n> MemSet: size 64: 1.851000 seconds\n> MemSet: size 128: 3.228000 seconds\n> MemSet: size 256: 5.278000 seconds\n> MemSet: size 512: 3.943000 seconds\n> memset: size 8: 0.065000 seconds\n> memset: size 16: 0.131000 seconds\n> memset: size 32: 0.262000 seconds\n> memset: size 64: 0.530000 seconds\n> memset: size 128: 1.169000 seconds\n> memset: size 256: 2.950000 seconds\n> memset: size 512: 3.191000 seconds\n>\n> It seems like there's no cases there where MemSet is faster than\n> memset. 
I was careful to only provide MemSet() with inputs that\n> result in it not using the memset fallback. I also provided constants\n> so that the decision about which method to use was known at compile\n> time.\n>\n> It's not clear to me why 512 is faster than 256. I saw the same on a\n> repeat run.\n>\n> Changing \"long\" to \"long long\" it looks like:\n>\n> >memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 0.066000 seconds\n> MemSet: size 16: 1.978000 seconds\n> MemSet: size 32: 1.982000 seconds\n> MemSet: size 64: 1.973000 seconds\n> MemSet: size 128: 1.970000 seconds\n> MemSet: size 256: 3.225000 seconds\n> MemSet: size 512: 5.366000 seconds\n> memset: size 8: 0.069000 seconds\n> memset: size 16: 0.132000 seconds\n> memset: size 32: 0.265000 seconds\n> memset: size 64: 0.527000 seconds\n> memset: size 128: 1.161000 seconds\n> memset: size 256: 2.976000 seconds\n> memset: size 512: 3.179000 seconds\n>\n> The situation is a little different on my Linux machine:\n>\n> $ gcc memset.c -o memset -O2\n> $ ./memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 0.000002 seconds\n> MemSet: size 16: 0.000000 seconds\n> MemSet: size 32: 0.094041 seconds\n> MemSet: size 64: 0.184618 seconds\n> MemSet: size 128: 1.781503 seconds\n> MemSet: size 256: 2.547910 seconds\n> MemSet: size 512: 4.005173 seconds\n> memset: size 8: 0.046156 seconds\n> memset: size 16: 0.046123 seconds\n> memset: size 32: 0.092291 seconds\n> memset: size 64: 0.184509 seconds\n> memset: size 128: 1.781518 seconds\n> memset: size 256: 2.577104 seconds\n> memset: size 512: 4.004757 seconds\n>\n> It looks like part of the work might be getting optimised away in the\n> 8-16 MemSet() calls.\n>\n> clang seems to have the opposite for size 8.\n>\n> $ clang memset.c -o memset -O2\n> $ ./memset 200000000\n> Running 200000000 loops\n> MemSet: size 8: 0.007653 seconds\n> MemSet: size 16: 0.005771 seconds\n> MemSet: size 32: 0.011539 seconds\n> MemSet: size 64: 0.023095 seconds\n> MemSet: 
size 128: 0.046130 seconds\n> MemSet: size 256: 0.092269 seconds\n> MemSet: size 512: 0.968564 seconds\n> memset: size 8: 0.000000 seconds\n> memset: size 16: 0.005776 seconds\n> memset: size 32: 0.011559 seconds\n> memset: size 64: 0.023069 seconds\n> memset: size 128: 0.046129 seconds\n> memset: size 256: 0.092243 seconds\n> memset: size 512: 0.968534 seconds\n>\nThe results from clang only reinforce the argument in favor of the native\nmemset.\nThere is still room for gcc to improve with 8/16 bytes, and surely at some\npoint it will, which will make memset faster on all platforms and compilers.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 18 May 2022 21:04:23 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Taking it a step further.\nCreated a new patch into commitfest, targeting 16 version.\nhttps://commitfest.postgresql.org/38/3645/\n\nCurrently native memset is well optimized on several platforms, including\nWindows 64 bits [1].\n\nHowever, even the native memset has problems,\nI redid the David's memset.c test:\n\nC:\\usr\\src\\tests\\memset>memset2 2000000000\nRunning 2000000000 loops\nMemSet: size 8: 6.635000 seconds\nMemSet: size 16: 6.594000 seconds\nMemSet: size 32: 6.694000 seconds\nMemSet: size 64: 9.002000 seconds\nMemSet: size 128: 10.598000 seconds\nMemSet: size 256: 25.061000 seconds\nMemSet: size 512: 27.365000 seconds\nmemset: size 8: 0.594000 seconds\nmemset: size 16: 0.595000 seconds\nmemset: size 32: 1.189000 seconds\nmemset: size 64: 2.378000 seconds\nmemset: size 128: 4.753000 seconds\nmemset: size 256: 24.391000 seconds\nmemset: size 512: 27.064000 seconds\n\nBoth MemSet/memset perform very poorly with 256/512.\n\nBut, I believe it is worth removing the use of MemSet, because the usage is\nempirical and has been mixed with memset in several places in the code,\nwithout any criteria.\nUsing just memset makes the mental process of using it more simplified and\nit seems like there aren't any regressions when removing the use of MemSet.\n\nWindows 10 64 bit\nmsvc 2019 64 bit\nRAM 8GB\nSSD 256GB\nPostgres (15beta1 with original configuration)\n\n1. 
pgbench -c 50 -T 300 -S -n -U postgres\nHEAD:\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 50\nnumber of threads: 1\nmaximum number of tries: 1\nduration: 300 s\nnumber of transactions actually processed: 10448967\nnumber of failed transactions: 0 (0.000%)\nlatency average = 1.432 ms\ninitial connection time = 846.186 ms\ntps = 34926.861987 (without initial connection time)\n\nPATCHED (without MemSet)\npgbench (15beta1)\ntransaction type: <builtin: select only>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 50\nnumber of threads: 1\nmaximum number of tries: 1\nduration: 300 s\nnumber of transactions actually processed: 10655332\nnumber of failed transactions: 0 (0.000%)\nlatency average = 1.404 ms\ninitial connection time = 866.203 ms\ntps = 35621.045750 (without initial connection time)\n\n\n2.\nCREATE TABLE t_test (x numeric);\nINSERT INTO t_test SELECT random()\n FROM generate_series(1, 5000000);\nANALYZE;\nSHOW work_mem;\n\nHEAD:\npostgres=# explain analyze SELECT * FROM t_test ORDER BY x;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Gather Merge (cost=397084.73..883229.71 rows=4166666 width=11) (actual\ntime=1328.331..2743.310 rows=5000000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=396084.71..401293.04 rows=2083333 width=11) (actual\ntime=1278.442..1513.510 rows=1666667 loops=3)\n Sort Key: x\n Sort Method: external merge Disk: 25704kB\n Worker 0: Sort Method: external merge Disk: 23960kB\n Worker 1: Sort Method: external merge Disk: 23960kB\n -> Parallel Seq Scan on t_test (cost=0.00..47861.33 rows=2083333\nwidth=11) (actual time=0.234..128.607 rows=1666667 loops=3)\n Planning Time: 0.064 ms\n Execution Time: 2863.381 ms\n(11 rows)\n\n\nPATCHED:\npostgres=# explain analyze SELECT * FROM t_test ORDER BY x;\n QUERY 
PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Gather Merge (cost=397084.73..883229.71 rows=4166666 width=11) (actual\ntime=1309.703..2705.027 rows=5000000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=396084.71..401293.04 rows=2083333 width=11) (actual\ntime=1281.111..1515.928 rows=1666667 loops=3)\n Sort Key: x\n Sort Method: external merge Disk: 24880kB\n Worker 0: Sort Method: external merge Disk: 24776kB\n Worker 1: Sort Method: external merge Disk: 23960kB\n -> Parallel Seq Scan on t_test (cost=0.00..47861.33 rows=2083333\nwidth=11) (actual time=0.260..130.277 rows=1666667 loops=3)\n Planning Time: 0.060 ms\n Execution Time: 2825.201 ms\n(11 rows)\n\nI leave MemSetAligned and MemSetLoop to another step.\n\nregards,\nRanier Vilela\n\n[1]\nhttps://msrc-blog.microsoft.com/2021/01/11/building-faster-amd64-memset-routines/",
"msg_date": "Thu, 19 May 2022 13:09:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
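The memset.c file attached upthread is not reproduced in this archive. A minimal harness in the same spirit (my reconstruction, not the original attachment) looks like this:

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/*
 * Time `loops` calls of memset() over `size` bytes and return elapsed
 * CPU seconds.  The `volatile` pointer discourages the compiler from
 * hoisting or deleting the zeroing entirely, which would otherwise
 * distort the small sizes (likely what happened in the near-zero
 * 8/16-byte gcc numbers reported upthread).
 */
static double
time_memset(size_t size, long loops)
{
	static char buffer[512];
	char *volatile buf = buffer;
	clock_t		start = clock();

	for (long i = 0; i < loops; i++)
		memset(buf, 0, size);

	return (double) (clock() - start) / CLOCKS_PER_SEC;
}
```

A MemSet variant of the same loop, timed side by side for each size, yields tables like the ones quoted in this thread.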
{
"msg_contents": "On Thu, 19 May 2022 at 11:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The thing that makes this a bit more difficult than it might be is\n> the special cases we have for known-aligned and so on targets, which\n> are particularly critical for palloc0 and makeNode etc. So there's\n> more than one case to look into. But I'd argue that those special\n> cases are actually what we want to worry about the most: zeroing\n> relatively small, known-aligned node structs is THE use case.\n\nI think the makeNode() trickery would be harder to get rid of, or for\nthat matter, anything where the size/alignment is unknown at compile\ntime. I think the more interesting ones that we might be able to get\nrid of are the ones where the alignment and size *are* known at\ncompile time. Also probably anything that passes a compile-time const\nthat's not 0 will fallback on memset anyway, so might as well be\nremoved to tidy things up.\n\nIt just all seems a bit untidy when you look at functions like\nExecStoreAllNullTuple() which use a mix of memset and MemSet without\nany apparent explanation of why. That particular one is likely that\nway due to the first size guaranteed to be multiples of sizeof(Datum)\nand the latter not.\n\nNaturally, we'd need to run enough benchmarks to prove to ourselves\nthat we're not causing any slowdowns. The intention of memset.c was\nto try to put something out there that people could test so we could\nget an idea if there are any machines/compilers that we might need to\nbe concerned about.\n\nDavid\n\n\n",
"msg_date": "Fri, 20 May 2022 11:20:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 19.05.22 18:09, Ranier Vilela wrote:\n> Taking it a step further.\n> Created a new patch into commitfest, targeting 16 version.\n> https://commitfest.postgresql.org/38/3645/ \n> <https://commitfest.postgresql.org/38/3645/>\n\nI have committed your 001 patch, which was clearly a (harmless) mistake.\n\nI have also committed a patch that gets rid of MemSet() calls where the \nvalue is a constant not-0, because that just falls back to memset() anyway.\n\nI'm on board with trying to get rid of MemSet(), but first I need to \nanalyze all the performance numbers and arguments that were shown in \nthis thread.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 00:37:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 30 de jun. de 2022 às 19:37, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 19.05.22 18:09, Ranier Vilela wrote:\n> > Taking it a step further.\n> > Created a new patch into commitfest, targeting 16 version.\n> > https://commitfest.postgresql.org/38/3645/\n> > <https://commitfest.postgresql.org/38/3645/>\n>\n> I have committed your 001 patch, which was clearly a (harmless) mistake.\n>\nThank you.\n\n\n>\n> I have also committed a patch that gets rid of MemSet() calls where the\n> value is a constant not-0, because that just falls back to memset() anyway.\n>\n> I'm on board with trying to get rid of MemSet(), but first I need to\n> analyze all the performance numbers and arguments that were shown in\n> this thread.\n>\nOne good argument is that using memset, allows to compiler\nanalyze and remove completely memset call if he understands\nthat can do it, which with MemSet is not possible.\n\nregards,\nRanier Vilela\n\nEm qui., 30 de jun. de 2022 às 19:37, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:On 19.05.22 18:09, Ranier Vilela wrote:\n> Taking it a step further.\n> Created a new patch into commitfest, targeting 16 version.\n> https://commitfest.postgresql.org/38/3645/ \n> <https://commitfest.postgresql.org/38/3645/>\n\nI have committed your 001 patch, which was clearly a (harmless) mistake.Thank you. \n\nI have also committed a patch that gets rid of MemSet() calls where the \nvalue is a constant not-0, because that just falls back to memset() anyway.\n\nI'm on board with trying to get rid of MemSet(), but first I need to \nanalyze all the performance numbers and arguments that were shown in \nthis thread.One good argument is that using memset, allows to compileranalyze and remove completely memset call if he understands that can do it, which with MemSet is not possible.regards,Ranier Vilela",
"msg_date": "Fri, 1 Jul 2022 08:10:37 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 30 de jun. de 2022 às 19:37, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> I have also committed a patch that gets rid of MemSet() calls where the\n> value is a constant not-0, because that just falls back to memset() anyway.\n>\nPeter there are some missing paths in this commit.\n\nDespite having included the attached patch, there is no need to credit me\nas the author, just as a report.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 1 Jul 2022 12:58:14 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 18.05.22 15:52, Peter Eisentraut wrote:\n> On 18.05.22 01:18, Justin Pryzby wrote:\n>> Take the first one as an example. It says:\n>>\n>> GenericCosts costs;\n>> MemSet(&costs, 0, sizeof(costs));\n>>\n>> You sent a patch to change it to sizeof(GenericCosts).\n>>\n>> But it's not a pointer, so they are the same.\n> \n> This instance can more easily be written as\n> \n> costs = {0};\n\nThe attached patch replaces all MemSet() calls with struct \ninitialization where that is easily possible. (For example, some cases \nhave to worry about padding bits, so I left those.)\n\nThis reduces the number of MemSet() calls from about 400 to about 200. \nMaybe this can help simplify the investigation of the merits of the \nremaining calls.",
"msg_date": "Thu, 7 Jul 2022 13:00:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
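The `= {0}` pattern from the committed patch, with the padding caveat from the message above spelled out. The struct here is hypothetical, chosen to resemble the GenericCosts example quoted earlier in the thread, not the real planner struct.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for GenericCosts. */
typedef struct CostsSketch
{
	double		indexStartupCost;
	double		indexTotalCost;
	double		indexSelectivity;
	int			numIndexPages;	/* trailing padding likely follows */
} CostsSketch;

static CostsSketch
make_costs(void)
{
	/*
	 * Replaces:  CostsSketch costs; MemSet(&costs, 0, sizeof(costs));
	 *
	 * "= {0}" zeroes every member, but unlike memset() the C standard
	 * does not promise zeroed padding bytes -- which is why call sites
	 * that hash or memcmp() whole structs were left using memset().
	 */
	CostsSketch costs = {0};

	return costs;
}
```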
{
"msg_contents": "On 2022-Jul-07, Peter Eisentraut wrote:\n\n> diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c\n> index 4445a86aee..79b23fa7d7 100644\n> --- a/src/bin/pg_basebackup/pg_basebackup.c\n> +++ b/src/bin/pg_basebackup/pg_basebackup.c\n\n> @@ -1952,7 +1948,6 @@ BaseBackup(char *compression_algorithm, char *compression_detail,\n> \telse\n> \t\tstarttli = latesttli;\n> \tPQclear(res);\n> -\tMemSet(xlogend, 0, sizeof(xlogend));\n> \n> \tif (verbose && includewal != NO_WAL)\n> \t\tpg_log_info(\"write-ahead log start point: %s on timeline %u\",\n\nYou removed the MemSet here, but there's no corresponding\ninitialization.\n\n> diff --git a/src/port/snprintf.c b/src/port/snprintf.c\n> index abb1c59770..e646b0e642 100644\n> --- a/src/port/snprintf.c\n> +++ b/src/port/snprintf.c\n> @@ -756,12 +756,9 @@ find_arguments(const char *format, va_list args,\n> \tint\t\t\tlongflag;\n> \tint\t\t\tfmtpos;\n> \tint\t\t\ti;\n> -\tint\t\t\tlast_dollar;\n> -\tPrintfArgType argtypes[PG_NL_ARGMAX + 1];\n> -\n> \t/* Initialize to \"no dollar arguments known\" */\n> -\tlast_dollar = 0;\n> -\tMemSet(argtypes, 0, sizeof(argtypes));\n> +\tint\t\t\tlast_dollar = 0;\n> +\tPrintfArgType argtypes[PG_NL_ARGMAX + 1] = {0};\n\npgindent will insert a blank line before the comment, which I personally\nfind quite ugly (because it splits the block of declarations).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n",
"msg_date": "Thu, 7 Jul 2022 13:16:07 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 01.07.22 17:58, Ranier Vilela wrote:\n> Em qui., 30 de jun. de 2022 às 19:37, Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> escreveu:\n> \n> I have also committed a patch that gets rid of MemSet() calls where the\n> value is a constant not-0, because that just falls back to memset()\n> anyway.\n> \n> Peter there are some missing paths in this commit.\n\nAs I wrote in the commit message:\n\n(There are a few MemSet() calls that I didn't change to maintain the \nconsistency with their surrounding code.)\n\n\n",
"msg_date": "Thu, 7 Jul 2022 13:44:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 7 de jul. de 2022 às 08:00, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 18.05.22 15:52, Peter Eisentraut wrote:\n> > On 18.05.22 01:18, Justin Pryzby wrote:\n> >> Take the first one as an example. It says:\n> >>\n> >> GenericCosts costs;\n> >> MemSet(&costs, 0, sizeof(costs));\n> >>\n> >> You sent a patch to change it to sizeof(GenericCosts).\n> >>\n> >> But it's not a pointer, so they are the same.\n> >\n> > This instance can more easily be written as\n> >\n> > costs = {0};\n>\n> The attached patch replaces all MemSet() calls with struct\n> initialization where that is easily possible. (For example, some cases\n> have to worry about padding bits, so I left those.)\n>\nSounds great.\n\n#include <stdio.h>\n#include <string.h>\n\nint main(void) {\n bool nulls[4] = {0};\n int i;\n\n memset(nulls, 0, sizeof(nulls));\n\n for(i = 0; i < 4; i++)\n {\n nulls[i] = 0;\n }\n\n return 1;\n}\n\nmain:\n push rbp\n mov rbp, rsp\n sub rsp, 16\n mov DWORD PTR [rbp-8], 0 // bool nulls[4] = {0}; lea\n rax, [rbp-8]\n mov edx, 4\n mov esi, 0\n mov rdi, rax\n call memset\n mov DWORD PTR [rbp-4], 0\n jmp .L2\n.L3:\n mov eax, DWORD PTR [rbp-4]\n cdqe\n mov BYTE PTR [rbp-8+rax], 0\n add DWORD PTR [rbp-4], 1\n.L2:\n cmp DWORD PTR [rbp-4], 3\n jle .L3\n mov eax, 1\n leave\n ret\n\nOnly one line using {0}.\n\n+1\n\nregards,\nRanier Vilela\n\nEm qui., 7 de jul. de 2022 às 08:00, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:On 18.05.22 15:52, Peter Eisentraut wrote:\n> On 18.05.22 01:18, Justin Pryzby wrote:\n>> Take the first one as an example. It says:\n>>\n>> GenericCosts costs;\n>> MemSet(&costs, 0, sizeof(costs));\n>>\n>> You sent a patch to change it to sizeof(GenericCosts).\n>>\n>> But it's not a pointer, so they are the same.\n> \n> This instance can more easily be written as\n> \n> costs = {0};\n\nThe attached patch replaces all MemSet() calls with struct \ninitialization where that is easily possible. 
(For example, some cases \nhave to worry about padding bits, so I left those.)Sounds great.\n#include <stdio.h>#include <string.h>int main(void) { bool nulls[4] = {0}; int i; memset(nulls, 0, sizeof(nulls)); for(i = 0; i < 4; i++) { nulls[i] = 0; } return 1;}\n\nmain: push rbp mov rbp, rsp sub rsp, 16 mov DWORD PTR [rbp-8], 0 // bool nulls[4] = {0};\n lea rax, [rbp-8] mov edx, 4 mov esi, 0 mov rdi, rax call memset mov DWORD PTR [rbp-4], 0 jmp .L2.L3: mov eax, DWORD PTR [rbp-4] cdqe mov BYTE PTR [rbp-8+rax], 0 add DWORD PTR [rbp-4], 1.L2: cmp DWORD PTR [rbp-4], 3 jle .L3 mov eax, 1 leave ret Only one line using {0}.+1regards,Ranier Vilela",
"msg_date": "Thu, 7 Jul 2022 08:53:29 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 7 de jul. de 2022 às 08:00, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n>diff --git a/src/backend/access/transam/twophase.c\nb/src/backend/access/transam/twophase.c\n>index 41b31c5c6f..803d169f57 100644\n>--- a/src/backend/access/transam/twophase.c\n>+++ b/src/backend/access/transam/twophase.c\n>@@ -780,8 +780,8 @@ pg_prepared_xact(PG_FUNCTION_ARGS)\n> {\n> GlobalTransaction gxact = &status->array[status->currIdx++];\n> PGPROC *proc = &ProcGlobal->allProcs[gxact->pgprocno];\n>- Datum values[5];\n>- bool nulls[5];\n>+ Datum values[5] = {0};\n>+ bool nulls[5] = {0};\n\nvalues variable no initialization or MemSet needed.\n\ndiff --git a/src/backend/access/transam/xlogfuncs.c\nb/src/backend/access/transam/xlogfuncs.c\nindex 02bd919ff6..61e0f4a29c 100644\n--- a/src/backend/access/transam/xlogfuncs.c\n+++ b/src/backend/access/transam/xlogfuncs.c\n@@ -106,8 +106,8 @@ pg_backup_stop(PG_FUNCTION_ARGS)\n {\n #define PG_STOP_BACKUP_V2_COLS 3\n TupleDesc tupdesc;\n- Datum values[PG_STOP_BACKUP_V2_COLS];\n- bool nulls[PG_STOP_BACKUP_V2_COLS];\n+ Datum values[PG_STOP_BACKUP_V2_COLS] = {0};\n+ bool nulls[PG_STOP_BACKUP_V2_COLS] = {0};\n\nSame, values variable no initialization or MemSet needed.\n\ndiff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\nindex 5f1726c095..17ff617fba 100644\n--- a/src/backend/catalog/aclchk.c\n+++ b/src/backend/catalog/aclchk.c\n@@ -1188,9 +1188,6 @@ SetDefaultACL(InternalDefaultACL *iacls)\n Acl *old_acl;\n Acl *new_acl;\n HeapTuple newtuple;\n- Datum values[Natts_pg_default_acl];\n- bool nulls[Natts_pg_default_acl];\n- bool replaces[Natts_pg_default_acl];\n int noldmembers;\n int nnewmembers;\n Oid *oldmembers;\n@@ -1341,13 +1338,11 @@ SetDefaultACL(InternalDefaultACL *iacls)\n }\n else\n {\n+ Datum values[Natts_pg_default_acl] = {0};\n+ bool nulls[Natts_pg_default_acl] = {0};\n\nreplaces, can be reduced more one level.\n\nline 1365:\nelse\n{\n +bool replaces[Natts_pg_default_acl] = 
{0};\n defAclOid = ((Form_pg_default_acl) GETSTRUCT(tuple))->oid;\n\nplease, wait a minute, I will produce a new version of your patch, with\nsome changes for your review.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 7 Jul 2022 09:45:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 7 de jul. de 2022 às 09:45, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qui., 7 de jul. de 2022 às 08:00, Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> escreveu:\n> >diff --git a/src/backend/access/transam/twophase.c\n> b/src/backend/access/transam/twophase.c\n> >index 41b31c5c6f..803d169f57 100644\n> >--- a/src/backend/access/transam/twophase.c\n> >+++ b/src/backend/access/transam/twophase.c\n> >@@ -780,8 +780,8 @@ pg_prepared_xact(PG_FUNCTION_ARGS)\n> > {\n> > GlobalTransaction gxact = &status->array[status->currIdx++];\n> > PGPROC *proc = &ProcGlobal->allProcs[gxact->pgprocno];\n> >- Datum values[5];\n> >- bool nulls[5];\n> >+ Datum values[5] = {0};\n> >+ bool nulls[5] = {0};\n>\n> values variable no initialization or MemSet needed.\n>\n> diff --git a/src/backend/access/transam/xlogfuncs.c\n> b/src/backend/access/transam/xlogfuncs.c\n> index 02bd919ff6..61e0f4a29c 100644\n> --- a/src/backend/access/transam/xlogfuncs.c\n> +++ b/src/backend/access/transam/xlogfuncs.c\n> @@ -106,8 +106,8 @@ pg_backup_stop(PG_FUNCTION_ARGS)\n> {\n> #define PG_STOP_BACKUP_V2_COLS 3\n> TupleDesc tupdesc;\n> - Datum values[PG_STOP_BACKUP_V2_COLS];\n> - bool nulls[PG_STOP_BACKUP_V2_COLS];\n> + Datum values[PG_STOP_BACKUP_V2_COLS] = {0};\n> + bool nulls[PG_STOP_BACKUP_V2_COLS] = {0};\n>\n> Same, values variable no initialization or MemSet needed.\n>\n> diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\n> index 5f1726c095..17ff617fba 100644\n> --- a/src/backend/catalog/aclchk.c\n> +++ b/src/backend/catalog/aclchk.c\n> @@ -1188,9 +1188,6 @@ SetDefaultACL(InternalDefaultACL *iacls)\n> Acl *old_acl;\n> Acl *new_acl;\n> HeapTuple newtuple;\n> - Datum values[Natts_pg_default_acl];\n> - bool nulls[Natts_pg_default_acl];\n> - bool replaces[Natts_pg_default_acl];\n> int noldmembers;\n> int nnewmembers;\n> Oid *oldmembers;\n> @@ -1341,13 +1338,11 @@ SetDefaultACL(InternalDefaultACL *iacls)\n> }\n> else\n> {\n> + Datum 
values[Natts_pg_default_acl] = {0};\n> + bool nulls[Natts_pg_default_acl] = {0};\n>\n> replaces, can be reduced more one level.\n>\n> line 1365:\n> else\n> {\n> +bool replaces[Natts_pg_default_acl] = {0};\n> defAclOid = ((Form_pg_default_acl) GETSTRUCT(tuple))->oid;\n>\n> please, wait a minute, I will produce a new version of your patch, with\n> some changes for your review.\n>\nAttached the v1 of your patch.\nI think that all is safe to switch MemSet by {0}.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 7 Jul 2022 14:01:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "\nOn 07.07.22 13:16, Alvaro Herrera wrote:\n> On 2022-Jul-07, Peter Eisentraut wrote:\n> \n>> diff --git a/src/bin/pg_basebackup/pg_basebackup.c b/src/bin/pg_basebackup/pg_basebackup.c\n>> index 4445a86aee..79b23fa7d7 100644\n>> --- a/src/bin/pg_basebackup/pg_basebackup.c\n>> +++ b/src/bin/pg_basebackup/pg_basebackup.c\n> \n>> @@ -1952,7 +1948,6 @@ BaseBackup(char *compression_algorithm, char *compression_detail,\n>> \telse\n>> \t\tstarttli = latesttli;\n>> \tPQclear(res);\n>> -\tMemSet(xlogend, 0, sizeof(xlogend));\n>> \n>> \tif (verbose && includewal != NO_WAL)\n>> \t\tpg_log_info(\"write-ahead log start point: %s on timeline %u\",\n> \n> You removed the MemSet here, but there's no corresponding\n> initialization.\n\nMaybe that was an oversight by me, but it seems to me that that \ninitialization was useless anyway, since xlogend is later \nunconditionally overwritten anyway.\n\n>> diff --git a/src/port/snprintf.c b/src/port/snprintf.c\n>> index abb1c59770..e646b0e642 100644\n>> --- a/src/port/snprintf.c\n>> +++ b/src/port/snprintf.c\n>> @@ -756,12 +756,9 @@ find_arguments(const char *format, va_list args,\n>> \tint\t\t\tlongflag;\n>> \tint\t\t\tfmtpos;\n>> \tint\t\t\ti;\n>> -\tint\t\t\tlast_dollar;\n>> -\tPrintfArgType argtypes[PG_NL_ARGMAX + 1];\n>> -\n>> \t/* Initialize to \"no dollar arguments known\" */\n>> -\tlast_dollar = 0;\n>> -\tMemSet(argtypes, 0, sizeof(argtypes));\n>> +\tint\t\t\tlast_dollar = 0;\n>> +\tPrintfArgType argtypes[PG_NL_ARGMAX + 1] = {0};\n> \n> pgindent will insert a blank line before the comment, which I personally\n> find quite ugly (because it splits the block of declarations).\n\nYeah. I think I can convert that to an end-of-line comment instead.\n\n\n",
"msg_date": "Mon, 11 Jul 2022 15:26:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Attached the v1 of your patch.\n> I think that all is safe to switch MemSet by {0}.\n>\nHere the rebased patch v2, against latest head.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 11 Jul 2022 16:06:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 11.07.22 21:06, Ranier Vilela wrote:\n> Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com \n> <mailto:ranier.vf@gmail.com>> escreveu:\n> \n> Attached the v1 of your patch.\n> I think that all is safe to switch MemSet by {0}.\n> \n> Here the rebased patch v2, against latest head.\n\nI have committed my patch with Álvaro's comments addressed.\n\nYour patch appears to add in changes that are either arguably out of \nscope or would need further review (e.g., changing memset() calls, \nchanging the scope of some variables, changing places that need to worry \nabout padding bits). Please submit separate patches for those, and we \ncan continue the analysis.\n\n\n",
"msg_date": "Sat, 16 Jul 2022 08:58:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 11.07.22 21:06, Ranier Vilela wrote:\n> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com\n> > <mailto:ranier.vf@gmail.com>> escreveu:\n> >\n> > Attached the v1 of your patch.\n> > I think that all is safe to switch MemSet by {0}.\n> >\n> > Here the rebased patch v2, against latest head.\n>\n> I have committed my patch with Álvaro's comments addressed\n>\nI see.\nIt's annoing that old compiler (gcc 4.7.2) don't handle this style.\n\n\n> Your patch appears to add in changes that are either arguably out of\n> scope or would need further review (e.g., changing memset() calls,\n> changing the scope of some variables, changing places that need to worry\n> about padding bits). Please submit separate patches for those, and we\n> can continue the analysis.\n>\nSure.\n\nRegards\nRanier Vilela\n\n>\n\nEm sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:On 11.07.22 21:06, Ranier Vilela wrote:\n> Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com \n> <mailto:ranier.vf@gmail.com>> escreveu:\n> \n> Attached the v1 of your patch.\n> I think that all is safe to switch MemSet by {0}.\n> \n> Here the rebased patch v2, against latest head.\n\nI have committed my patch with Álvaro's comments addressedI see.It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n\nYour patch appears to add in changes that are either arguably out of \nscope or would need further review (e.g., changing memset() calls, \nchanging the scope of some variables, changing places that need to worry \nabout padding bits). Please submit separate patches for those, and we \ncan continue the analysis.Sure.RegardsRanier Vilela",
"msg_date": "Sat, 16 Jul 2022 15:54:56 -0400",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em sáb., 16 de jul. de 2022 às 16:54, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n>\n>\n> Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> escreveu:\n>\n>> On 11.07.22 21:06, Ranier Vilela wrote:\n>> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com\n>> > <mailto:ranier.vf@gmail.com>> escreveu:\n>> >\n>> > Attached the v1 of your patch.\n>> > I think that all is safe to switch MemSet by {0}.\n>> >\n>> > Here the rebased patch v2, against latest head.\n>>\n>> I have committed my patch with Álvaro's comments addressed\n>>\n> I see.\n> It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n>\n>\n>> Your patch appears to add in changes that are either arguably out of\n>> scope or would need further review (e.g., changing memset() calls,\n>> changing the scope of some variables, changing places that need to worry\n>> about padding bits). Please submit separate patches for those, and we\n>> can continue the analysis.\n>>\n> Sure.\n>\nHi, sorry for the delay.\nLike how\nhttps://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\nNew attempt to remove more MemSet calls, that are safe.\n\nAttached v3 patch.\n\nregards,\nRanier Vilela\n\n>",
"msg_date": "Mon, 1 Aug 2022 14:08:48 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 3:09 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em sáb., 16 de jul. de 2022 às 16:54, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n>>\n>>\n>>\n>> Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:\n>>>\n>>> On 11.07.22 21:06, Ranier Vilela wrote:\n>>> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com\n>>> > <mailto:ranier.vf@gmail.com>> escreveu:\n>>> >\n>>> > Attached the v1 of your patch.\n>>> > I think that all is safe to switch MemSet by {0}.\n>>> >\n>>> > Here the rebased patch v2, against latest head.\n>>>\n>>> I have committed my patch with Álvaro's comments addressed\n>>\n>> I see.\n>> It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n>>\n>>>\n>>> Your patch appears to add in changes that are either arguably out of\n>>> scope or would need further review (e.g., changing memset() calls,\n>>> changing the scope of some variables, changing places that need to worry\n>>> about padding bits). Please submit separate patches for those, and we\n>>> can continue the analysis.\n>>\n>> Sure.\n>\n> Hi, sorry for the delay.\n> Like how https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n> New attempt to remove more MemSet calls, that are safe.\n>\n> Attached v3 patch.\n>\n> regards,\n> Ranier Vilela\n\nHi, I have not been closely following this thread, but it's starting\nto sound very deja-vu with something I proposed 3 years ago. See [1]\n\"Make use of C99 designated initialisers for nulls/values arrays\".\nThat started off with lots of support, but then there was a suggestion\nthat the {0} should be implemented as a macro, and the subsequent\ndiscussions about that macro eventually bikeshedded the patch to\ndeath.\n\nIt might be a good idea if you check that old thread so you can avoid\nthe same pitfalls. 
I hope you have more luck than I did ;-)\n\n------\n[1] https://www.postgresql.org/message-id/flat/2793d0d2-c65f-5db0-4f89-251188438391%40gmail.com#102ee1b34a8341f28758efc347874b8a\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Aug 2022 11:19:25 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em seg., 1 de ago. de 2022 às 22:19, Peter Smith <smithpb2250@gmail.com>\nescreveu:\n\n> On Tue, Aug 2, 2022 at 3:09 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Em sáb., 16 de jul. de 2022 às 16:54, Ranier Vilela <ranier.vf@gmail.com>\n> escreveu:\n> >>\n> >>\n> >>\n> >> Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> escreveu:\n> >>>\n> >>> On 11.07.22 21:06, Ranier Vilela wrote:\n> >>> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <\n> ranier.vf@gmail.com\n> >>> > <mailto:ranier.vf@gmail.com>> escreveu:\n> >>> >\n> >>> > Attached the v1 of your patch.\n> >>> > I think that all is safe to switch MemSet by {0}.\n> >>> >\n> >>> > Here the rebased patch v2, against latest head.\n> >>>\n> >>> I have committed my patch with Álvaro's comments addressed\n> >>\n> >> I see.\n> >> It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n> >>\n> >>>\n> >>> Your patch appears to add in changes that are either arguably out of\n> >>> scope or would need further review (e.g., changing memset() calls,\n> >>> changing the scope of some variables, changing places that need to\n> worry\n> >>> about padding bits). Please submit separate patches for those, and we\n> >>> can continue the analysis.\n> >>\n> >> Sure.\n> >\n> > Hi, sorry for the delay.\n> > Like how\n> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n> > New attempt to remove more MemSet calls, that are safe.\n> >\n> > Attached v3 patch.\n> >\n> > regards,\n> > Ranier Vilela\n>\n> Hi, I have not been closely following this thread, but it's starting\n> to sound very deja-vu with something I proposed 3 years ago. 
See [1]\n> \"Make use of C99 designated initialisers for nulls/values arrays\".\n> That started off with lots of support, but then there was a suggestion\n> that the {0} should be implemented as a macro, and the subsequent\n> discussions about that macro eventually bikeshedded the patch to\n> death.\n>\n> It might be a good idea if you check that old thread so you can avoid\n> the same pitfalls. I hope you have more luck than I did ;-)\n>\nI see, thanks.\nWe are using only {0}, just to avoid these pitfalls.\nAll changes here are safe, because, the tradeoff is\n\nMemSet with 0 to {0}\n\nAny else is ignored.\n\nThe rest of the calls with MemSet are alignment and padding dependent, and\nfor now, will not be played.\n\nregards,\nRanier Vilela\n\nEm seg., 1 de ago. de 2022 às 22:19, Peter Smith <smithpb2250@gmail.com> escreveu:On Tue, Aug 2, 2022 at 3:09 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em sáb., 16 de jul. de 2022 às 16:54, Ranier Vilela <ranier.vf@gmail.com> escreveu:\n>>\n>>\n>>\n>> Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:\n>>>\n>>> On 11.07.22 21:06, Ranier Vilela wrote:\n>>> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <ranier.vf@gmail.com\n>>> > <mailto:ranier.vf@gmail.com>> escreveu:\n>>> >\n>>> > Attached the v1 of your patch.\n>>> > I think that all is safe to switch MemSet by {0}.\n>>> >\n>>> > Here the rebased patch v2, against latest head.\n>>>\n>>> I have committed my patch with Álvaro's comments addressed\n>>\n>> I see.\n>> It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n>>\n>>>\n>>> Your patch appears to add in changes that are either arguably out of\n>>> scope or would need further review (e.g., changing memset() calls,\n>>> changing the scope of some variables, changing places that need to worry\n>>> about padding bits). 
Please submit separate patches for those, and we\n>>> can continue the analysis.\n>>\n>> Sure.\n>\n> Hi, sorry for the delay.\n> Like how https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n> New attempt to remove more MemSet calls, that are safe.\n>\n> Attached v3 patch.\n>\n> regards,\n> Ranier Vilela\n\nHi, I have not been closely following this thread, but it's starting\nto sound very deja-vu with something I proposed 3 years ago. See [1]\n\"Make use of C99 designated initialisers for nulls/values arrays\".\nThat started off with lots of support, but then there was a suggestion\nthat the {0} should be implemented as a macro, and the subsequent\ndiscussions about that macro eventually bikeshedded the patch to\ndeath.\n\nIt might be a good idea if you check that old thread so you can avoid\nthe same pitfalls. I hope you have more luck than I did ;-)I see, thanks.We are using only {0}, just to avoid these pitfalls.All changes here are safe, because, the tradeoff isMemSet with 0 to {0} Any else is ignored.The rest of the calls with MemSet are alignment and padding dependent, and for now, will not be played.regards,Ranier Vilela",
"msg_date": "Tue, 2 Aug 2022 08:55:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Hi Ranier,\n\nI'm pretty late to thread but would like to know about your claim in the\nthread:\n`All compilers currently have memset optimized.` I know one case of\noptimization where variable is not used after the memset.\nAre the cases for which the optimization is done consistent across all the\ncompilers?\n\nThanks,\nMahendrakar.\n\n\nOn Tue, 2 Aug 2022 at 17:26, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em seg., 1 de ago. de 2022 às 22:19, Peter Smith <smithpb2250@gmail.com>\n> escreveu:\n>\n>> On Tue, Aug 2, 2022 at 3:09 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> >\n>> > Em sáb., 16 de jul. de 2022 às 16:54, Ranier Vilela <\n>> ranier.vf@gmail.com> escreveu:\n>> >>\n>> >>\n>> >>\n>> >> Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut <\n>> peter.eisentraut@enterprisedb.com> escreveu:\n>> >>>\n>> >>> On 11.07.22 21:06, Ranier Vilela wrote:\n>> >>> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela <\n>> ranier.vf@gmail.com\n>> >>> > <mailto:ranier.vf@gmail.com>> escreveu:\n>> >>> >\n>> >>> > Attached the v1 of your patch.\n>> >>> > I think that all is safe to switch MemSet by {0}.\n>> >>> >\n>> >>> > Here the rebased patch v2, against latest head.\n>> >>>\n>> >>> I have committed my patch with Álvaro's comments addressed\n>> >>\n>> >> I see.\n>> >> It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n>> >>\n>> >>>\n>> >>> Your patch appears to add in changes that are either arguably out of\n>> >>> scope or would need further review (e.g., changing memset() calls,\n>> >>> changing the scope of some variables, changing places that need to\n>> worry\n>> >>> about padding bits). 
Please submit separate patches for those, and we\n>> >>> can continue the analysis.\n>> >>\n>> >> Sure.\n>> >\n>> > Hi, sorry for the delay.\n>> > Like how\n>> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n>> > New attempt to remove more MemSet calls, that are safe.\n>> >\n>> > Attached v3 patch.\n>> >\n>> > regards,\n>> > Ranier Vilela\n>>\n>> Hi, I have not been closely following this thread, but it's starting\n>> to sound very deja-vu with something I proposed 3 years ago. See [1]\n>> \"Make use of C99 designated initialisers for nulls/values arrays\".\n>> That started off with lots of support, but then there was a suggestion\n>> that the {0} should be implemented as a macro, and the subsequent\n>> discussions about that macro eventually bikeshedded the patch to\n>> death.\n>>\n>> It might be a good idea if you check that old thread so you can avoid\n>> the same pitfalls. I hope you have more luck than I did ;-)\n>>\n> I see, thanks.\n> We are using only {0}, just to avoid these pitfalls.\n> All changes here are safe, because, the tradeoff is\n>\n> MemSet with 0 to {0}\n>\n> Any else is ignored.\n>\n> The rest of the calls with MemSet are alignment and padding dependent, and\n> for now, will not be played.\n>\n> regards,\n> Ranier Vilela\n>",
"msg_date": "Tue, 2 Aug 2022 18:47:11 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em ter., 2 de ago. de 2022 às 10:17, mahendrakar s <\nmahendrakarforpg@gmail.com> escreveu:\n\n> Hi Ranier,\n>\n> I'm pretty late to thread but would like to know about your claim in the\n> thread:\n> `All compilers currently have memset optimized.`\n>\nWhat did I mean, modern compilers.\n\nI know one case of optimization where variable is not used after the memset.\n>\nProbably, the compiler decided to remove the variable altogether.\nThe most common is to remove the padding, when he understands that this is\npossible and safe.\nThis does not mean that this will happen in all cases.\nThe point here is, this is only possible when using memset.\n\n\n> Are the cases for which the optimization is done consistent across all the\n> compilers?\n>\nOf course not. But it does not matter.\n\nregards,\nRanier Vilela\n\nEm ter., 2 de ago. de 2022 às 10:17, mahendrakar s <mahendrakarforpg@gmail.com> escreveu:Hi Ranier,I'm pretty late to thread but would like to know about your claim in the thread:`All compilers currently have memset optimized.` What did I mean, modern compilers. I know one case of optimization where variable is not used after the memset.Probably, the compiler decided to remove the variable altogether.The most common is to remove the padding, when he understands that this is possible and safe.This does not mean that this will happen in all cases.The point here is, this is only possible when using memset. Are the cases for which the optimization is done consistent across all the compilers?Of course not. But it does not matter.regards,Ranier Vilela",
"msg_date": "Tue, 2 Aug 2022 11:18:12 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 01.08.22 19:08, Ranier Vilela wrote:\n> Like how \n> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919 \n> <https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919>\n> New attempt to remove more MemSet calls, that are safe.\n> \n> Attached v3 patch.\n\nNote that struct initialization does not set padding bits. So any \nstruct that is used as a hash key or that goes to disk or something \nsimilar needs to be set with memset/MemSet instead. Various places in \nthe code make explicit comments about that, which your patch deletes, \nwhich is a mistake. This patch needs to be adjusted carefully with this \nin mind before it can be considered.\n\n\n\n",
"msg_date": "Thu, 11 Aug 2022 12:38:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 11 de ago. de 2022 às 07:38, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> escreveu:\n\n> On 01.08.22 19:08, Ranier Vilela wrote:\n> > Like how\n> >\n> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n> > <\n> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n> >\n> > New attempt to remove more MemSet calls, that are safe.\n> >\n> > Attached v3 patch.\n>\n> Note that struct initialization does not set padding bits.\n\nAccording to:\nhttps://interrupt.memfault.com/blog/c-struct-padding-initialization\n\n2. individually set all members to 0:\n\nstruct foo a = {\n .i = 0,\n .b = 0,};\n\nSuffer from this problem.\n\n3. use { 0 } zero-initializer, not.\n\n So any\n> struct that is used as a hash key or that goes to disk or something\n> similar needs to be set with memset/MemSet instead. Various places in\n> the code make explicit comments about that, which your patch deletes,\n> which is a mistake. This patch needs to be adjusted carefully with this\n> in mind before it can be considered.\n>\nI think this needs better comprovation?\n\nregards,\nRanier Vilela\n\nEm qui., 11 de ago. de 2022 às 07:38, Peter Eisentraut <peter.eisentraut@enterprisedb.com> escreveu:On 01.08.22 19:08, Ranier Vilela wrote:\n> Like how \n> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919 \n> <https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919>\n> New attempt to remove more MemSet calls, that are safe.\n> \n> Attached v3 patch.\n\nNote that struct initialization does not set padding bits.According to:https://interrupt.memfault.com/blog/c-struct-padding-initialization2. individually set all members to 0: \n\nstruct foo a = {\n .i = 0,\n .b = 0,\n};\n \nSuffer from this problem. 3. use { 0 } zero-initializer, not. 
So any \nstruct that is used as a hash key or that goes to disk or something \nsimilar needs to be set with memset/MemSet instead. Various places in \nthe code make explicit comments about that, which your patch deletes, \nwhich is a mistake. This patch needs to be adjusted carefully with this \nin mind before it can be considered.I think this needs better comprovation?regards,Ranier Vilela",
"msg_date": "Thu, 11 Aug 2022 08:15:16 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 2022-Aug-11, Ranier Vilela wrote:\n\n> According to:\n> https://interrupt.memfault.com/blog/c-struct-padding-initialization\n\nDid you actually read it?\n\nhttps://interrupt.memfault.com/blog/c-struct-padding-initialization#structure-zero-initialization\n\n: This looks great! However, it’s not obvious (from looking at those snippets)\n: what the value loaded into the padding region will be.\n:\n: The unfortunate answer is: it depends\n:\n: The C11 standard, chapter §6.2.6.1/6 says this:\n:\n: : When a value is stored in an object of structure or union type, including in a\n: : member object, the bytes of the object representation that correspond to any\n: : padding bytes take unspecified values.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:49:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 11 de ago. de 2022 às 08:48, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> On 2022-Aug-11, Ranier Vilela wrote:\n>\n> > According to:\n> > https://interrupt.memfault.com/blog/c-struct-padding-initialization\n>\n> Did you actually read it?\n>\nYes, today.\n\n\n>\n>\n> https://interrupt.memfault.com/blog/c-struct-padding-initialization#structure-zero-initialization\n>\n> : This looks great! However, it’s not obvious (from looking at those\n> snippets)\n> : what the value loaded into the padding region will be.\n> :\n> : The unfortunate answer is: it depends\n> :\n> : The C11 standard, chapter §6.2.6.1/6 says this:\n> :\n> : : When a value is stored in an object of structure or union type,\n> including in a\n> : : member object, the bytes of the object representation that correspond\n> to any\n> : : padding bytes take unspecified values.\n>\nDid you see the Strategy 3 table, { 0 } ?\n\nregards,\nRanier Vilela\n\nEm qui., 11 de ago. de 2022 às 08:48, Alvaro Herrera <alvherre@alvh.no-ip.org> escreveu:On 2022-Aug-11, Ranier Vilela wrote:\n\n> According to:\n> https://interrupt.memfault.com/blog/c-struct-padding-initialization\n\nDid you actually read it?Yes, today. \n\nhttps://interrupt.memfault.com/blog/c-struct-padding-initialization#structure-zero-initialization\n\n: This looks great! However, it’s not obvious (from looking at those snippets)\n: what the value loaded into the padding region will be.\n:\n: The unfortunate answer is: it depends\n:\n: The C11 standard, chapter §6.2.6.1/6 says this:\n:\n: : When a value is stored in an object of structure or union type, including in a\n: : member object, the bytes of the object representation that correspond to any\n: : padding bytes take unspecified values.Did you see the Strategy 3 table, { 0 } ? regards,Ranier Vilela",
"msg_date": "Thu, 11 Aug 2022 08:51:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 08:51:53AM -0300, Ranier Vilela wrote:\n> Em qui., 11 de ago. de 2022 �s 08:48, Alvaro Herrera <\n> alvherre@alvh.no-ip.org> escreveu:\n> \n> > On 2022-Aug-11, Ranier Vilela wrote:\n> >\n> > > According to:\n> > > https://interrupt.memfault.com/blog/c-struct-padding-initialization\n> >\n> Did you see the Strategy 3 table, { 0 } ?\n\nIt explicitly shows that at least Ubuntu clang version 13.0.0-2 with -01\ndoesn't do anything about the padding bytes (and that's after testing only 2\ndifferent compilers). Even if those compilers didn't show any problem, we\nstill couldn't rely on an undefined behavior and assume that no other compilers\nbehave differently.\n\n\n",
"msg_date": "Thu, 11 Aug 2022 20:23:03 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qui., 11 de ago. de 2022 às 09:23, Julien Rouhaud <rjuju123@gmail.com>\nescreveu:\n\n> On Thu, Aug 11, 2022 at 08:51:53AM -0300, Ranier Vilela wrote:\n> > Em qui., 11 de ago. de 2022 às 08:48, Alvaro Herrera <\n> > alvherre@alvh.no-ip.org> escreveu:\n> >\n> > > On 2022-Aug-11, Ranier Vilela wrote:\n> > >\n> > > > According to:\n> > > > https://interrupt.memfault.com/blog/c-struct-padding-initialization\n> > >\n> > Did you see the Strategy 3 table, { 0 } ?\n>\n> It explicitly shows that at least Ubuntu clang version 13.0.0-2 with -01\n> doesn't do anything about the padding bytes (and that's after testing only\n> 2\n> different compilers). Even if those compilers didn't show any problem, we\n> still couldn't rely on an undefined behavior and assume that no other\n> compilers\n> behave differently.\n>\nYeah, although not a problem in the main current compilers clang, gcc and\nmsvc,\nit seems that this cannot be changed.\nBeing an undefined behavior, filling structures with holes, it seems to me\nthat you should always use MemSet or memset.\nSince even a current structure without holes could be changed in the future\nand become a bug.\n\nregards,\nRanier Vilela\n\nEm qui., 11 de ago. de 2022 às 09:23, Julien Rouhaud <rjuju123@gmail.com> escreveu:On Thu, Aug 11, 2022 at 08:51:53AM -0300, Ranier Vilela wrote:\n> Em qui., 11 de ago. de 2022 às 08:48, Alvaro Herrera <\n> alvherre@alvh.no-ip.org> escreveu:\n> \n> > On 2022-Aug-11, Ranier Vilela wrote:\n> >\n> > > According to:\n> > > https://interrupt.memfault.com/blog/c-struct-padding-initialization\n> >\n> Did you see the Strategy 3 table, { 0 } ?\n\nIt explicitly shows that at least Ubuntu clang version 13.0.0-2 with -01\ndoesn't do anything about the padding bytes (and that's after testing only 2\ndifferent compilers). 
Even if those compilers didn't show any problem, we\nstill couldn't rely on an undefined behavior and assume that no other compilers\nbehave differently.Yeah, although not a problem in the main current compilers clang, gcc and msvc,it seems that this cannot be changed.Being an undefined behavior, filling structures with holes, it seems to me that you should always use MemSet or memset.Since even a current structure without holes could be changed in the future and become a bug.regards,Ranier Vilela",
"msg_date": "Thu, 11 Aug 2022 13:59:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Hi Ranier,\n\nFollowing the comment in commit 9fd45870c1436b477264c0c82eb195df52bc0919,\n\n (The same could be done with appropriate memset() calls, but this\n patch is part of an effort to phase out MemSet(), so it doesn't touch\n memset() calls.)\n\nShould these obviously possible replacement of the standard library \nfunction \"memset\" be considered as well? For example, something like the \nattached one which is focusing on the pageinspect extension only.\n\n\nBest regards,\n\nDavid\n\nOn 2022-08-01 10:08 a.m., Ranier Vilela wrote:\n> Em sáb., 16 de jul. de 2022 às 16:54, Ranier Vilela \n> <ranier.vf@gmail.com> escreveu:\n>\n>\n>\n> Em sáb, 16 de jul de 2022 2:58 AM, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> escreveu:\n>\n> On 11.07.22 21:06, Ranier Vilela wrote:\n> > Em qui., 7 de jul. de 2022 às 14:01, Ranier Vilela\n> <ranier.vf@gmail.com\n> > <mailto:ranier.vf@gmail.com>> escreveu:\n> >\n> > Attached the v1 of your patch.\n> > I think that all is safe to switch MemSet by {0}.\n> >\n> > Here the rebased patch v2, against latest head.\n>\n> I have committed my patch with Álvaro's comments addressed\n>\n> I see.\n> It's annoing that old compiler (gcc 4.7.2) don't handle this style.\n>\n>\n> Your patch appears to add in changes that are either arguably\n> out of\n> scope or would need further review (e.g., changing memset()\n> calls,\n> changing the scope of some variables, changing places that\n> need to worry\n> about padding bits). Please submit separate patches for\n> those, and we\n> can continue the analysis.\n>\n> Sure.\n>\n> Hi, sorry for the delay.\n> Like how \n> https://github.com/postgres/postgres/commit/9fd45870c1436b477264c0c82eb195df52bc0919\n> New attempt to remove more MemSet calls, that are safe.\n>\n> Attached v3 patch.\n>\n> regards,\n> Ranier Vilela\n>",
"msg_date": "Fri, 19 Aug 2022 15:27:04 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em sex., 19 de ago. de 2022 às 19:27, David Zhang <david.zhang@highgo.ca>\nescreveu:\n\n> Hi Ranier,\n>\nHi David,\n\n>\n> Following the comment in commit 9fd45870c1436b477264c0c82eb195df52bc0919,\n>\n> (The same could be done with appropriate memset() calls, but this\n> patch is part of an effort to phase out MemSet(), so it doesn't touch\n> memset() calls.)\n>\n> Should these obviously possible replacement of the standard library\n> function \"memset\" be considered as well?\n>\n Yes, sure.\nIn modern C compilers like clang above 13, gcc and msvc the initialization\nwith {0},\nhas no problem, because all bits are correctly initialized to zero.\n\nHowever with some old compilers, such behavior is not strictly followed, so\nwith structs it is not safe to use.\nBut especially for arrays, whose use doesn't depend on filling the holes,\nit's certainly safe and cheap to use,\nwhich is the case here.\n\nFor example, something like the attached one which is focusing on the\n> pageinspect extension only.\n>\nSurely you did, but it has to be said, it was compiled and tested with at\nleast a make check.\nLooks like it's ok, LTGM.\n\nregards,\nRanier Vilela\n\n>\n\nEm sex., 19 de ago. de 2022 às 19:27, David Zhang <david.zhang@highgo.ca> escreveu:\n\nHi Ranier,Hi David, \n\n Following the comment in commit\n 9fd45870c1436b477264c0c82eb195df52bc0919,\n\n (The same could be done with\n appropriate memset() calls, but this\n patch is part of an effort to phase out MemSet(), so it\n doesn't touch\n memset() calls.)\n\n Should these obviously possible replacement of the standard\n library function \"memset\" be considered as well? 
Yes, sure.In modern C compilers like clang above 13, gcc and msvc the initialization with {0},has no problem, because all bits are correctly initialized to zero.However with some old compilers, such behavior is not strictly followed, so with structs it is not safe to use.But especially for arrays, whose use doesn't depend on filling the holes, it's certainly safe and cheap to use,which is the case here. For example,\n something like the attached one which is focusing on the\n pageinspect extension only.Surely you did, but it has to be said, it was compiled and tested with at least a make check.Looks like it's ok, LTGM. regards,Ranier Vilela",
"msg_date": "Sat, 20 Aug 2022 11:26:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 2022-Aug-19, David Zhang wrote:\n\n> Should these obviously possible replacement of the standard library function\n> \"memset\" be considered as well? For example, something like the attached one\n> which is focusing on the pageinspect extension only.\n\nIf you do this, you're creating a potential backpatching hazard. This\nis OK if we get something in return, so a question to ask is whether\nthere is any benefit in doing it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n",
"msg_date": "Wed, 24 Aug 2022 16:30:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 24.08.22 16:30, Alvaro Herrera wrote:\n> On 2022-Aug-19, David Zhang wrote:\n>> Should these obviously possible replacement of the standard library function\n>> \"memset\" be considered as well? For example, something like the attached one\n>> which is focusing on the pageinspect extension only.\n> \n> If you do this, you're creating a potential backpatching hazard. This\n> is OK if we get something in return, so a question to ask is whether\n> there is any benefit in doing it.\n\nI don't follow how this is a backpatching hazard.\n\n\n",
"msg_date": "Wed, 24 Aug 2022 20:50:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 24.08.22 16:30, Alvaro Herrera wrote:\n>> If you do this, you're creating a potential backpatching hazard. This\n>> is OK if we get something in return, so a question to ask is whether\n>> there is any benefit in doing it.\n\n> I don't follow how this is a backpatching hazard.\n\nCall me a trogdolyte, but I don't follow how it's an improvement.\nIt looks to me like an entirely random change that doesn't get rid\nof assumptions about what the bits are, it just replaces one set of\nassumptions with a different set. Moreover, the new set of assumptions\nmay include \"there are no padding bits in here\", which is mighty fragile\nand hard to verify. So I frankly do not find this a stylistic improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 15:00:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 3:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Call me a trogdolyte, but I don't follow how it's an improvement.\n> It looks to me like an entirely random change that doesn't get rid\n> of assumptions about what the bits are, it just replaces one set of\n> assumptions with a different set. Moreover, the new set of assumptions\n> may include \"there are no padding bits in here\", which is mighty fragile\n> and hard to verify. So I frankly do not find this a stylistic improvement.\n\nDitto.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 15:19:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qua., 24 de ago. de 2022 às 16:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > On 24.08.22 16:30, Alvaro Herrera wrote:\n> >> If you do this, you're creating a potential backpatching hazard. This\n> >> is OK if we get something in return, so a question to ask is whether\n> >> there is any benefit in doing it.\n>\n> > I don't follow how this is a backpatching hazard.\n>\n> Call me a trogdolyte, but I don't follow how it's an improvement.\n> It looks to me like an entirely random change that doesn't get rid\n> of assumptions about what the bits are, it just replaces one set of\n> assumptions with a different set. Moreover, the new set of assumptions\n> may include \"there are no padding bits in here\", which is mighty fragile\n> and hard to verify. So I frankly do not find this a stylistic improvement.\n>\nBut, these same arguments apply to Designated Initializers [1].\n\nlike:\nstruct foo a = {\n .i = 0,\n .b = 0,\n};\n\nThat is slowly being introduced and IMHO brings the same problems with\npadding bits.\n\nregards,\nRanier Vilela\n\n[1] https://interrupt.memfault.com/blog/c-struct-padding-initialization\n\nEm qua., 24 de ago. de 2022 às 16:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 24.08.22 16:30, Alvaro Herrera wrote:\n>> If you do this, you're creating a potential backpatching hazard. This\n>> is OK if we get something in return, so a question to ask is whether\n>> there is any benefit in doing it.\n\n> I don't follow how this is a backpatching hazard.\n\nCall me a trogdolyte, but I don't follow how it's an improvement.\nIt looks to me like an entirely random change that doesn't get rid\nof assumptions about what the bits are, it just replaces one set of\nassumptions with a different set. 
Moreover, the new set of assumptions\nmay include \"there are no padding bits in here\", which is mighty fragile\nand hard to verify. So I frankly do not find this a stylistic improvement.But, these same arguments apply to Designated Initializers [1].like:struct foo a = { .i = 0, .b = 0,};That is slowly being introduced and IMHO brings the same problems with padding bits.regards,Ranier Vilela[1] https://interrupt.memfault.com/blog/c-struct-padding-initialization",
"msg_date": "Wed, 24 Aug 2022 16:20:15 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 3:20 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> But, these same arguments apply to Designated Initializers [1].\n>\n> like:\n> struct foo a = {\n> .i = 0,\n> .b = 0,\n> };\n>\n> That is slowly being introduced and IMHO brings the same problems with padding bits.\n\nYep. I don't find that an improvement over a MemSet on the struct\neither, if we're just using it to fill in zeroes.\n\nIf we're using it to fill in non-zero values, though, then there's a\nreasonable argument that it offers some notational convenience.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 15:41:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "Em qua., 24 de ago. de 2022 às 16:41, Robert Haas <robertmhaas@gmail.com>\nescreveu:\n\n> On Wed, Aug 24, 2022 at 3:20 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > But, these same arguments apply to Designated Initializers [1].\n> >\n> > like:\n> > struct foo a = {\n> > .i = 0,\n> > .b = 0,\n> > };\n> >\n> > That is slowly being introduced and IMHO brings the same problems with\n> padding bits.\n>\n> Yep. I don't find that an improvement over a MemSet on the struct\n> either, if we're just using it to fill in zeroes.\n>\n> If we're using it to fill in non-zero values, though, then there's a\n> reasonable argument that it offers some notational convenience.\n>\nEven in that case, it still hides bugs.\nAll arguments against {0} apply entirely to this initialization type.\nBecause the padding bits remain uninitialized.\n\nNote that where all major compilers are correctly initializing padding bits\nwith {0}, then this misbehavior will become of no practical effect in the\nfuture.\n\nregards,\nRanier Vilela\n\nEm qua., 24 de ago. de 2022 às 16:41, Robert Haas <robertmhaas@gmail.com> escreveu:On Wed, Aug 24, 2022 at 3:20 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> But, these same arguments apply to Designated Initializers [1].\n>\n> like:\n> struct foo a = {\n> .i = 0,\n> .b = 0,\n> };\n>\n> That is slowly being introduced and IMHO brings the same problems with padding bits.\n\nYep. 
I don't find that an improvement over a MemSet on the struct\neither, if we're just using it to fill in zeroes.\n\nIf we're using it to fill in non-zero values, though, then there's a\nreasonable argument that it offers some notational convenience.Even in that case, it still hides bugs.All arguments against {0} apply entirely to this initialization type.Because the padding bits remain uninitialized.Note that where all major compilers are correctly initializing padding bitswith {0}, then this misbehavior will become of no practical effect in the future.regards,Ranier Vilela",
"msg_date": "Wed, 24 Aug 2022 16:49:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On 2022-Aug-24, Peter Eisentraut wrote:\n\n> I don't follow how this is a backpatching hazard.\n\nIt changes code. Any bugfix in the surrounding code would have to fix a\nconflict. That is nonzero effort. Is it a huge risk? No, it is very\nsmall risk and a very small cost to fix such a conflict; but my claim is\nthat this change has zero benefit, therefore we should not incur a\nnonzero future effort.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:38:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 10:38:41AM +0200, Alvaro Herrera wrote:\n> It changes code. Any bugfix in the surrounding code would have to fix a\n> conflict. That is nonzero effort. Is it a huge risk? No, it is very\n> small risk and a very small cost to fix such a conflict; but my claim is\n> that this change has zero benefit, therefore we should not incur a\n> nonzero future effort.\n\nAgreed to leave things as they are. This really comes down to if we\nwant to make this code more C99-ish or not, and the post-patch result\nis logically the same as the pre-patch result.\n--\nMichael",
"msg_date": "Thu, 25 Aug 2022 20:07:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Avoid unecessary MemSet call (src/backend/utils/cache/relcache.c)"
}
]
[
{
"msg_contents": "notnulls discussion is forked from UniqueKey stuff, you can see the\nattachment\nfor the UnqiueKey introduction. Tom raised his opinion to track the\nnullability\ninside Var[1][2][3], this thread would start from there based on my\nunderstanding.\n\nGenerally tracking the null attributes inside Var would have something like:\n\nstruct Var\n{\n...;\n int nullable; // -1 unknown, 0 - not nullable. 1 - nullable\n};\n\nand then semantics of Var->nullable must be attached to a RelOptInfo. For\nexample:\n\nCREATE TABLE t1(a int, b int);\n\nSELECT abs(a) FROM t1 WHERE a > -100;\n\nThe var in RelOptInfo->reltarget should have nullable = 0 but the var in\nRelOptInfo->baserestrictinfo should have nullable = 1; The beauty of this\nare: a). It can distinguish the two situations perfectly b). Whenever we\nwant\nto know the nullable attribute of a Var for an expression, it is super easy\nto\nknow. In summary, we need to maintain the nullable attribute at 2 different\nplaces. one is the before the filters are executed(baserestrictinfo,\njoininfo,\nec_list at least). one is after the filters are executed\n(RelOptInfo.reltarget\nonly?)\n\nCome to JoinRel, we still need to maintain the 2 different cases as well.\n\nAs for the joinrel.reltarget, currently it looks up the inputrel's\nreltarget to\nget the Var, so it is easy to inherit from Var->nullable from inputrel, but\nwe need to consider the new changes introduced by current join,\nLike new NOT nullable attributes because of join clauses OR new nullable\nattributes because of outer join. Everything looks good for now.\n\nThe hard part is RelOptInfo.joininfo & root->eq_classes. All of them uses\nthe shared RestrictInfo, and it is unclear which Var->nullable should be\nused in\nthem. To not provide a wrong answer, I think we can assume nullable=-1\n(unknown)\nand let the upper layer decides what to do (do we have known use cases to\nuse\nthe nullable attribute here?).\n\nMore considerations about this strategy:\n1. 
We might use more memory for different var copies, the only known cases\n RelOptInfo->reltarget for now.\n2. _equalVar() has more complex semantics: shall we consider nulls or not.\n\nMy recent experience reminds me of another interesting use case of UniqueKey\nwhich may reduce the planning time a lot IIUC (Value 3 in then attachment).\nSince\nPG15 has just been released, I wonder if more people have time to discuss\nthis topic\nagain. Do I think the way in the right direction?\n\n[1] https://www.postgresql.org/message-id/1551312.1613142245%40sss.pgh.pa.us\n[2]\nhttps://www.postgresql.org/message-id/CAApHDvrRwhWCPKUD5H-EQoezHf%3DfnUUsPgTAnXsEOV8f8SF7XQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/1664320.1625577290%40sss.pgh.pa.us\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Sun, 15 May 2022 11:11:46 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tracking notnull attributes inside Var"
},
{
"msg_contents": "On Sun, May 15, 2022 at 8:41 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n> The var in RelOptInfo->reltarget should have nullable = 0 but the var in\n> RelOptInfo->baserestrictinfo should have nullable = 1; The beauty of this\n> are: a). It can distinguish the two situations perfectly b). Whenever we want\n> to know the nullable attribute of a Var for an expression, it is super easy to\n> know. In summary, we need to maintain the nullable attribute at 2 different\n> places. one is the before the filters are executed(baserestrictinfo, joininfo,\n> ec_list at least). one is after the filters are executed (RelOptInfo.reltarget\n> only?)\n\nThanks for identifying this. What you have written makes sense and it\nmight open a few optimization opportunities. But let me put down some\nother thoughts here. You might want to take those into consideration\nwhen designing your solution.\n\nDo we want to just track nullable and non-nullable. May be we want\nexpand this class to nullable (var may be null), non-nullable (Var is\ndefinitely non-NULL), null (Var will be always NULL).\n\nBut the other way to look at this is along the lines of equivalence\nclasses. Equivalence classes record the expressions which are equal in\nthe final result of the query. The equivalence class members are not\nequal at all the stages of query execution. But because they are\nequal in the final result, we can impose that restriction on the lower\nlevels as well. Can we think of nullable in that fashion? If a Var is\nnon-nullable in the final result, we can impose that restriction on\nthe intermediate stages since rows with NULL values for that Var will\nbe filtered out somewhere. Similarly we could argue for null Var. But\nknowledge that a Var is nullable in the final result does not impose a\nNULL, non-NULL restriction on the intermediate stages. 
If we follow\nthis thought process, we don't need to differentiate Var at different\nstages in query.\n\n>\n> Come to JoinRel, we still need to maintain the 2 different cases as well.\n>\n> As for the joinrel.reltarget, currently it looks up the inputrel's reltarget to\n> get the Var, so it is easy to inherit from Var->nullable from inputrel, but\n> we need to consider the new changes introduced by current join,\n> Like new NOT nullable attributes because of join clauses OR new nullable\n> attributes because of outer join. Everything looks good for now.\n\nYes, if we want to maintain nullness at different stages in the query.\n\n>\n> The hard part is RelOptInfo.joininfo & root->eq_classes. All of them uses\n> the shared RestrictInfo, and it is unclear which Var->nullable should be used in\n> them. To not provide a wrong answer, I think we can assume nullable=-1 (unknown)\n> and let the upper layer decides what to do (do we have known use cases to use\n> the nullable attribute here?).\n\nI think what applies to baserestrictinfo and reltarget also applies to\njoininfo and join's reltarget. There will be three stages - join\nclauses, join quals and reltarget.\n\nIn EQs the Vars in RestrictInfo will come from joininfo but EQ member\nVars will derive their nullable-ness from corresponding reltarget. I\ncan be wrong though.\n\n>\n> More considerations about this strategy:\n> 1. We might use more memory for different var copies, the only known cases\n> RelOptInfo->reltarget for now.\n\nWhen a Var is copied the whole expression tree needs to be copied.\nThat might be more memory than just copies of Var nodes.\n\n> 2. _equalVar() has more complex semantics: shall we consider nulls or not.\n\nThis is interesting. It might have impact on set_plan_references and\nplanner's ability to search and match expressions.\n\nBut if we take the approach I have suggested earlier, this question\nwill not arise.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 17 May 2022 18:19:53 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking notnull attributes inside Var"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> notnulls discussion is forked from UniqueKey stuff, you can see the\n> attachment\n> for the UnqiueKey introduction. Tom raised his opinion to track the\n> nullability\n> inside Var[1][2][3], this thread would start from there based on my\n> understanding.\n\nI'm pretty certain that I never suggested this:\n\n> struct Var\n> {\n> ...;\n> int nullable; // -1 unknown, 0 - not nullable. 1 - nullable\n> };\n\nYou're free to pursue it if you like, but I think it will be a dead end.\nThe fundamental problem as you note is that equalVar() cannot do anything\nsane with a field defined that way. Also, we'd have to generate Vars\ninitially with nullable = unknown (else, for example, ALTER SET/DROP NOT\nNULL breaks stored views referring to the column). It'd be on the planner\nto run through the tree and replace that with \"nullable\" or \"not\nnullable\". It's hard to see how that's more advantageous than just\nkeeping the info in the associated RelOptInfo.\n\nAlso, I think you're confusing two related but distinct issues. For\ncertain optimization issues, we'd like to keep track of whether a column\nstored in a table is known NOT NULL. However, that's not the same thing\nas the question that I've muttered about, which is how to treat a Var\nthat's been possibly forced to null due to null-extension of an outer\njoin. That is a different value from the Var as read from the table,\nbut we currently represent it the same within the planner, which causes\nvarious sorts of undesirable complication. We cannot fix that by setting\nVar.nullable = true in above-the-join instances, because it might also\nbe true in below-the-join instances. \"Known not null in the table\" is\nnot the inverse of \"potentially nulled by an outer join\". 
Moreover, we\nprobably need to know *which* join is the one potentially nulling the Var,\nso a bool is not likely enough anyway.\n\nThe schemes I've been toying with tend to look more like putting a\nPlaceHolderVar-ish wrapper around the Var or expression that represents\nthe below-the-join value. The wrapper node could carry any join ID\ninfo that we find necessary. The thing that I'm kind of stalled on is\nhow to define this struct so that it's not a big headache for join\nstrength reduction (which could remove the need for a wrapper altogether)\nor outer-join reordering (which makes it a bit harder to define which\njoin we think is the one nulling the value).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 13:25:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking notnull attributes inside Var"
},
{
"msg_contents": "Hi Tom:\n\nThanks for your attention!\n\nOn Wed, May 18, 2022 at 1:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > notnulls discussion is forked from UniqueKey stuff, you can see the\n> > attachment\n> > for the UnqiueKey introduction. Tom raised his opinion to track the\n> > nullability\n> > inside Var[1][2][3], this thread would start from there based on my\n> > understanding.\n>\n> I'm pretty certain that I never suggested this:\n>\n> > struct Var\n> > {\n> > ...;\n> > int nullable; // -1 unknown, 0 - not nullable. 1 - nullable\n> > };\n>\n> You're free to pursue it if you like, but I think it will be a dead end.\n>\n\nOK, Here is a huge misunderstanding. I have my own solution at the\nbeginning and then I think you want to go with this direction and I think\nit is really hard to understand, so I started this thread to make things\nclear. It is so great that the gap is filled now.\n\nThe fundamental problem as you note is that equalVar() cannot do anything\n> sane with a field defined that way. Also, we'd have to generate Vars\n> initially with nullable = unknown (else, for example, ALTER SET/DROP NOT\n> NULL breaks stored views referring to the column). It'd be on the planner\n> to run through the tree and replace that with \"nullable\" or \"not\n> nullable\". It's hard to see how that's more advantageous than just\n> keeping the info in the associated RelOptInfo.\n>\n\nAgreed.\n\n\n>\n> Also, I think you're confusing two related but distinct issues. For\n> certain optimization issues, we'd like to keep track of whether a column\n> stored in a table is known NOT NULL. However, that's not the same thing\n> as the question that I've muttered about, which is how to treat a Var\n> that's been possibly forced to null due to null-extension of an outer\n> join. 
That is a different value from the Var as read from the table,\n> but we currently represent it the same within the planner, which causes\n> various sorts of undesirable complication. We cannot fix that by setting\n\nVar.nullable = true in above-the-join instances, because it might also\n> be true in below-the-join instances. \"Known not null in the table\" is\n> not the inverse of \"potentially nulled by an outer join\". Moreover, we\n> probably need to know *which* join is the one potentially nulling the Var,\n> so a bool is not likely enough anyway.\n>\n\nI read the above graph several times, but *I think probably my code can\nexpress better than my words*. It would be great that you can have a\nlook at them. Just one point to mention now: Seems you didn't mention the\ncase where the NULL values are filtered by qual, not sure it is negligible\nor by mistake.\n\nCREATE TABLE t(a int);\nSELECT * FROM t WHERE a > 1;\n\nMy patch is my previous solution not the Inside Var one.\n\n\n> The schemes I've been toying with tend to look more like putting a\n> PlaceHolderVar-ish wrapper around the Var or expression that represents\n> the below-the-join value. The wrapper node could carry any join ID\n> info that we find necessary. The thing that I'm kind of stalled on is\n> how to define this struct so that it's not a big headache for join\n> strength reduction (which could remove the need for a wrapper altogether)\n> or outer-join reordering (which makes it a bit harder to define which\n> join we think is the one nulling the value).\n>\n>\nNot sure if the \"NULL values are filtered by qual '' matters in this\nsolution,\nand I'm pretty open for direction. But to avoid further misunderstanding\nfrom me, I would like to fill more gaps first by raising my patch now\nand continue talking in this direction.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 20 May 2022 12:42:41 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tracking notnull attributes inside Var"
},
{
"msg_contents": "Hi Ashutosh:\n\n Nice to see you again!\n\nOn Tue, May 17, 2022 at 8:50 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Sun, May 15, 2022 at 8:41 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> >\n> > The var in RelOptInfo->reltarget should have nullable = 0 but the var in\n> > RelOptInfo->baserestrictinfo should have nullable = 1; The beauty of\n> this\n> > are: a). It can distinguish the two situations perfectly b). Whenever we\n> want\n> > to know the nullable attribute of a Var for an expression, it is super\n> easy to\n> > know. In summary, we need to maintain the nullable attribute at 2\n> different\n> > places. one is the before the filters are executed(baserestrictinfo,\n> joininfo,\n> > ec_list at least). one is after the filters are executed\n> (RelOptInfo.reltarget\n> > only?)\n>\n> Thanks for identifying this. What you have written makes sense and it\n> might open a few optimization opportunities. But let me put down some\n> other thoughts here. You might want to take those into consideration\n> when designing your solution.\n>\n\nThanks.\n\n\n>\n> Do we want to just track nullable and non-nullable. May be we want\n> expand this class to nullable (var may be null), non-nullable (Var is\n> definitely non-NULL), null (Var will be always NULL).\n>\n>\nCurrently it doesn't support \"Var will be always NULL\" . Do you have any\nuse cases for this? and I can't think of too many cases where we can get\nsuch information except something like \"SELECT a FROM t WHERE a\nIS NULL\".\n\nBut the other way to look at this is along the lines of equivalence\n> classes. Equivalence classes record the expressions which are equal in\n> the final result of the query. The equivalence class members are not\n> equal at all the stages of query execution. But because they are\n> equal in the final result, we can impose that restriction on the lower\n> levels as well. Can we think of nullable in that fashion? 
If a Var is\n> non-nullable in the final result, we can impose that restriction on\n> the intermediate stages since rows with NULL values for that Var will\n> be filtered out somewhere. Similarly we could argue for null Var. But\n> knowledge that a Var is nullable in the final result does not impose a\n> NULL, non-NULL restriction on the intermediate stages. If we follow\n> this thought process, we don't need to differentiate Var at different\n> stages in query.\n>\n\nI agree this is an option. If so, we need to track it under the PlannerInfo\nstruct, but it would not be as fine-grained as my previous proposal. Without\nintermediate information, we can't know if a UniqueKey contains multiple\nNULLs. This would not be an issue for the \"mark Distinct as no-op\" case,\nbut I'm not sure it is OK for other UniqueKey use cases. So my current idea\nis still to maintain the intermediate information, unless we are sure it\ncosts too much or is too complex to implement, which I don't think is the\ncase for now at least. So if you have time to look at the attached patch,\nthat would be super great as well.\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Fri, 20 May 2022 13:18:32 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tracking notnull attributes inside Var"
},
{
"msg_contents": "I thought about the strategy below in the past few days, and think it\nis better because it uses fewer cycles to get the same answer. IIUC, the\nrelated structs should be created during / after deconstruct_jointree rather\nthan at the join_search_xx stage.\n\n\n> The schemes I've been toying with tend to look more like putting a\n> PlaceHolderVar-ish wrapper around the Var or expression that represents\n> the below-the-join value. The wrapper node could carry any join ID\n> info that we find necessary.\n\n\nJust to confirm my understanding, the wrapper node should answer some\nquestions like this:\n\n/*\n * rel_is_nullable_side\n *\n * For the given join ID joinrelids, return whether relid is on the nullable\n * side.\n */\nstatic bool\nrel_is_nullable_side(PlannerInfo *root, Relids joinrelids, Index relid)\n{\n    Assert(bms_is_member(relid, joinrelids));\n    ...\n}\n\n\n> The thing that I'm kind of stalled on is\n> how to define this struct so that it's not a big headache for join\n> strength reduction (which could remove the need for a wrapper altogether)\n> or outer-join reordering (which makes it a bit harder to define which\n> join we think is the one nulling the value).\n>\n\nThinking about the outer-join reorder case: can we just rely on\nSpecialJoinInfo.min_lefthand & min_righthand to get the answer?\nThe attached patch is based on that, and I did some tests in the patch\nas well; it looks like the answer is correct.\n\nWhat's more, if the above is correct and the number of calls to\nrel_is_nullable_side is small, do we still need to think about a more\nefficient data structure?\n\nThanks!\n\n-- \nBest Regards\nAndy Fan",
"msg_date": "Mon, 13 Jun 2022 19:27:56 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tracking notnull attributes inside Var"
}
] |
[
{
"msg_contents": "Currently, CREATE STATS requires you to think of a name for each stats\nobject, which is fairly painful, so users would prefer an\nautomatically assigned name.\n\nAttached patch allows this, which turns out to be very simple, since a\nname assignment function already exists.\n\nThe generated name is simple, but that's exactly what users do anyway,\nso it is not too bad.\n\nTests, docs included.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sun, 15 May 2022 13:20:34 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Sun, May 15, 2022 at 2:20 PM Simon Riggs <simon.riggs@enterprisedb.com>\nwrote:\n\n> Currently, CREATE STATS requires you to think of a name for each stats\n> object, which is fairly painful, so users would prefer an\n> automatically assigned name.\n>\n> Attached patch allows this, which turns out to be very simple, since a\n> name assignment function already exists.\n>\n> The generated name is simple, but that's exactly what users do anyway,\n> so it is not too bad.\n>\n>\ngood idea\n\nPavel\n\n\n> Tests, docs included.\n>\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>",
"msg_date": "Sun, 15 May 2022 16:12:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Sun, 15 May 2022 at 14:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> Currently, CREATE STATS requires you to think of a name for each stats\n> object, which is fairly painful, so users would prefer an\n> automatically assigned name.\n>\n> Attached patch allows this, which turns out to be very simple, since a\n> name assignment function already exists.\n>\n> The generated name is simple, but that's exactly what users do anyway,\n> so it is not too bad.\n\nCool.\n\n> Tests, docs included.\n\nSomething I noticed is that this grammar change is quite different\nfrom how create index specifies its optional name. Because we already\nhave separate statement sections for with and without IF NOT EXISTS,\nadding another branch will add even more duplication. Using a new\nopt_name production (potentially renamed from opt_index_name?) would\nprobably reduce the amount of duplication in the grammar.\n\nWe might be able to use opt_if_not_exists to fully remove the\nduplicated grammars, but I don't think we would be able to keep the\n\"CREATE STATISTICS IF NOT EXISTS <<no name>> ON col1, col2 FROM table\"\nsyntax illegal.\n\nPlease also update the comment in gram.y above the updated section\nthat details the expected grammar for CREATE STATISTICS, as you seem\nto have overlooked that copy of grammar documentation.\n\nApart from these two small issues, this passes tests and seems complete.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 6 Jul 2022 20:35:17 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Wed, 6 Jul 2022 at 19:35, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Sun, 15 May 2022 at 14:20, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > Currently, CREATE STATS requires you to think of a name for each stats\n> > object, which is fairly painful, so users would prefer an\n> > automatically assigned name.\n> >\n> > Attached patch allows this, which turns out to be very simple, since a\n> > name assignment function already exists.\n> >\n> > The generated name is simple, but that's exactly what users do anyway,\n> > so it is not too bad.\n>\n> Cool.\n\nThanks for your review.\n\n\n> > Tests, docs included.\n>\n> Something I noticed is that this grammar change is quite different\n> from how create index specifies its optional name. Because we already\n> have a seperate statement sections for with and without IF NOT EXISTS,\n> adding another branch will add even more duplication. Using a new\n> opt_name production (potentially renamed from opt_index_name?) would\n> probably reduce the amount of duplication in the grammar.\n\nThere are various other ways of doing this and, yes, we could refactor\nother parts of the grammar to make this work. There is a specific\nguideline about patch submission that says the best way to get a patch\nrejected is to include unnecessary changes. 
With that in mind, let's\nkeep the patch simple and exactly aimed at the original purpose.\n\nI'll leave it for committers to decide whether other refactoring is wanted.\n\n> We might be able to use opt_if_not_exists to fully remove the\n> duplicated grammars, but I don't think we would be able to keep the\n> \"CREATE STATISTICS IF NOT EXISTS <<no name>> ON col1, col2 FROM table\"\n> syntax illegal.\n>\n> Please also update the comment in gram.y above the updated section\n> that details the expected grammar for CREATE STATISTICS, as you seem\n> to have overlooked that copy of grammar documentation.\n\nI have made the comment show that the name is optional, thank you.\n\n\n> Apart from these two small issues, this passes tests and seems complete.\n\nPatch v4 attached\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Thu, 7 Jul 2022 11:54:51 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Thu, 7 Jul 2022 at 12:55, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> There are various other ways of doing this and, yes, we could refactor\n> other parts of the grammar to make this work. There is a specific\n> guideline about patch submission that says the best way to get a patch\n> rejected is to include unnecessary changes. With that in mind, let's\n> keep the patch simple and exactly aimed at the original purpose.\n>\n> I'll leave it for committers to decide whether other refactoring is wanted.\n\nFair enough.\n\n> I have made the comment show that the name is optional, thank you.\n\nThe updated comment implies that IF NOT EXISTS is allowed without a\ndefined name, which is false:\n\n> + * CREATE STATISTICS [IF NOT EXISTS] [stats_name] [(stat types)]\n\nA more correct version would be\n\n+ * CREATE STATISTICS [ [IF NOT EXISTS] stats_name ]\n[(stat types)]\n\n> Patch v4 attached\n\nThanks for working on this!\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 7 Jul 2022 12:58:37 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Thu, 7 Jul 2022 at 11:58, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 7 Jul 2022 at 12:55, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > There are various other ways of doing this and, yes, we could refactor\n> > other parts of the grammar to make this work. There is a specific\n> > guideline about patch submission that says the best way to get a patch\n> > rejected is to include unnecessary changes. With that in mind, let's\n> > keep the patch simple and exactly aimed at the original purpose.\n> >\n> > I'll leave it for committers to decide whether other refactoring is wanted.\n>\n> Fair enough.\n>\n> > I have made the comment show that the name is optional, thank you.\n>\n> The updated comment implies that IF NOT EXISTS is allowed without a\n> defined name, which is false:\n>\n> > + * CREATE STATISTICS [IF NOT EXISTS] [stats_name] [(stat types)]\n>\n> A more correct version would be\n>\n> + * CREATE STATISTICS [ [IF NOT EXISTS] stats_name ]\n> [(stat types)]\n\nThere you go\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Wed, 13 Jul 2022 07:07:46 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Wed, 13 Jul 2022 at 08:07, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Thu, 7 Jul 2022 at 11:58, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > A more correct version would be\n> >\n> > + * CREATE STATISTICS [ [IF NOT EXISTS] stats_name ]\n> > [(stat types)]\n>\n> There you go\n\nThanks!\n\nI think this is ready for a committer, so I've marked it as such.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 20 Jul 2022 13:00:48 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Wed, 20 Jul 2022 at 12:01, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 13 Jul 2022 at 08:07, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > > + * CREATE STATISTICS [ [IF NOT EXISTS] stats_name ]\n>\n> I think this is ready for a committer, so I've marked it as such.\n>\n\nPicking this up...\n\nI tend to agree with Matthias' earlier point about avoiding code\nduplication in the grammar. Without going off and refactoring other\nparts of the grammar not related to this patch, it's still a slightly\nsmaller, simpler change, and less code duplication, to do this using a\nnew opt_stats_name production in the grammar, as in the attached.\n\nI also noticed a comment in CreateStatistics() that needed updating.\n\nBarring any further comments, I'll push this shortly.\n\nRegards,\nDean",
"msg_date": "Thu, 21 Jul 2022 15:12:29 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Thu, 21 Jul 2022 at 15:12, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 20 Jul 2022 at 12:01, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Wed, 13 Jul 2022 at 08:07, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > > + * CREATE STATISTICS [ [IF NOT EXISTS] stats_name ]\n> >\n> > I think this is ready for a committer, so I've marked it as such.\n> >\n>\n> Picking this up...\n>\n> I tend to agree with Matthias' earlier point about avoiding code\n> duplication in the grammar. Without going off and refactoring other\n> parts of the grammar not related to this patch, it's still a slightly\n> smaller, simpler change, and less code duplication, to do this using a\n> new opt_stats_name production in the grammar, as in the attached.\n>\n> I also noticed a comment in CreateStatistics() that needed updating.\n>\n> Barring any further comments, I'll push this shortly.\n\nNice change, please proceed.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 21 Jul 2022 17:10:05 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On 7/21/22 16:12, Dean Rasheed wrote:\n> On Wed, 20 Jul 2022 at 12:01, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>>\n>> On Wed, 13 Jul 2022 at 08:07, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>>>\n>>>> + * CREATE STATISTICS [ [IF NOT EXISTS] stats_name ]\n>>\n>> I think this is ready for a committer, so I've marked it as such.\n>>\n> \n> Picking this up...\n> \n> I tend to agree with Matthias' earlier point about avoiding code\n> duplication in the grammar. Without going off and refactoring other\n> parts of the grammar not related to this patch, it's still a slightly\n> smaller, simpler change, and less code duplication, to do this using a\n> new opt_stats_name production in the grammar, as in the attached.\n> \n> I also noticed a comment in CreateStatistics() that needed updating.\n> \n> Barring any further comments, I'll push this shortly.\n> \n\n+1\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Jul 2022 18:19:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On 2022-Jul-21, Dean Rasheed wrote:\n\n> I tend to agree with Matthias' earlier point about avoiding code\n> duplication in the grammar. Without going off and refactoring other\n> parts of the grammar not related to this patch, it's still a slightly\n> smaller, simpler change, and less code duplication, to do this using a\n> new opt_stats_name production in the grammar, as in the attached.\n> \n> I also noticed a comment in CreateStatistics() that needed updating.\n> \n> Barring any further comments, I'll push this shortly.\n\nThanks. I was looking at the recently modified REINDEX syntax and\nnoticed there another spot for taking an optional name. I ended up\nreusing OptSchemaName for that, as in the attached patch. I think\nadding production-specific additional productions is pointless and\nprobably bloats the grammar. So let me +1 your push of the patch you\nposted, just to keep things moving forward, but in addition I propose to\nlater rename OptSchemaName to something more generic and use it in these\nthree places.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 21 Jul 2022 19:42:12 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Thu, 21 Jul 2022 at 18:42, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Thanks. I was looking at the recently modified REINDEX syntax and\n> noticed there another spot for taking an optional name. I ended up\n> reusing OptSchemaName for that, as in the attached patch. I think\n> adding production-specific additional productions is pointless and\n> probably bloats the grammar. So let me +1 your push of the patch you\n> posted, just to keep things moving forward, but in addition I propose to\n> later rename OptSchemaName to something more generic and use it in these\n> three places.\n>\n\nOK, pushed.\n\nBefore writing opt_stats_name, I went looking for anything else I\ncould use, but didn't see anything. The difference between this and\nthe index case is that statistics objects can be schema-qualified, so\nit might be tricky to get something that'll work for all 3 places.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 21 Jul 2022 19:36:38 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 07:42:12PM +0200, Alvaro Herrera wrote:\n> Thanks. I was looking at the recently modified REINDEX syntax and\n> noticed there another spot for taking an optional name. I ended up\n> reusing OptSchemaName for that, as in the attached patch. I think\n> adding production-specific additional productions is pointless and\n> probably bloats the grammar. So let me +1 your push of the patch you\n> posted, just to keep things moving forward, but in addition I propose to\n> later rename OptSchemaName to something more generic and use it in these\n> three places.\n\nThis slightly changes the behavior of the grammar, as CONCURRENTLY\nwas working on DATABASE as follows:\n* On HEAD:\n=# reindex database concurrently postgres;\nREINDEX\n=# reindex (concurrently) database concurrently postgres;\nREINDEX\n=# reindex (concurrently) database ;\nREINDEX\n=# reindex (concurrently) database postgres;\nREINDEX\n=# reindex database concurrently postgres;\nREINDEX\n=# reindex database concurrently;\nERROR: 42601: syntax error at or near \";\"\n\nAnd actually, even on HEAD, the last case is marked as supported by\nthe docs but we don't allow it in the parser. 
My mistake on this\none.\n\nNow, with the patch, I get:\n=# reindex (concurrently) database concurrently;\nERROR: 42601: syntax error at or near \"concurrently\"\nLINE 1: reindex (concurrently) database concurrently postgres ;\n=# reindex (concurrently) database concurrently postgres;\nERROR: 42601: syntax error at or near \"concurrently\"\nLINE 1: reindex (concurrently) database concurrently postgres;\n=# reindex (concurrently) database ;\nREINDEX\n=# reindex (concurrently) database postgres;\nREINDEX\n=# reindex database concurrently postgres;\nERROR: 42601: syntax error at or near \"concurrently\"\nLINE 1: reindex database concurrently postgres;\n=# reindex database concurrently;\nERROR: 42601: syntax error at or near \"concurrently\"\n\nSo this indeed has as effect to make possible the use of CONCURRENTLY\nfor DATABASE and SYSTEM only within the parenthesized grammar. Seeing\nthe simplifications this creates, I'd agree with dropping this part of\nthe grammar. I think that I would add the following queries to\ncreate_index.sql to test this grammar, as we can rely on this code\npath generating an error:\nREINDEX (CONCURRENTLY) SYSTEM postgres;\nREINDEX (CONCURRENTLY) SYSTEM;\nAt least it would validate the parsing for DATABASE.\n\nThis breaks reindexdb for the database case, because the query\ngenerated in run_reindex_command() adds CONCURRENTLY only *after* the\ndatabase name, and we should be careful to generate something \nbackward-compatible in this tool, as well. The fix is simple: you\ncan add CONCURRENTLY within the section with TABLESPACE and VERBOSE\nfor a connection >= 14, and use it after the object name for <= 13.\n--\nMichael",
"msg_date": "Fri, 22 Jul 2022 09:45:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On 2022-Jul-22, Michael Paquier wrote:\n\n> So this indeed has as effect to make possible the use of CONCURRENTLY\n> for DATABASE and SYSTEM only within the parenthesized grammar. Seeing\n> the simplifications this creates, I'd agree with dropping this part of\n> the grammar.\n\nActually, looking at the grammar again I realized that the '('options')'\npart could be refactored; and with that, keeping an extra production for\nREINDEX DATABASE CONCURRENTLY is short enough. It is removed from\nREINDEX SYSTEM, but that's OK because that doesn't work anyway.\n\nI added the new test lines you proposed and amended the docs; the result\nis attached.\n\nInitially I wanted to use the \"optional list of options\" for all\nutilities that have similar constructions (VACUUM, ANALYZE, CLUSTER,\nEXPLAIN), but their alternative productions\naccept different keywords, so it doesn't look possible.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)",
"msg_date": "Fri, 22 Jul 2022 15:06:46 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 03:06:46PM +0200, Alvaro Herrera wrote:\n> Actually, looking at the grammar again I realized that the '('options')'\n> part could be refactored; and with that, keeping an extra production for\n> REINDEX DATABASE CONCURRENTLY is short enough. It is removed from\n> REINDEX SYSTEM, but that's OK because that doesn't work anyway.\n\nI have just looked at 83011ce, and got what you've done here. You\nhave thrown away reindex_target_multitable and added three parts for\nSCHEMA, DATABASE and SYSTEM instead with their own options, enforcing\nthe restriction on CONCURRENTLY at the end of REINDEX SYSTEM in the\nparser rather than indexcmds.c.\n\n+ | REINDEX opt_reindex_option_list SYSTEM_P opt_single_name\n {\n ReindexStmt *n =\n makeNode(ReindexStmt);\n-\n- n->kind = $5;\n- n->name = $7;\n+ n->kind = REINDEX_OBJECT_SYSTEM;\n+ n->name = NULL;\n\nI think that there is a bug in this logic. ReindexStmt->name is\nalways set to NULL, meaning that a REINDEX command with any database\nname would still pass, but I don't think that we should allow that.\nFor example, with something like these commands, we should complain\nthat the database name specified does not match with the database we\nare connected to:\n=# reindex system popo_foo_bar;\nREINDEX\n=# reindex database popo_foo_bar;\nREINDEX\n\nIt may have been better to wait a bit if you wanted me to look at all\nthat, as our timezones are not compatible, especially on Fridays, but\nthat's fine :D.\n--\nMichael",
"msg_date": "Sat, 23 Jul 2022 12:25:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have just looked at 83011ce, and got what you've done here. You\n> have thrown away reindex_target_multitable and added three parts for\n> SCHEMA, DATABASE and SYSTEM instead with their own options, enforcing\n> the restriction on CONCURRENTLY at the end of REINDEX SYSTEM in the\n> parser rather than indexcmds.c.\n\nThat does not seem like an improvement. In v15:\n\nregression=# REINDEX SYSTEM CONCURRENTLY db;\nERROR: cannot reindex system catalogs concurrently\n\nAs of HEAD:\n\nregression=# REINDEX SYSTEM CONCURRENTLY db;\nERROR: syntax error at or near \"CONCURRENTLY\"\nLINE 1: REINDEX SYSTEM CONCURRENTLY db;\n ^\n\nThat is not a very helpful error, not even if the man page\ndoesn't show the syntax as legal.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 22 Jul 2022 23:54:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 11:54:27PM -0400, Tom Lane wrote:\n> That does not seem like an improvement. In v15:\n> \n> regression=# REINDEX SYSTEM CONCURRENTLY db;\n> ERROR: cannot reindex system catalogs concurrently\n> \n> As of HEAD:\n> \n> regression=# REINDEX SYSTEM CONCURRENTLY db;\n> ERROR: syntax error at or near \"CONCURRENTLY\"\n> LINE 1: REINDEX SYSTEM CONCURRENTLY db;\n> ^\n> \n> That is not a very helpful error, not even if the man page\n> doesn't show the syntax as legal.\n\nAs the problem comes down to the fact that INDEX/TABLE, SCHEMA and\nDATABASE/SYSTEM need to handle names for different object types each,\nI think that we could do something like the attached, removing one\nblock on the way at the cost of an extra parser node.\n\nBy the way, it seems that 83011ce also broke the case of \"REINDEX\nDATABASE CONCURRENTLY\", where the parser missed the addition of a\nDefElem for \"concurrently\" in this case.\n--\nMichael",
"msg_date": "Sat, 23 Jul 2022 14:43:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On 2022-Jul-23, Michael Paquier wrote:\n\n> As the problem comes down to the fact that INDEX/TABLE, SCHEMA and\n> DATABASE/SYSTEM need to handle names for different object types each,\n> I think that we could do something like the attached, removing one\n> block on the way at the cost of an extra parser node.\n\nYeah, looks good. I propose to also test the error for reindexing a\ndifferent database, which is currently uncovered, as attached.\n\n> By the way, it seems that 83011ce also broke the case of \"REINDEX\n> DATABASE CONCURRENTLY\", where the parser missed the addition of a\n> DefElem for \"concurrently\" in this case.\n\nWow.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Listen and you will forget; see and you will remember; do and you will\nunderstand\" (Confucius)",
"msg_date": "Mon, 25 Jul 2022 11:49:50 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 11:49:50AM +0200, Alvaro Herrera wrote:\n> On 2022-Jul-23, Michael Paquier wrote:\n>> As the problem comes down to the fact that INDEX/TABLE, SCHEMA and\n>> DATABASE/SYSTEM need to handle names for different object types each,\n>> I think that we could do something like the attached, removing one\n>> block on the way at the cost of an extra parser node.\n> \n> Yeah, looks good. I propose to also test the error for reindexing a\n> different database, which is currently uncovered, as attached.\n\nGood idea.\n\n>> By the way, it seems that 83011ce also broke the case of \"REINDEX\n>> DATABASE CONCURRENTLY\", where the parser missed the addition of a\n>> DefElem for \"concurrently\" in this case.\n> \n> Wow.\n\nFor this one, we have a gap in the test, actually. It seems to me\nthat we'd better make sure that the OID of the indexes rebuilt\nconcurrently is changed. There is a REINDEX DATABASE CONCURRENTLY\nalready in the TAP tests, and the only thing that would be needed for\nthe job is an extra query that compares the OID saved before the\nreindex with the one in the catalogs after the fact..\n--\nMichael",
"msg_date": "Mon, 25 Jul 2022 19:34:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On 2022-Jul-25, Michael Paquier wrote:\n\n> On Mon, Jul 25, 2022 at 11:49:50AM +0200, Alvaro Herrera wrote:\n> > On 2022-Jul-23, Michael Paquier wrote:\n\n> >> By the way, it seems that 83011ce also broke the case of \"REINDEX\n> >> DATABASE CONCURRENTLY\", where the parser missed the addition of a\n> >> DefElem for \"concurrently\" in this case.\n> > \n> > Wow.\n> \n> For this one, we have a gap in the test, actually. It seems to me\n> that we'd better make sure that the OID of the indexes rebuilt\n> concurrently is changed. There is a REINDEX DATABASE CONCURRENTLY\n> already in the TAP tests, and the only thing that would be needed for\n> the job is an extra query that compares the OID saved before the\n> reindex with the one in the catalogs after the fact..\n\nAgreed. I think you already have the query for that elsewhere in the\ntest, so it's just a matter of copying it from there.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No tengo por qué estar de acuerdo con lo que pienso\"\n (Carlos Caszeli)\n\n\n",
"msg_date": "Mon, 25 Jul 2022 12:55:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 12:55:54PM +0200, Alvaro Herrera wrote:\n> Agreed. I think you already have the query for that elsewhere in the\n> test, so it's just a matter of copying it from there.\n\nI actually already wrote most of it in 2cbc3c1, and I just needed to\nextend things a bit to detect the OID changes :)\n\nSo, applied, with all the extra tests.\n--\nMichael",
"msg_date": "Tue, 26 Jul 2022 10:19:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
},
{
"msg_contents": "On 2022-Jul-26, Michael Paquier wrote:\n\n> On Mon, Jul 25, 2022 at 12:55:54PM +0200, Alvaro Herrera wrote:\n> > Agreed. I think you already have the query for that elsewhere in the\n> > test, so it's just a matter of copying it from there.\n> \n> I actually already wrote most of it in 2cbc3c1, and I just needed to\n> extend things a bit to detect the OID changes :)\n> \n> So, applied, with all the extra tests.\n\nThank you!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n",
"msg_date": "Tue, 26 Jul 2022 10:17:39 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Make name optional in CREATE STATISTICS"
}
] |
[
{
"msg_contents": "Note: I am not (currently) planning on implementing this rough idea,\njust putting it up to share and document the idea, on request of Tomas\n(cc-ed).\n\nThe excellent pgconf.de presentation on PostgreSQL's extended\nstatistics system by Tomas Vondra [0] talked about how the current\ndefault statistics assume the MCVs of columns to be fully independent,\ni.e. values of column A do not imply any value of columns B and C, and\nthat for accurate data on correllated values the user needs to\nmanually create statistics on the combined columns (by either\nSTATISTICS or by INDEX).\n\nThis is said to be due to limitations in our statistics collector: to\ndetermine the fraction of the table that contains the value, we store\nthe N most common values with the fraction of their occurrance in the\ntable. This value is quite exact, but combining these values proves\ndifficult: there is nothing in the stored value that can confidently\ninclude or exclude parts of the table from a predicate using that MCV,\nso we can only assume that the values of two columns are independent.\n\nAfter the presentation it came to me that if we were to add an\nestimator for the number of rows with that value to the MCV lists in\nthe form of HLL sketches (in addition to or replacing the current\nmost_common_elem_freqs fractions), we would be able to make better\nestimates for multi-column filters by combining the HLL row\ncardinality sketches for filters that filter on these MCVs. This would\nremove the immediate need for manual statistics with an cartesian\nproduct of the MCVs of those columns with their occurrance fractions,\nwhich significantly reduces the need for the creation of manual\nstatistics - the need that exists due to planner mis-estimates in\ncorrelated columns. 
Custom statistics will still be required for\nexpression statistics, but column correlation estimations _within\nMCVs_ is much improved.\n\nHow I imagine this would work is that for each value in the MCV, an\nHLL is maintained that estimates the amount of distinct tuples\ncontaining that value. This can be h(TID) or h(PK), or anything else\nthat would uniquely identify returned tuples. Because the keyspace of\nall HLLs that are generated are on the same table, you can apply join\nand intersection operations on the HLLs of the MCVs (for OR and\nAND-operations respectively), and provide fairly accurately estimates\nfor the amount of tuples that would be returned by the filter on that\ntable.\n\nThe required size of the HLL sketches can be determined by the amount\nof tuples scanned during analyze, potentially reducing the size\nrequired to store these HLL sketches from the usual 1.5kB per sketch\nto something smaller - we'll only ever need to count nTuples distinct\nvalues, so low values for default_statistics_target would allow for\nsmaller values for m in the HLL sketches, whilst still providing\nfairly accurate result estimates.\n\nKind regards,\n\nMatthias van de Meent\n\nPS: Several later papers correctly point out that HLL can only count\nup to 2^32 due to the use of a hash function that outputs only 32\nbits; which is not enough for large tables. HLL++ solves this by using\na hash function that outputs 64 bits, and can thus be considered a\nbetter alternative which provides the same features. But, any other\nsketch that provides an accurate (but not necessarily: perfect)\ncount-distinct of which results can be combined should be fine as\nwell.\n\n[0] https://www.postgresql.eu/events/pgconfde2022/schedule/session/3704-an-overview-of-extended-statistics-in-postgresql/\n\n\n",
"msg_date": "Sun, 15 May 2022 21:55:04 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[RFC] Improving multi-column filter cardinality estimation using MCVs\n and HyperLogLog"
},
{
"msg_contents": "On 5/15/22 21:55, Matthias van de Meent wrote:\n> Note: I am not (currently) planning on implementing this rough idea,\n> just putting it up to share and document the idea, on request of Tomas\n> (cc-ed).\n> \n> The excellent pgconf.de presentation on PostgreSQL's extended\n> statistics system by Tomas Vondra [0] talked about how the current\n> default statistics assume the MCVs of columns to be fully independent,\n> i.e. values of column A do not imply any value of columns B and C, and\n> that for accurate data on correllated values the user needs to\n> manually create statistics on the combined columns (by either\n> STATISTICS or by INDEX).\n> \n> This is said to be due to limitations in our statistics collector: to\n> determine the fraction of the table that contains the value, we store\n> the N most common values with the fraction of their occurrance in the\n> table. This value is quite exact, but combining these values proves\n> difficult: there is nothing in the stored value that can confidently\n> include or exclude parts of the table from a predicate using that MCV,\n> so we can only assume that the values of two columns are independent.\n> \n> After the presentation it came to me that if we were to add an\n> estimator for the number of rows with that value to the MCV lists in\n> the form of HLL sketches (in addition to or replacing the current\n> most_common_elem_freqs fractions), we would be able to make better\n> estimates for multi-column filters by combining the HLL row\n> cardinality sketches for filters that filter on these MCVs. This would\n> remove the immediate need for manual statistics with an cartesian\n> product of the MCVs of those columns with their occurrance fractions,\n> which significantly reduces the need for the creation of manual\n> statistics - the need that exists due to planner mis-estimates in\n> correlated columns. 
Custom statistics will still be required for\n> expression statistics, but column correlation estimations _within\n> MCVs_ is much improved.\n> \n> How I imagine this would work is that for each value in the MCV, an\n> HLL is maintained that estimates the amount of distinct tuples\n> containing that value. This can be h(TID) or h(PK), or anything else\n> that would uniquely identify returned tuples. Because the keyspace of\n> all HLLs that are generated are on the same table, you can apply join\n> and intersection operations on the HLLs of the MCVs (for OR and\n> AND-operations respectively), and provide fairly accurately estimates\n> for the amount of tuples that would be returned by the filter on that\n> table.\n> > The required size of the HLL sketches can be determined by the amount\n> of tuples scanned during analyze, potentially reducing the size\n> required to store these HLL sketches from the usual 1.5kB per sketch\n> to something smaller - we'll only ever need to count nTuples distinct\n> values, so low values for default_statistics_target would allow for\n> smaller values for m in the HLL sketches, whilst still providing\n> fairly accurate result estimates.\n> \n\nI think it's an interesting idea. In principle it allows deducing the\nmulti-column MCV for arbitrary combination of columns, not determined in\nadvance. We'd have the MCV with HLL instead of frequencies for columns\nA, B and C:\n\n(a1, hll(a1))\n(a2, hll(a2))\n(...)\n(aK, hll(aK))\n\n\n(b1, hll(b1))\n(b2, hll(b2))\n(...)\n(bL, hll(bL))\n\n(c1, hll(c1))\n(c2, hll(c2))\n(...)\n(cM, hll(cM))\n\nand from this we'd be able to build MCV for any combination of those\nthree columns.\n\nAnd in some sense it might even be more efficient/accurate, because the\nMCV on (A,B,C) might have up to K*L*M items. if there's 100 items in\neach column, that'd be 1,000,000 combinations, which we can't really\nstore (target is up to 10k). 
And even if we could, it'd be 1M\ncombinations with frequencies (so ~8-16B per combination).\n\nWhile with the MCV/HLL, we'd have 300 items and HLL. Assuming 256-512B\nHLL would be enough, that's still way smaller than the multi-column MCV.\n\nEven with target=10k it'd still be cheaper to store the separate MCV\nwith HLL values, if I count right, and there'd be no items omitted from\nthe MCV.\n\n> Kind regards,\n> \n> Matthias van de Meent\n> \n> PS: Several later papers correctly point out that HLL can only count\n> up to 2^32 due to the use of a hash function that outputs only 32\n> bits; which is not enough for large tables. HLL++ solves this by using\n> a hash function that outputs 64 bits, and can thus be considered a\n> better alternative which provides the same features. But, any other\n> sketch that provides an accurate (but not necessarily: perfect)\n> count-distinct of which results can be combined should be fine as\n> well.\n> \n\nI don't think the 32-bit limitation is a problem for us, because we'd be\nonly ever build HLL on a sample, not the whole table. And the samples\nare limited to 3M rows (with statistics target = 10k), so we're nowhere\nnear the scale requiring 64-bit hashes.\n\nPresumably the statistics target value would determine the necessary HLL\nparameters (and size), because e.g. with 30k rows we can't possibly see\nmore than 30k distinct values.\n\nOne possible problem is this all works only when all the columns are\nanalyzed at the same time / using the same sample. If you do this:\n\n ANALYZE t(a);\n ANALYZE t(b);\n\nthen HLL filters sketches for the columns would use different ctid/PK\nvalues, and hence can't be combined.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 May 2022 00:09:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
},
{
"msg_contents": "On Mon, 16 May 2022 at 00:09, Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n>\n> On 5/15/22 21:55, Matthias van de Meent wrote:\n> > Note: I am not (currently) planning on implementing this rough idea,\n> > just putting it up to share and document the idea, on request of Tomas\n> > (cc-ed).\n> >\n> > The excellent pgconf.de presentation on PostgreSQL's extended\n> > statistics system by Tomas Vondra [0] talked about how the current\n> > default statistics assume the MCVs of columns to be fully independent,\n> > i.e. values of column A do not imply any value of columns B and C, and\n> > that for accurate data on correllated values the user needs to\n> > manually create statistics on the combined columns (by either\n> > STATISTICS or by INDEX).\n> >\n> > This is said to be due to limitations in our statistics collector: to\n> > determine the fraction of the table that contains the value, we store\n> > the N most common values with the fraction of their occurrance in the\n> > table. This value is quite exact, but combining these values proves\n> > difficult: there is nothing in the stored value that can confidently\n> > include or exclude parts of the table from a predicate using that MCV,\n> > so we can only assume that the values of two columns are independent.\n> >\n> > After the presentation it came to me that if we were to add an\n> > estimator for the number of rows with that value to the MCV lists in\n> > the form of HLL sketches (in addition to or replacing the current\n> > most_common_elem_freqs fractions), we would be able to make better\n> > estimates for multi-column filters by combining the HLL row\n> > cardinality sketches for filters that filter on these MCVs. 
This would\n> > remove the immediate need for manual statistics with an cartesian\n> > product of the MCVs of those columns with their occurrance fractions,\n> > which significantly reduces the need for the creation of manual\n> > statistics - the need that exists due to planner mis-estimates in\n> > correlated columns. Custom statistics will still be required for\n> > expression statistics, but column correlation estimations _within\n> > MCVs_ is much improved.\n> >\n> > How I imagine this would work is that for each value in the MCV, an\n> > HLL is maintained that estimates the amount of distinct tuples\n> > containing that value. This can be h(TID) or h(PK), or anything else\n> > that would uniquely identify returned tuples. Because the keyspace of\n> > all HLLs that are generated are on the same table, you can apply join\n> > and intersection operations on the HLLs of the MCVs (for OR and\n> > AND-operations respectively), and provide fairly accurately estimates\n> > for the amount of tuples that would be returned by the filter on that\n> > table.\n> > > The required size of the HLL sketches can be determined by the amount\n> > of tuples scanned during analyze, potentially reducing the size\n> > required to store these HLL sketches from the usual 1.5kB per sketch\n> > to something smaller - we'll only ever need to count nTuples distinct\n> > values, so low values for default_statistics_target would allow for\n> > smaller values for m in the HLL sketches, whilst still providing\n> > fairly accurate result estimates.\n> >\n>\n> I think it's an interesting idea. In principle it allows deducing the\n> multi-column MCV for arbitrary combination of columns, not determined in\n> advance. 
We'd have the MCV with HLL instead of frequencies for columns\n> A, B and C:\n>\n> (a1, hll(a1))\n> (a2, hll(a2))\n> (...)\n> (aK, hll(aK))\n>\n>\n> (b1, hll(b1))\n> (b2, hll(b2))\n> (...)\n> (bL, hll(bL))\n>\n> (c1, hll(c1))\n> (c2, hll(c2))\n> (...)\n> (cM, hll(cM))\n>\n> and from this we'd be able to build MCV for any combination of those\n> three columns.\n>\n> And in some sense it might even be more efficient/accurate, because the\n> MCV on (A,B,C) might have up to K*L*M items. if there's 100 items in\n> each column, that'd be 1,000,000 combinations, which we can't really\n> store (target is up to 10k). And even if we could, it'd be 1M\n> combinations with frequencies (so ~8-16B per combination).\n>\n> While with the MCV/HLL, we'd have 300 items and HLL. Assuming 256-512B\n> HLL would be enough, that's still way smaller than the multi-column MCV.\n\nHLLs for statistics_target=100 could use 4 bits per bucket, but any target\nabove 218 should use 5 bits: nbits = ceil(log2(log2(target * 300))), and\nthis saves only 20% on storage.\n\nAccuracy increases with root(m), so while we can shrink the amount of\nbuckets, this is only OK if we're accepting the corresponding decrease in\naccuracy.\n\n> Even with target=10k it'd still be cheaper to store the separate MCV\n> with HLL values, if I count right, and there'd be no items omitted from\n> the MCV.\n\nThere are more options, though: Count-min was proposed as a replacement for\nMCV lists, and they work as guaranteed max count of distinct values. 
If,\ninstead of an actual counter in each bucket, we would use HLL in each\nbucket, we'd not only have occurance estimates (but not: actual max-count\nlimits) for all values, but we'd also be able to do the cross-column result\ncorrelation estimations.\n\n> > Kind regards,\n> >\n> > Matthias van de Meent\n> >\n> > PS: Several later papers correctly point out that HLL can only count\n> > up to 2^32 due to the use of a hash function that outputs only 32\n> > bits; which is not enough for large tables. HLL++ solves this by using\n> > a hash function that outputs 64 bits, and can thus be considered a\n> > better alternative which provides the same features. But, any other\n> > sketch that provides an accurate (but not necessarily: perfect)\n> > count-distinct of which results can be combined should be fine as\n> > well.\n> >\n>\n> I don't think the 32-bit limitation is a problem for us, because we'd be\n> only ever build HLL on a sample, not the whole table. And the samples\n> are limited to 3M rows (with statistics target = 10k), so we're nowhere\n> near the scale requiring 64-bit hashes.\n\nI was thinking towards incremental statistics with this, which would imply\nthat inserts and updates do trigger updates in the statistics. It is fairly\ninexpensive to append to an HLL, whereas that is not so trivial for\ntablefraction-based statistics.\n\n> Presumably the statistics target value would determine the necessary HLL\n> parameters (and size), because e.g. with 30k rows we can't possibly see\n> more than 30k distinct values.\n>\n> One possible problem is this all works only when all the columns are\n> analyzed at the same time / using the same sample. If you do this:\n>\n> ANALYZE t(a);\n> ANALYZE t(b);\n>\n> then HLL filters sketches for the columns would use different ctid/PK\n> values, and hence can't be combined.\n\nCorrect. 
This could be improved through deterministic random sample (as\nopposed to the current PRNG seeded with random()), where the sampled set is\nonly pseudo-random and always the same for a given relation of constant\nsize.\n\nThis would result in statistics that always overlap while the relation size\nis constant, and still show a corellation when the relation size changes\n(with correllation = (1 - %delta) * (1 - %delta) ). Only in the worst cases\nthe correlation would be non-existent and the resulting combinations would\nbe no different than random - effectively regressing the estimator function\nto P(A & B) = P(A) * P(B), which is the same as what we currently have.\n\nExample: pages could be sampled in order of increasing value of hash(PageNo\n|| relid). This hash can be anything, but a reversible hash function would\nprobably be best, because it helps us by not having to sort nBlocks hashed\nvalues for large tables (we can run the equivalent of [0], which might be\ncheaper than top-n-sorting an array of relsize blocks). The resulting set\nof scanned data would be stable for any unchanging relation size and would\nthus consistently select the same blocks when you analyze the table.\n\nRegards,\n\nMatthias\n\nPS. I was looking through papers on different types of sketches, and found\nthat (according to some papers) HLL has some severe drawbacks regarding\nsketch intersection estimates. 
The Apache DataSketches [1] project states\nthat it can provide accurate combined estimates through the Theta sketch\n[2][3] - an adaptation of the KMV-sketch that does seem to provide accurate\nunion, intersection and 'A not B' -estimates, as well as exact results for\nlow input row counts - something that HLL also doesn't work well for.\n\nThere is a patent covering this (US9152691, expected to expire in 2033),\nand the project containing the code is licenced under Apache 2.\n\n[0] SELECT reverse_hash(n) FROM generate_series(0, MaxBlockNumber, 1) t(n)\nWHERE reverse_hash(n) < tablesize LIMIT n_sample_blocks\"\n[1]\nhttps://datasketches.apache.org/docs/Architecture/SketchFeaturesMatrix.html\n[2]\nhttps://raw.githubusercontent.com/apache/datasketches-website/master/docs/pdf/ThetaSketchFramework.pdf\n[3]\nhttps://raw.githubusercontent.com/apache/datasketches-website/master/docs/pdf/ThetaSketchEquations.pdf",
"msg_date": "Thu, 19 May 2022 19:59:21 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
},
{
"msg_contents": "\n\nOn 5/19/22 19:59, Matthias van de Meent wrote:\n> On Mon, 16 May 2022 at 00:09, Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n>>\n>> On 5/15/22 21:55, Matthias van de Meent wrote:\n>> > Note: I am not (currently) planning on implementing this rough idea,\n>> > just putting it up to share and document the idea, on request of Tomas\n>> > (cc-ed).\n>> >\n>> > The excellent pgconf.de <http://pgconf.de> presentation on\n> PostgreSQL's extended\n>> > statistics system by Tomas Vondra [0] talked about how the current\n>> > default statistics assume the MCVs of columns to be fully independent,\n>> > i.e. values of column A do not imply any value of columns B and C, and\n>> > that for accurate data on correllated values the user needs to\n>> > manually create statistics on the combined columns (by either\n>> > STATISTICS or by INDEX).\n>> >\n>> > This is said to be due to limitations in our statistics collector: to\n>> > determine the fraction of the table that contains the value, we store\n>> > the N most common values with the fraction of their occurrance in the\n>> > table. This value is quite exact, but combining these values proves\n>> > difficult: there is nothing in the stored value that can confidently\n>> > include or exclude parts of the table from a predicate using that MCV,\n>> > so we can only assume that the values of two columns are independent.\n>> >\n>> > After the presentation it came to me that if we were to add an\n>> > estimator for the number of rows with that value to the MCV lists in\n>> > the form of HLL sketches (in addition to or replacing the current\n>> > most_common_elem_freqs fractions), we would be able to make better\n>> > estimates for multi-column filters by combining the HLL row\n>> > cardinality sketches for filters that filter on these MCVs. 
This would\n>> > remove the immediate need for manual statistics with an cartesian\n>> > product of the MCVs of those columns with their occurrance fractions,\n>> > which significantly reduces the need for the creation of manual\n>> > statistics - the need that exists due to planner mis-estimates in\n>> > correlated columns. Custom statistics will still be required for\n>> > expression statistics, but column correlation estimations _within\n>> > MCVs_ is much improved.\n>> >\n>> > How I imagine this would work is that for each value in the MCV, an\n>> > HLL is maintained that estimates the amount of distinct tuples\n>> > containing that value. This can be h(TID) or h(PK), or anything else\n>> > that would uniquely identify returned tuples. Because the keyspace of\n>> > all HLLs that are generated are on the same table, you can apply join\n>> > and intersection operations on the HLLs of the MCVs (for OR and\n>> > AND-operations respectively), and provide fairly accurately estimates\n>> > for the amount of tuples that would be returned by the filter on that\n>> > table.\n>> > > The required size of the HLL sketches can be determined by the amount\n>> > of tuples scanned during analyze, potentially reducing the size\n>> > required to store these HLL sketches from the usual 1.5kB per sketch\n>> > to something smaller - we'll only ever need to count nTuples distinct\n>> > values, so low values for default_statistics_target would allow for\n>> > smaller values for m in the HLL sketches, whilst still providing\n>> > fairly accurate result estimates.\n>> >\n>>\n>> I think it's an interesting idea. In principle it allows deducing the\n>> multi-column MCV for arbitrary combination of columns, not determined in\n>> advance. 
We'd have the MCV with HLL instead of frequencies for columns\n>> A, B and C:\n>>\n>> (a1, hll(a1))\n>> (a2, hll(a2))\n>> (...)\n>> (aK, hll(aK))\n>>\n>>\n>> (b1, hll(b1))\n>> (b2, hll(b2))\n>> (...)\n>> (bL, hll(bL))\n>>\n>> (c1, hll(c1))\n>> (c2, hll(c2))\n>> (...)\n>> (cM, hll(cM))\n>>\n>> and from this we'd be able to build MCV for any combination of those\n>> three columns.\n>>\n>> And in some sense it might even be more efficient/accurate, because the\n>> MCV on (A,B,C) might have up to K*L*M items. if there's 100 items in\n>> each column, that'd be 1,000,000 combinations, which we can't really\n>> store (target is up to 10k). And even if we could, it'd be 1M\n>> combinations with frequencies (so ~8-16B per combination).\n>>\n>> While with the MCV/HLL, we'd have 300 items and HLL. Assuming 256-512B\n>> HLL would be enough, that's still way smaller than the multi-column MCV.\n> \n> HLLs for statistics_target=100 could use 4 bits per bucket, but any\n> target above 218 should use 5 bits: nbits = ceil(log2(log2(target *\n> 300))), and this saves only 20% on storage.\n> \n\nI think the size estimate are somewhat misleading, as it ignores how\ncorrelated the columns actually are. If they are strongly correlated,\nthere are probably much fewer combinations than the cartesian product.\nThat is, given two correlated columns with 100 items MCVs, the combined\nMCV is likely much smaller than 10000 items.\n\nAnd for non-correlated columns it doesn't really matter, because people\nwould not need to create the multi-column statistics at all.\n\n\n> Accuracy increases with root(m), so while we can shrink the amount of\n> buckets, this is only OK if we're accepting the corresponding decrease\n> in accuracy.\n> \n\nHmm. So what's the rough size estimate in such case?\n\nFWIW I think we shouldn't be focusing on the size too much - I don't see\nthis as a simple alternative to multi-column MCV. 
The main benefit is it\ndoesn't require creating such extended statistics in advance, and the\nflexibility of combining arbitrary columns, I think.\n\n>> Even with target=10k it'd still be cheaper to store the separate MCV\n>> with HLL values, if I count right, and there'd be no items omitted from\n>> the MCV.\n> \n> There are more options, though: Count-min was proposed as a replacement\n> for MCV lists, and they work as guaranteed max count of distinct values.\n> If, instead of an actual counter in each bucket, we would use HLL in\n> each bucket, we'd not only have occurance estimates (but not: actual\n> max-count limits) for all values, but we'd also be able to do the\n> cross-column result correlation estimations.\n> \n\nRight. I did actually write a PoC using count-min sketch for join\nestimates [1] about a year ago. My conclusion from that work was that CM\nsketches are more a complement for MCVs than a replacement. That is, it\nmight be useful to have MCV and then a CM on the rows not represented by\nthe MCV.\n\n[1]\nhttps://www.postgresql.org/message-id/a08dda4c-aad4-a6b4-2cec-91363da73183%40enterprisedb.com\n\n\n>> > Kind regards,\n>> >\n>> > Matthias van de Meent\n>> >\n>> > PS: Several later papers correctly point out that HLL can only count\n>> > up to 2^32 due to the use of a hash function that outputs only 32\n>> > bits; which is not enough for large tables. HLL++ solves this by using\n>> > a hash function that outputs 64 bits, and can thus be considered a\n>> > better alternative which provides the same features. But, any other\n>> > sketch that provides an accurate (but not necessarily: perfect)\n>> > count-distinct of which results can be combined should be fine as\n>> > well.\n>> >\n>>\n>> I don't think the 32-bit limitation is a problem for us, because we'd be\n>> only ever build HLL on a sample, not the whole table. 
And the samples\n>> are limited to 3M rows (with statistics target = 10k), so we're nowhere\n>> near the scale requiring 64-bit hashes.\n> \n> I was thinking towards incremental statistics with this, which would\n> imply that inserts and updates do trigger updates in the statistics. It\n> is fairly inexpensive to append to an HLL, whereas that is not so\n> trivial for tablefraction-based statistics.\n> \n>> Presumably the statistics target value would determine the necessary HLL\n>> parameters (and size), because e.g. with 30k rows we can't possibly see\n>> more than 30k distinct values.\n>>\n>> One possible problem is this all works only when all the columns are\n>> analyzed at the same time / using the same sample. If you do this:\n>>\n>> ANALYZE t(a);\n>> ANALYZE t(b);\n>>\n>> then HLL filters sketches for the columns would use different ctid/PK\n>> values, and hence can't be combined.\n> \n> Correct. This could be improved through deterministic random sample (as\n> opposed to the current PRNG seeded with random()), where the sampled set\n> is only pseudo-random and always the same for a given relation of\n> constant size.\n> \n> This would result in statistics that always overlap while the relation\n> size is constant, and still show a corellation when the relation size\n> changes (with correllation = (1 - %delta) * (1 - %delta) ). Only in the\n> worst cases the correlation would be non-existent and the resulting\n> combinations would be no different than random - effectively regressing\n> the estimator function to P(A & B) = P(A) * P(B), which is the same as\n> what we currently have.\n> \n> Example: pages could be sampled in order of increasing value of\n> hash(PageNo || relid). This hash can be anything, but a reversible hash\n> function would probably be best, because it helps us by not having to\n> sort nBlocks hashed values for large tables (we can run the equivalent\n> of [0], which might be cheaper than top-n-sorting an array of relsize\n> blocks). 
The resulting set of scanned data would be stable for any\n> unchanging relation size and would thus consistently select the same\n> blocks when you analyze the table.\n> \n\nNot sure such deterministic sampling really solves the issue, because we\ndon't know what happened between the two statistics were built. Rows may\nbe moved due to UPDATE, new rows may be inserted to the pages, etc.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 May 2022 14:31:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
},
{
"msg_contents": "On Mon, May 16, 2022 at 12:09:41AM +0200, Tomas Vondra wrote:\n> I think it's an interesting idea. In principle it allows deducing the\n> multi-column MCV for arbitrary combination of columns, not determined in\n> advance. We'd have the MCV with HLL instead of frequencies for columns\n> A, B and C:\n> \n> (a1, hll(a1))\n> (a2, hll(a2))\n> (...)\n> (aK, hll(aK))\n> \n> \n> (b1, hll(b1))\n> (b2, hll(b2))\n> (...)\n> (bL, hll(bL))\n> \n> (c1, hll(c1))\n> (c2, hll(c2))\n> (...)\n> (cM, hll(cM))\n> \n> and from this we'd be able to build MCV for any combination of those\n> three columns.\n\nSorry, but I am lost here. I read about HLL here:\n\n\thttps://towardsdatascience.com/hyperloglog-a-simple-but-powerful-algorithm-for-data-scientists-aed50fe47869\n\nHowever, I don't see how they can be combined for multiple columns. \nAbove, I know A,B,C are columns, but what is a1, a2, etc?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 18:16:43 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
},
{
"msg_contents": "\n\nOn 5/25/22 00:16, Bruce Momjian wrote:\n> On Mon, May 16, 2022 at 12:09:41AM +0200, Tomas Vondra wrote:\n>> I think it's an interesting idea. In principle it allows deducing the\n>> multi-column MCV for arbitrary combination of columns, not determined in\n>> advance. We'd have the MCV with HLL instead of frequencies for columns\n>> A, B and C:\n>>\n>> (a1, hll(a1))\n>> (a2, hll(a2))\n>> (...)\n>> (aK, hll(aK))\n>>\n>>\n>> (b1, hll(b1))\n>> (b2, hll(b2))\n>> (...)\n>> (bL, hll(bL))\n>>\n>> (c1, hll(c1))\n>> (c2, hll(c2))\n>> (...)\n>> (cM, hll(cM))\n>>\n>> and from this we'd be able to build MCV for any combination of those\n>> three columns.\n> \n> Sorry, but I am lost here. I read about HLL here:\n> \n> \thttps://towardsdatascience.com/hyperloglog-a-simple-but-powerful-algorithm-for-data-scientists-aed50fe47869\n> \n> However, I don't see how they can be combined for multiple columns. \n\nIt's the same as combining multiple HLL filters. HLL is essentially just\nan array of counters, and to calculate a union (i.e. HLL for elements in\nat least one of the input HLL sketches), you can just do Max() of the\ncounters. For intersection, you have to use inclusion-exclusion\nprinciple, i.e.\n\n intersection(HLL1, HLL2)\n = estimate(HLL1) + estimate(HLL2) - estimate(union(HLL1,HLL2))\n\nwhich is exactly the same as\n\n P(A & B) = P(A) + P(B) - P(A | B)\n\nThere's more in:\n\n https://github.com/citusdata/postgresql-hll\n\n https://agkn.wordpress.com/2012/12/17/hll-intersections-2/\n\nwhich also mentions the weakness - error is proportional to the size of\nthe union, while the intersection may be much smaller. Which might be an\nissue, especially when combining multiple columns.\n\n\n> Above, I know A,B,C are columns, but what is a1, a2, etc?\n\na1 is a value in column A, common enough to make it into the MCV. 
But\ninstead of just a frequency, we store a HLL tracking unique rows (by\nadding CTID to the HLL).\n\nSo for example for a \"city\" column, you'd have\n\n(\"NY\", HLL of CTIDs for rows with city = NY)\n(\"Philadephia\", HLL of CTIDs for rows with city = Philadelphia)\n...\n\n\netc.\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 25 May 2022 11:55:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
},
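Tomas's recipe above (per-register max for the union, inclusion-exclusion for the intersection) can be sketched in a few dozen lines. The following is a minimal, illustrative HyperLogLog in Python — it is not the postgresql-hll extension's implementation, and the register count, hash function, and test data are arbitrary choices for the demo:

```python
import hashlib
import math

P = 12              # 2**P registers; relative error is roughly 1.04 / sqrt(2**P)
M = 1 << P
W = 64 - P          # hash bits left over for the leading-zero pattern

def _hash64(x):
    """64-bit hash of an arbitrary key (e.g. a CTID)."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big")

class HLL:
    def __init__(self):
        self.reg = [0] * M

    def add(self, x):
        v = _hash64(x)
        idx = v & (M - 1)             # low P bits pick a register
        w = v >> P                    # remaining W bits
        rho = W - w.bit_length() + 1  # leading zeros in the W-bit word, plus one
        if rho > self.reg[idx]:
            self.reg[idx] = rho

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / M)
        e = alpha * M * M / sum(2.0 ** -r for r in self.reg)
        zeros = self.reg.count(0)
        if e <= 2.5 * M and zeros:    # small-range (linear counting) correction
            e = M * math.log(M / zeros)
        return e

def hll_union(a, b):
    """Union is lossless at the sketch level: take the per-register max."""
    u = HLL()
    u.reg = [max(x, y) for x, y in zip(a.reg, b.reg)]
    return u

def hll_intersection(a, b):
    """Inclusion-exclusion: |A & B| ~= |A| + |B| - |A | B|.
    As noted in the thread, the error tracks the *union* size."""
    return a.estimate() + b.estimate() - hll_union(a, b).estimate()
```

With two sketches built over overlapping key ranges (say 0..9999 and 5000..14999), `hll_union()` lands near 15000 and `hll_intersection()` near 5000, within the expected relative error — and also illustrates the weakness Tomas points out: the absolute error of the intersection is driven by the size of the union.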
{
"msg_contents": "On Wed, May 25, 2022 at 11:55:40AM +0200, Tomas Vondra wrote:\n> It's the same as combining multiple HLL filters. HLL is essentially just\n> an array of counters, and to calculate a union (i.e. HLL for elements in\n> at least one of the input HLL sketches), you can just do Max() of the\n> counters. For intersection, you have to use inclusion-exclusion\n> principle, i.e.\n> \n> intersection(HLL1, HLL2)\n> = estimate(HLL1) + estimate(HLL2) - estimate(union(HLL1,HLL2))\n> \n> which is exactly the same as\n> \n> P(A & B) = P(A) + P(B) - P(A | B)\n> \n> There's more in:\n> \n> https://github.com/citusdata/postgresql-hll\n> \n> https://agkn.wordpress.com/2012/12/17/hll-intersections-2/\n> \n> which also mentions the weakness - error is proportional to the size of\n> the union, while the intersection may be much smaller. Which might be an\n> issue, especially when combining multiple columns.\n> \n> \n> > Above, I know A,B,C are columns, but what is a1, a2, etc?\n> \n> a1 is a value in column A, common enough to make it into the MCV. But\n> instead of just a frequency, we store a HLL tracking unique rows (by\n> adding CTID to the HLL).\n> \n> So for example for a \"city\" column, you'd have\n> \n> (\"NY\", HLL of CTIDs for rows with city = NY)\n> (\"Philadephia\", HLL of CTIDs for rows with city = Philadelphia)\n> ...\n\nI read this email today and participated in an unconference session on\nthis topic today too. Let me explain what I learned.\n\nCurrently we store 100 (default) of the most common values (MCV) for each\ncolumn, and a float4 of the percentage of rows that have that value. 
\nWhen we restrict multiple columns in a query, we multiply these\npercentages to estimate the number of matching rows, e.g.:\n\n\tPhiladelphia: 0.05\n\tUSA: 0.15\n\t\n\tWHERE city = 'Philadelphia' and country = 'USA'\n\testimate 0.05 * 0.15 = 0.0075\n\nHowever, if we assume every \"Philadelphia\" is in the \"USA\", it should be\n0.05, but we don't know that from the statistics. We do have extended\nstatistics that allows us to create statistics on the combined\ncity/country columns.\n\nThe new idea here is to store a compressed bitmap of all of the sampled\nrows that match the MCV, rather than just a percentage. This would\nallow us to AND the bitmaps of city/country and determine how many rows\nactually match both restrictions.\n\n Philadelphia: 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0\n USA: 0 1 0 0 1 1 1 0 1 1 0 1 1 0 0 1\n\t\n\tWHERE city = 'Philadelphia' and country = 'USA'\n Philadelphia & USA: 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0\n\testimate 0.05\n\nWhile hyper-log-log sketches could be used, a simpler compressed bitmap\nmight be sufficient.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 25 May 2022 21:19:35 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
},
{
"msg_contents": "Sorry for waking a dead thread, I had this in my drafts folder that I\nwas cleaning, and wanted to share this so anyone interested can reuse\nthese thoughts.\n\nOn Thu, 26 May 2022 at 03:19, Bruce Momjian <bruce@momjian.us> wrote:\n> I read this email today and participated in an unconference session on\n> this topic today too. Let me explain what I learned.\n\nMy takeaways from this thread and that unconference session (other\nnotes from the session: [0]):\n\n- Lossy count-distinct sketches require at least 100 \"buckets\" to get\na RSE of <10%, and 1000 buckets for RSE <2%.\nThe general formula for RSE for the most popular of these sketches is\nwithin a constant factor of 1 / root(m) for m \"buckets\"- which is\ntheorized to be the theoretical limit for count-distinct sketches that\nutilize n \"buckets\".\nRSE is the residual statistical error, i.e. accuracy of the model, so\nwith the popular sketches to double the accuracy you need 4x as many\nbuckets.\nA \"bucket\" is a distinct counting value, e.g. the log-counters in\n(H)LL, and bits in HyperBitBit.\nMost sketches use anywhere from several bits to several bytes per\nbucket: HLL uses 5 and 6 bits for 32- and 64-bits hashes,\nrespectively,\n\n- If we will implement sketches, it would be preferred if they support\nthe common set operations of [join, intersect, imply] while retaining\ntheir properties, so that we don't lose the increased accuracy.\nHLL does not support intersect- or imply-operations directly, which\nmakes it a bad choice as an estimator of rows returned.\n\n- Bitmaps would be a good first implementation for an initial\nsketch-based statistics implementation\nAssuming the implementation would compress these bitmaps, the average\nsize would definitely not be larger than 900 bytes (+ administrative\noverhead) per MCV entry - 3 bytes per sampled row for the index in the\nbitmap * 300 rows on average per MCV entry. 
Rows would be identified\nby their index in the sampled list of rows.\n\n- It is not clear we need to be able to combine statistics from\nmultiple runs of ANALYZE.\nWe considered that it is rare for people to analyze only a subset of\ncolumns, or that people otherwise would need to combine analytics from\ndistinct table samples of the same table.\n\n- Accurately combining statistics from two different runs of ANALYZE\nrequires stable sampling, and lossless tuple identification\nThe above-mentioned bitmap-based statistics would not allow this,\nbecause the index of a sampled row will likely shift between runs,\neven assuming that a row is shared in the sample.\n\nSummary:\nWe shouldn't use HLL, (compressed) bitmaps will work fine for an\ninitial implementation of combined sketch-based MCV estimates.\n\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://wiki.postgresql.org/wiki/PgCon_2022_Developer_Unconference#Improving_Statistics_Accuracy\n\n\n",
"msg_date": "Wed, 6 Jul 2022 17:28:55 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [RFC] Improving multi-column filter cardinality estimation using\n MCVs and HyperLogLog"
}
] |
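The sampled-row bitmap idea Bruce describes above (and that Matthias's summary favors over HLL) is easy to demonstrate. Below is a toy Python sketch with a hypothetical 20-row sample in which every "Philadelphia" row is also a "USA" row — the numbers deliberately mirror the 0.05/0.15 example from the thread; the real proposal would of course use compressed bitmaps over a much larger sample:

```python
def bitmap(sample_rows, predicate):
    """Bitmap over the sampled rows: bit i is set iff sampled row i matches."""
    bits = 0
    for i, row in enumerate(sample_rows):
        if predicate(row):
            bits |= 1 << i
    return bits

def selectivity(bits, n_sampled):
    """Fraction of sampled rows whose bit is set."""
    return bin(bits).count("1") / n_sampled

# Hypothetical sample: 1 Philadelphia/USA row, 2 NY/USA rows, 17 others.
sample = ([("Philadelphia", "USA")]
          + [("NY", "USA")] * 2
          + [("Paris", "France")] * 17)
n = len(sample)

philly = bitmap(sample, lambda r: r[0] == "Philadelphia")
usa = bitmap(sample, lambda r: r[1] == "USA")

# Independence assumption (today's planner behavior): 0.05 * 0.15 = 0.0075
independent = selectivity(philly, n) * selectivity(usa, n)

# ANDing the per-MCV-entry bitmaps recovers the true joint selectivity: 0.05
combined = selectivity(philly & usa, n)
```

The ANDed bitmap gives 0.05 where the independence assumption gives 0.0075 — exactly the gap Bruce's Philadelphia/USA example illustrates.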
[
{
"msg_contents": "Hi,\n\nSpeaking as someone who regularly trawls through megabytes of build farm output:\n\n1. It seems a bit useless to have a load of \"FATAL: the database\nsystem is in recovery mode\" spam whenever the server crashes under\nsrc/test/regress. Any reason not to just turn that off, as we do for\nthe TAP tests?\n\n2. The TAP test logs are strangely named. Any reason not to call\nthem 001_testname.log, instead of regress_log_001_testname, so they\nappear next to the corresponding\n001_testname_{primary,standby,xxx}.log in directory listings (CI) and\ndumps (build farm, presumably), and have a traditional .log suffix?",
"msg_date": "Mon, 16 May 2022 11:01:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Minor improvements to test log navigability"
},
{
"msg_contents": "On 2022-May-16, Thomas Munro wrote:\n\n> 1. It seems a bit useless to have a load of \"FATAL: the database\n> system is in recovery mode\" spam whenever the server crashes under\n> src/test/regress. Any reason not to just turn that off, as we do for\n> the TAP tests?\n\nI don't know of any. Let's.\n\n> 2. The TAP test logs are strangely named. Any reason not to call\n> them 001_testname.log, instead of regress_log_001_testname, so they\n> appear next to the corresponding\n> 001_testname_{primary,standby,xxx}.log in directory listings (CI) and\n> dumps (build farm, presumably), and have a traditional .log suffix?\n\n+1.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n",
"msg_date": "Mon, 16 May 2022 17:46:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Minor improvements to test log navigability"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-May-16, Thomas Munro wrote:\n>> 1. It seems a bit useless to have a load of \"FATAL: the database\n>> system is in recovery mode\" spam whenever the server crashes under\n>> src/test/regress. Any reason not to just turn that off, as we do for\n>> the TAP tests?\n\n> I don't know of any. Let's.\n\nHave you actually tested what happens? I fear this would just\nresult in different spam.\n\n>> 2. The TAP test logs are strangely named. Any reason not to call\n>> them 001_testname.log, instead of regress_log_001_testname, so they\n>> appear next to the corresponding\n>> 001_testname_{primary,standby,xxx}.log in directory listings (CI) and\n>> dumps (build farm, presumably), and have a traditional .log suffix?\n\n> +1.\n\nAndrew would have to weigh in on whether this'd break the buildfarm,\nbut if it doesn't, +1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 May 2022 12:18:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Minor improvements to test log navigability"
},
{
"msg_contents": "On Mon, May 16, 2022 at 12:18:24PM -0400, Tom Lane wrote:\n> Andrew would have to weigh in on whether this'd break the buildfarm,\n> but if it doesn't, +1.\n\nFWIW, the buildfarm client feeds from the contents of the directory\ntmp_check/log/, so a simple renaming of the main log file would have\nno impact. Here are the parts of the code doing that:\nPGBuild/Modules/TestMyTap.pm: my @logs = glob(\"$self->{where}/tmp_check/log/*\");\nrun_build.pl: my @logs = glob(\"$dir/tmp_check/log/*\");\n--\nMichael",
"msg_date": "Tue, 17 May 2022 09:05:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Minor improvements to test log navigability"
},
{
"msg_contents": "On Tue, May 17, 2022 at 4:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-May-16, Thomas Munro wrote:\n> >> 1. It seems a bit useless to have a load of \"FATAL: the database\n> >> system is in recovery mode\" spam whenever the server crashes under\n> >> src/test/regress. Any reason not to just turn that off, as we do for\n> >> the TAP tests?\n>\n> > I don't know of any. Let's.\n>\n> Have you actually tested what happens? I fear this would just\n> result in different spam.\n\nI'd forgotten that we already do this on CI, via\nsrc/tools/ci/pg_ci_base.conf, so we can compare. A CI postmaster.log\nthat ends with \"shutting down because restart_after_crash is off\":\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6537277877780480/log/src/test/regress/log/postmaster.log\n\nThe build farm version has ~350 lines of \"FATAL: the database system\nis in recovery mode\" instead:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-05-16%2023%3A17%3A49\n\nAdmittedly that is nothing compared to the huge amount of extra log\nspam caused by regression.diffs filling up with these:\n\n- ...the complete expected output of each test spanning many lines...\n- ...\n- ...\n+psql: error: connection to server on socket\n\"/tmp/pg_regress-ZqXocK/.s.PGSQL.5678\" failed: FATAL: the database\nsystem is in recovery mode\n\nIn the CI version, that looks like:\n\n- ...the complete expected output of each test spanning many lines...\n- ...\n- ...\n+psql: error: connection to server on socket\n\"/tmp/pg_regress-T35Yzi/.s.PGSQL.51696\" failed: No such file or\ndirectory\n+ Is the server running locally and accepting connections on that socket?\n\nI wonder if there would be a good way to filter those \"never managed\nto connect\" cases out... 
Exit code 2 (EXIT_BADCONN) is not the\nanswer, because you get that also for servers that go away due to a\ncrash where you do want to be able to see the diff, for information\nabout where it crashed.\n\n\n",
"msg_date": "Tue, 17 May 2022 13:56:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Minor improvements to test log navigability"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... Admittedly that is nothing compared to the huge amount of extra log\n> spam caused by regression.diffs filling up with these:\n\nYeah, that's really the main problem.\n\n> I wonder if there would be a good way to filter those \"never managed\n> to connect\" cases out... Exit code 2 (EXIT_BADCONN) is not the\n> answer, because you get that also for servers that go away due to a\n> crash where you do want to be able to see the diff, for information\n> about where it crashed.\n\nMaybe pg_regress could check that postmaster.pid is still there\nbefore launching each new test script? (Obviously this all applies\nonly to \"make check\" not \"make installcheck\".)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 May 2022 23:40:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Minor improvements to test log navigability"
},
{
"msg_contents": "On 16.05.22 01:01, Thomas Munro wrote:\n> 2. The TAP test logs are strangely named. Any reason not to call\n> them 001_testname.log, instead of regress_log_001_testname, so they\n> appear next to the corresponding\n> 001_testname_{primary,standby,xxx}.log in directory listings (CI) and\n> dumps (build farm, presumably), and have a traditional .log suffix?\n\nI'm in favor of a saner name, but wouldn't something.log be confusable \nwith a server log? Maybe something.out would be clearer. Or \nsomething_output.log, if the .log suffix is somehow desirable.\n\n\n\n",
"msg_date": "Wed, 18 May 2022 15:30:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Minor improvements to test log navigability"
}
] |
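Tom's suggestion in this thread — have pg_regress check that postmaster.pid still exists before launching each new test script, so a crashed server doesn't fill regression.diffs with connection-failure noise — is simple to model. pg_regress itself is C; this Python sketch (all names hypothetical) only illustrates the control flow:

```python
import os

def run_scripts(scripts, pidfile, run_one):
    """Run test scripts in order, but stop submitting new ones once the
    server's pidfile has vanished (i.e. the postmaster is gone), so later
    scripts don't each report a useless 'could not connect' diff."""
    results = {}
    for name in scripts:
        if not os.path.exists(pidfile):
            results[name] = "skipped: server is gone"
            continue
        results[name] = run_one(name)
    return results
```

The script that was running when the crash happened still produces its diff (useful for seeing where it crashed), while every later script is skipped instead of failing with connection noise.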
[
{
"msg_contents": "Hi,\n\nI noticed that $subject causes an error in HEAD:\n\n$ pgbench -i --partitions=0\npgbench: error: --partitions must be in range 1..2147483647\n\nHowever, it works in v13 and v14, assuming no partitions.\n\nI think the commit 6f164e6d17 may have unintentionally broken it, by\nintroducing this hunk:\n\n@@ -6135,12 +6116,9 @@ main(int argc, char **argv)\n break;\n case 11: /* partitions */\n initialization_option_set = true;\n- partitions = atoi(optarg);\n- if (partitions < 0)\n- {\n- pg_log_fatal(\"invalid number of partitions:\n\\\"%s\\\"\", optarg);\n+ if (!option_parse_int(optarg, \"--partitions\", 1, INT_MAX,\n+ &partitions))\n exit(1);\n- }\n\nAttached a patch to fix with a test added. cc'ing Michael who\nauthored that commit.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 May 2022 11:34:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "pgbench --partitions=0"
},
{
"msg_contents": "On Mon, May 16, 2022 at 11:34:41AM +0900, Amit Langote wrote:\n> Attached a patch to fix with a test added. cc'ing Michael who\n> authored that commit.\n\nIndeed, 6f164e6d got that incorrectly. I don't really want to play\nwith the master branch at this stage, even if this is trivial, but\nI'll fix it after the version is tagged. Thanks for the report,\nAmit.\n--\nMichael",
"msg_date": "Mon, 16 May 2022 14:41:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench --partitions=0"
},
{
"msg_contents": "On Mon, May 16, 2022 at 2:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, May 16, 2022 at 11:34:41AM +0900, Amit Langote wrote:\n> > Attached a patch to fix with a test added. cc'ing Michael who\n> > authored that commit.\n>\n> Indeed, 6f164e6d got that incorrectly. I don't really want to play\n> with the master branch at this stage, even if this is trivial, but\n> I'll fix it after the version is tagged.\n\nSounds good to me.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 May 2022 14:44:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench --partitions=0"
},
{
"msg_contents": "On Mon, May 16, 2022 at 02:44:47PM +0900, Amit Langote wrote:\n> Sounds good to me.\n\n(I have added an open item, just in case.)\n--\nMichael",
"msg_date": "Mon, 16 May 2022 15:00:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench --partitions=0"
},
{
"msg_contents": "On Mon, May 16, 2022 at 03:00:51PM +0900, Michael Paquier wrote:\n> (I have added an open item, just in case.)\n\nAnd fixed as of 27f1366.\n--\nMichael",
"msg_date": "Wed, 18 May 2022 09:50:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench --partitions=0"
},
{
"msg_contents": "On Wed, May 18, 2022 at 9:50 Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, May 16, 2022 at 03:00:51PM +0900, Michael Paquier wrote:\n> > (I have added an open item, just in case.)\n>\n> And fixed as of 27f1366\n\nThank you.\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 18 May 2022 10:06:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pgbench --partitions=0"
}
] |
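The regression in this thread came from replacing `atoi(optarg)` plus a `< 0` check with a ranged parser whose lower bound was set to 1 instead of 0, silently rejecting `--partitions=0` ("no partitions"). The pattern is easy to model; below is a hedged Python sketch — the function name and messages are made up, only the shape mirrors pgbench's `option_parse_int()`:

```python
INT_MAX = 2**31 - 1

def parse_int_option(value, name, min_val, max_val):
    """Parse an integer command-line option with an inclusive range check.
    Returns (ok, parsed); prints an error and returns (False, None) on failure."""
    try:
        n = int(value, 10)
    except ValueError:
        print(f'invalid value for {name}: "{value}"')
        return False, None
    if not (min_val <= n <= max_val):
        print(f"{name} must be in range {min_val}..{max_val}")
        return False, None
    return True, n

# The regression: a lower bound of 1 rejects --partitions=0.
ok_broken, _ = parse_int_option("0", "--partitions", 1, INT_MAX)

# The fix restores 0 as a valid value, matching the v13/v14 behavior.
ok_fixed, partitions = parse_int_option("0", "--partitions", 0, INT_MAX)
```

The lesson generalizes: when swapping an ad-hoc `atoi()` check for a centralized range-checked parser, the bounds passed in must reproduce the old accepted range exactly.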
[
{
"msg_contents": "Hi all,\n\nCutting support for now-unsupported versions of Windows is in the air\nfor a couple of months, and while looking at the code a first cleanup\nthat looked rather obvious to me is the removal of support for VS\n2013, as of something to do for v16~.\n\nThe website of Microsoft has only documentation for VS >= 2015 as far\nas I can see. Note also that VS can be downloaded down to 2012 on\ntheir official website, and that the buildfarm members only use VS >=\n2017.\n\nThe patch attached cleans up the following things proper to VS 2013:\n- Locale handling.\n- MIN_WINNT assignment.\n- Some strtof() business, as of win32_port.h.\n- Removal of _set_FMA3_enable() in main.c related to floating-point\noperations.\n- MSVC scripts, but that's less interesting considering the work done\nwith meson.\n\nA nice result is that this completely removes all the checks related\nto the version number of _MSC_VER from the core code, making the code\ndepend only on the definition if the flag.\n\nThanks,\n--\nMichael",
"msg_date": "Mon, 16 May 2022 15:58:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Mon, May 16, 2022 at 6:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Cutting support for now-unsupported versions of Windows is in the air\n> for a couple of months, and while looking at the code a first cleanup\n> that looked rather obvious to me is the removal of support for VS\n> 2013, as of something to do for v16~.\n\nNot a Windows person so I couldn't comment on the details without more\nresearch, but this general concept seems good to me. That's a nice\nreduction in (practically) untestable/dead code (no CI, no build\nfarm).\n\nFor comparison, I picked 3 random C/C++ (OK, C++) projects I could\nthink of to see how they deal with VS versions, and all require 2017+:\n\n* MariaDB supports the last two major versions, so currently VS 2019\nand VS 2022, 2022 is preferred[1]\n* Chrome requires VS 2017+ but currently 2019 is preferred[2]\n* OpenJDK requires VS 2017+[3]\n\nLooking at the published lifecycle info, 2017 is the oldest still in\n'mainstream' support[4], so it wouldn't be too crazy to drop VS 2015\ntoo, just like those other projects. That said, it sounds like there\nis no practical benefit to being more aggressive than you are\nsuggesting currently (as in, we wouldn't get to delete any more crufty\nuntestable dead code by dropping 2015, right?), so maybe that'd be\nenough for now.\n\n[1] https://mariadb.com/kb/en/Building_MariaDB_on_Windows/\n[2] https://chromium.googlesource.com/chromium/src/+/main/docs/windows_build_instructions.md#Visual-Studio\n[3] https://openjdk.java.net/groups/build/doc/building.html\n[4] https://docs.microsoft.com/en-us/visualstudio/productinfo/vs-servicing\n\n\n",
"msg_date": "Mon, 16 May 2022 20:46:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Mon, May 16, 2022 at 08:46:31PM +1200, Thomas Munro wrote:\n> Looking at the published lifecycle info, 2017 is the oldest still in\n> 'mainstream' support[4], so it wouldn't be too crazy to drop VS 2015\n> too, just like those other projects. That said, it sounds like there\n> is no practical benefit to being more aggressive than you are\n> suggesting currently (as in, we wouldn't get to delete any more crufty\n> untestable dead code by dropping 2015, right?), so maybe that'd be\n> enough for now.\n\nFWIW, one of my environments is using VS2015, because I have set it up\nyears ago and I am lazy to do this setup except if I really have to :)\n\nThe code works as far as I know, still I am not really excited about\ncutting support for more versions than necessary, particularly as this\ndoes not simplify the C code more.\n--\nMichael",
"msg_date": "Mon, 16 May 2022 19:34:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "\nOn 2022-05-16 Mo 06:34, Michael Paquier wrote:\n> On Mon, May 16, 2022 at 08:46:31PM +1200, Thomas Munro wrote:\n>> Looking at the published lifecycle info, 2017 is the oldest still in\n>> 'mainstream' support[4], so it wouldn't be too crazy to drop VS 2015\n>> too, just like those other projects. That said, it sounds like there\n>> is no practical benefit to being more aggressive than you are\n>> suggesting currently (as in, we wouldn't get to delete any more crufty\n>> untestable dead code by dropping 2015, right?), so maybe that'd be\n>> enough for now.\n> FWIW, one of my environments is using VS2015, because I have set it up\n> years ago and I am lazy to do this setup except if I really have to :)\n>\n> The code works as far as I know, still I am not really excited about\n> cutting support for more versions than necessary, particularly as this\n> does not simplify the C code more.\n\n\nYeah, I'm ok with this. The only older version I have is currawong, but\nit runs on NT and so only builds release 10 and will probably be retired\naround the end of the year.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 16 May 2022 08:19:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 16, 2022 at 08:46:31PM +1200, Thomas Munro wrote:\n>> Looking at the published lifecycle info, 2017 is the oldest still in\n>> 'mainstream' support[4], so it wouldn't be too crazy to drop VS 2015\n>> too, just like those other projects. That said, it sounds like there\n>> is no practical benefit to being more aggressive than you are\n>> suggesting currently (as in, we wouldn't get to delete any more crufty\n>> untestable dead code by dropping 2015, right?), so maybe that'd be\n>> enough for now.\n\n> FWIW, one of my environments is using VS2015, because I have set it up\n> years ago and I am lazy to do this setup except if I really have to :)\n\n> The code works as far as I know, still I am not really excited about\n> cutting support for more versions than necessary, particularly as this\n> does not simplify the C code more.\n\nThe argument about removing untested code doesn't apply if there is\nno code to remove, so it seems like continuing to support VS2015 is\nreasonable. Of course, if anyone came and complained that it's broken,\nwe'd probably just drop the support claim ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 May 2022 11:23:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Mon, May 16, 2022 at 8:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> The patch attached cleans up the following things proper to VS 2013:\n> - Locale handling.\n> - MIN_WINNT assignment.\n> - Some strtof() business, as of win32_port.h.\n> - Removal of _set_FMA3_enable() in main.c related to floating-point\n> operations.\n> - MSVC scripts, but that's less interesting considering the work done\n> with meson.\n>\n> When building on MinGW with NLS enabled I get some errors:\n\nc:/cirrus/src/backend/utils/adt/pg_locale.c: In function\n'search_locale_enum':\nc:/cirrus/src/backend/utils/adt/pg_locale.c:989:13: warning: implicit\ndeclaration of function 'GetLocaleInfoEx'; did you mean 'GetLocaleInfoA'?\n[-Wimplicit-function-declaration]\n 989 | if (GetLocaleInfoEx(pStr, LOCALE_SENGLISHLANGUAGENAME,\n | ^~~~~~~~~~~~~~~\n | GetLocaleInfoA\n\nThis is because current MinGW defaults to Windows 2003 [1], maybe we should\nfix Windows' minimal version to Vista (0x0600) unconditionally also. I have\nseen a couple of compilation warnings while testing that setting on MinGW,\nplease find attached a patch for so.\n\n[1]\nhttps://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/sdkddkver.h\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 17 May 2022 18:26:20 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Tue, May 17, 2022 at 06:26:20PM +0200, Juan José Santamaría Flecha wrote:\n> This is because current MinGW defaults to Windows 2003 [1], maybe we should\n> fix Windows' minimal version to Vista (0x0600) unconditionally also. I have\n> seen a couple of compilation warnings while testing that setting on MinGW,\n> please find attached a patch for so.\n> \n> [1]\n> https://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/sdkddkver.h\n\nAh, right. I have forgotten about this business with MinGW.\n\n@@ -1757,7 +1757,7 @@ get_collation_actual_version(char collprovider, const char *collcollate)\n collcollate,\n GetLastError())));\n }\n- collversion = psprintf(\"%d.%d,%d.%d\",\n+ collversion = psprintf(\"%ld.%ld,%ld.%ld\",\n (version.dwNLSVersion >> 8) & 0xFFFF,\n version.dwNLSVersion & 0xFF,\n (version.dwDefinedVersion >> 8) & 0xFFFF,\n\nIs this change still required even if we bump MIN_WINNT to 0x0600 for\nall the environments that include win32.h? At the end, this would\nmean dropping support for Windows XP and Windows Server 2003 as\nrun-time environments as listed in [1], which are not supported\nofficially since 2014 (even if there have been some patches for\nsome critical issues). So I'd be fine to raise the bar for v16~,\nparticularly as this would allow us to get rid of this code related to\nlocales.\n\n[1]: https://docs.microsoft.com/en-us/cpp/porting/modifying-winver-and-win32-winnt?view=msvc-170\n--\nMichael",
"msg_date": "Wed, 18 May 2022 09:27:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Wed, May 18, 2022 at 2:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> @@ -1757,7 +1757,7 @@ get_collation_actual_version(char collprovider,\n> const char *collcollate)\n> collcollate,\n> GetLastError())));\n> }\n> - collversion = psprintf(\"%d.%d,%d.%d\",\n> + collversion = psprintf(\"%ld.%ld,%ld.%ld\",\n> (version.dwNLSVersion >> 8) & 0xFFFF,\n> version.dwNLSVersion & 0xFF,\n> (version.dwDefinedVersion >> 8) & 0xFFFF,\n>\n> Is this change still required even if we bump MIN_WINNT to 0x0600 for\n> all the environments that include win32.h?\n\nRight now we are ifdefing that code out for MinGW, so it's not a visible\nissue, but it'll be when we do.\n\n> At the end, this would\n> mean dropping support for Windows XP and Windows Server 2003 as\n> run-time environments as listed in [1], which are not supported\n> officially since 2014 (even if there have been some patches for\n> some critical issues). So I'd be fine to raise the bar for v16~,\n> particularly as this would allow us to get rid of this code related to\n> locales.\n\nEven Windows Server 2008 [1] is at its End of Life, so this should surprise\nno one.\n\n[1]\nhttps://docs.microsoft.com/en-us/troubleshoot/windows-server/windows-server-eos-faq/end-of-support-windows-server-2008-2008r2\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 18 May 2022 10:06:50 +0200",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Wed, May 18, 2022 at 10:06:50AM +0200, Juan José Santamaría Flecha wrote:\n> Right now we are ifdefing that code out for MinGW, so it's not a visible\n> issue, but it'll be when we do.\n\nOK. Thanks, got it.\n--\nMichael",
"msg_date": "Wed, 18 May 2022 18:02:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Wed, May 18, 2022 at 10:06:50AM +0200, Juan José Santamaría Flecha wrote:\n> On Wed, May 18, 2022 at 2:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> At the end, this would\n>> mean dropping support for Windows XP and Windows Server 2003 as\n>> run-time environments as listed in [1], which are not supported\n>> officially since 2014 (even if there have been some patches for\n>> some critical issues). So I'd be fine to raise the bar for v16~,\n>> particularly as this would allow us to get rid of this code related to\n>> locales.\n> \n> Even Windows Server 2008 [1] is at its End of Life, so this should surprise\n> no one.\n> \n> [1]\n> https://docs.microsoft.com/en-us/troubleshoot/windows-server/windows-server-eos-faq/end-of-support-windows-server-2008-2008r2\n\nBtw, I am going to spawn a new thread for this specific change rather\nthan forcing people to dig into this one as it is independent. Better\nto do that a bit in advance of the development cycle for v16, as it\nis usually good to clean up such things sooner than later..\n--\nMichael",
"msg_date": "Thu, 26 May 2022 10:18:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "Maybe consider removing this workaround? The original problem report indicated\nthat it didn't affect later versions:\n\nsrc/backend/optimizer/path/costsize.c: /* This apparently-useless variable dodges a compiler bug in VS2013: */\n\nI'm not sure if it's worth removing this one, though:\n\nsrc/port/strtof.c: * On Windows, there's a slightly different problem: VS2013 has a strtof()\n\n\n",
"msg_date": "Thu, 26 May 2022 10:43:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Thu, May 26, 2022 at 10:43:11AM -0500, Justin Pryzby wrote:\n\nAh, thanks. I forgot to grep for those patterns. Good catches.\n\n> Maybe consider removing this workaround? The original problem report indicated\n> that it didn't affect later versions:\n> \n> src/backend/optimizer/path/costsize.c: /* This apparently-useless variable dodges a compiler bug in VS2013: */\n\nHence reverting 3154e16. Sure!\n\n> I'm not sure if it's worth removing this one, though:\n> \n> src/port/strtof.c: * On Windows, there's a slightly different problem: VS2013 has a strtof()\n\nYeah.. I am not completely sure if all the patterns mentioned for\nVS2013 apply to Cygwin/Mingw, so keeping it around could be more\nbeneficial.\n--\nMichael",
"msg_date": "Fri, 27 May 2022 05:57:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, May 26, 2022 at 10:43:11AM -0500, Justin Pryzby wrote:\n>> Maybe consider removing this workaround? The original problem report indicated\n>> that it didn't affect later versions:\n>> \n>> src/backend/optimizer/path/costsize.c: /* This apparently-useless variable dodges a compiler bug in VS2013: */\n\n> Hence reverting 3154e16. Sure!\n\n+1\n\n>> I'm not sure if it's worth removing this one, though:\n>> \n>> src/port/strtof.c: * On Windows, there's a slightly different problem: VS2013 has a strtof()\n\n> Yeah.. I am not completely sure if all the patterns mentioned for\n> VS2013 apply to Cygwin/Mingw, so keeping it around could be more\n> beneficial.\n\nThe comments about that in win32_port.h and cygwin.h only date back\nto 2019, so it seems unlikely that the situation has changed much.\nWe could try removing HAVE_BUGGY_STRTOF to see if the buildfarm\ncomplains, but I wouldn't bet money on that succeeding. What we\n*do* need to do is update the #if tests and comments to make clear\nthat HAVE_BUGGY_STRTOF is only needed for Mingw and Cygwin, not\nfor any supported MSVC release.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 17:50:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Thu, May 26, 2022 at 05:50:40PM -0400, Tom Lane wrote:\n> The comments about that in win32_port.h and cygwin.h only date back\n> to 2019, so it seems unlikely that the situation has changed much.\n> We could try removing HAVE_BUGGY_STRTOF to see if the buildfarm\n> complains, but I wouldn't bet money on that succeeding. What we\n> *do* need to do is update the #if tests and comments to make clear\n> that HAVE_BUGGY_STRTOF is only needed for Mingw and Cygwin, not\n> for any supported MSVC release.\n\nAfter looking at that again, the whole comment related to VS in\nstrtof.c can be removed. I have noticed while on it more places that\nstill referred to VS2013 in ./configure[.ac] and win32_langinfo() got\nan overall incorrect description. This leads to v2 as of the\nattached.\n--\nMichael",
"msg_date": "Mon, 30 May 2022 16:48:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Mon, May 30, 2022 at 04:48:22PM +0900, Michael Paquier wrote:\n> After looking at that again, the whole comment related to VS in\n> strtof.c can be removed. I have noticed while on it more places that\n> still referred to VS2013 in ./configure[.ac] and win32_langinfo() got\n> an overall incorrect description. This leads to v2 as of the\n> attached.\n\nAnd with 495ed0e now in place, attached is a rebased version.\n--\nMichael",
"msg_date": "Fri, 8 Jul 2022 07:38:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
},
{
"msg_contents": "On Fri, Jul 08, 2022 at 07:38:23AM +0900, Michael Paquier wrote:\n> And with 495ed0e now in place, attached is a rebased version.\n\nHearing nothing about this one, and because it is a nice cleanup\noverall, I have gone ahead and applied it:\n14 files changed, 24 insertions(+), 177 deletions(-)\n\nThis removes the dependency on the value of _MSC_VER.\n--\nMichael",
"msg_date": "Thu, 14 Jul 2022 12:03:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove support for Visual Studio 2013"
}
] |
[
{
"msg_contents": "Inspired by [0], I looked to convert more macros to inline functions. \nThe attached patches are organized \"bottom up\" in terms of their API \nlayering; some of the later ones depend on some of the earlier ones.\n\n\nNote 1: Some macros that do by-value assignments like\n\n#define PageXLogRecPtrSet(ptr, lsn) \\\n ((ptr).xlogid = (uint32) ((lsn) >> 32), (ptr).xrecoff = (uint32) (lsn))\n\ncan't be converted to functions without changing the API, so I left \nthose alone for now.\n\n\nNote 2: Many macros in htup_details.h operate both on HeapTupleHeader \nand on MinimalTuple, so converting them to a function doesn't work in a \nstraightforward way. I have some in-progress work in that area, but I \nhave not included any of that here.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/202203241021.uts52sczx3al@alvherre.pgsql",
"msg_date": "Mon, 16 May 2022 10:27:58 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Convert macros to static inline functions"
},
{
"msg_contents": "On Mon, May 16, 2022 at 1:58 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n>\n> Inspired by [0], I looked to convert more macros to inline functions.\n> The attached patches are organized \"bottom up\" in terms of their API\n> layering; some of the later ones depend on some of the earlier ones.\n>\n\nAll the patches look good to me, except the following that are minor\nthings that can be ignored if you want.\n\n0002 patch:\n\n+static inline OffsetNumber\n+PageGetMaxOffsetNumber(Page page)\n+{\n+ if (((PageHeader) page)->pd_lower <= SizeOfPageHeaderData)\n+ return 0;\n+ else\n+ return ((((PageHeader) page)->pd_lower - SizeOfPageHeaderData)\n/ sizeof(ItemIdData));\n+}\n\nThe \"else\" is not necessary, we can have the return statement directly\nwhich would save some indentation as well. The Similar pattern can be\nconsidered for 0004 and 0007 patches as well.\n--\n\n0004 patch:\n\n+static inline void\n+XLogFromFileName(const char *fname, TimeLineID *tli, XLogSegNo\n*logSegNo, int wal_segsz_bytes)\n+{\n+ uint32 log;\n+ uint32 seg;\n+ sscanf(fname, \"%08X%08X%08X\", tli, &log, &seg);\n+ *logSegNo = (uint64) log * XLogSegmentsPerXLogId(wal_segsz_bytes) + seg;\n+}\n\nCan we have a blank line after variable declarations that we usually have?\n--\n\n0006 patch:\n+static inline Datum\n+fetch_att(const void *T, bool attbyval, int attlen)\n+{\n+ if (attbyval)\n+ {\n+#if SIZEOF_DATUM == 8\n+ if (attlen == sizeof(Datum))\n+ return *((const Datum *) T);\n+ else\n+#endif\n\nCan we have a switch case like store_att_byval() instead of if-else,\ncode would look more symmetric, IMO.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Mon, 16 May 2022 18:53:04 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert macros to static inline functions"
},
{
"msg_contents": "On 2022-May-16, Amul Sul wrote:\n\n> +static inline OffsetNumber\n> +PageGetMaxOffsetNumber(Page page)\n> +{\n> + if (((PageHeader) page)->pd_lower <= SizeOfPageHeaderData)\n> + return 0;\n> + else\n> + return ((((PageHeader) page)->pd_lower - SizeOfPageHeaderData)\n> / sizeof(ItemIdData));\n> +}\n> \n> The \"else\" is not necessary, we can have the return statement directly\n> which would save some indentation as well. The Similar pattern can be\n> considered for 0004 and 0007 patches as well.\n\nYeah. In these cases I propose to also have a local variable so that\nthe cast to PageHeader appears only once.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 16 May 2022 17:48:28 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Convert macros to static inline functions"
},
{
"msg_contents": "On 16.05.22 15:23, Amul Sul wrote:\n> +static inline OffsetNumber\n> +PageGetMaxOffsetNumber(Page page)\n> +{\n> + if (((PageHeader) page)->pd_lower <= SizeOfPageHeaderData)\n> + return 0;\n> + else\n> + return ((((PageHeader) page)->pd_lower - SizeOfPageHeaderData)\n> / sizeof(ItemIdData));\n> +}\n> \n> The \"else\" is not necessary, we can have the return statement directly\n> which would save some indentation as well. The Similar pattern can be\n> considered for 0004 and 0007 patches as well.\n\nI kind of like it better this way. It preserves the functional style of \nthe original macro.\n\n> +static inline void\n> +XLogFromFileName(const char *fname, TimeLineID *tli, XLogSegNo\n> *logSegNo, int wal_segsz_bytes)\n> +{\n> + uint32 log;\n> + uint32 seg;\n> + sscanf(fname, \"%08X%08X%08X\", tli, &log, &seg);\n> + *logSegNo = (uint64) log * XLogSegmentsPerXLogId(wal_segsz_bytes) + seg;\n> +}\n> \n> Can we have a blank line after variable declarations that we usually have?\n\ndone\n\n> 0006 patch:\n> +static inline Datum\n> +fetch_att(const void *T, bool attbyval, int attlen)\n> +{\n> + if (attbyval)\n> + {\n> +#if SIZEOF_DATUM == 8\n> + if (attlen == sizeof(Datum))\n> + return *((const Datum *) T);\n> + else\n> +#endif\n> \n> Can we have a switch case like store_att_byval() instead of if-else,\n> code would look more symmetric, IMO.\n\ndone",
"msg_date": "Mon, 23 May 2022 07:38:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert macros to static inline functions"
},
{
"msg_contents": "On Mon, May 16, 2022 at 1:28 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Inspired by [0], I looked to convert more macros to inline functions.\n> The attached patches are organized \"bottom up\" in terms of their API\n> layering; some of the later ones depend on some of the earlier ones.\n\nBig +1 from me.\n\nI converted over most of the nbtree.h function style macros in\nPostgres 13, having put it off in Postgres 12 (there is one remaining\nfunction macro due to an issue with #include dependencies). This\nvastly improved the maintainability of the code, and I wish I'd done\nit sooner.\n\nInline functions made it a lot easier to pepper various B-Tree code\nutility functions with defensive assertions concerning preconditions\nand postconditions. That's something that I am particular about. In\ntheory you can just use AssertMacro() in a function style macro. In\npractice that approach is ugly, and necessitates thinking about\nmultiple evaluation hazards, which is enough to discourage good\ndefensive coding practices.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Jun 2022 19:01:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Convert macros to static inline functions"
},
{
"msg_contents": "On 16.05.22 10:27, Peter Eisentraut wrote:\n> Inspired by [0], I looked to convert more macros to inline functions. \n\nHere is another one from the same batch of work that I somehow didn't \nsend in last time.\n\n(IMO it's questionable whether this one should be an inline function or \nmacro at all, rather than a normal external function.)",
"msg_date": "Tue, 4 Oct 2022 08:30:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert macros to static inline functions"
},
{
"msg_contents": "On Tue, Oct 4, 2022 at 12:00 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 16.05.22 10:27, Peter Eisentraut wrote:\n> > Inspired by [0], I looked to convert more macros to inline functions.\n>\n> Here is another one from the same batch of work that I somehow didn't\n> send in last time.\n>\nI think assertion can be placed outside of the IF-block and braces can\nbe removed.\n\n> (IMO it's questionable whether this one should be an inline function or\n> macro at all, rather than a normal external function.)\nIMO, it should be inlined with RelationGetSmgr().\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 4 Oct 2022 12:27:25 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert macros to static inline functions"
},
{
"msg_contents": "On 04.10.22 08:57, Amul Sul wrote:\n> On Tue, Oct 4, 2022 at 12:00 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 16.05.22 10:27, Peter Eisentraut wrote:\n>>> Inspired by [0], I looked to convert more macros to inline functions.\n>>\n>> Here is another one from the same batch of work that I somehow didn't\n>> send in last time.\n>>\n> I think assertion can be placed outside of the IF-block and braces can\n> be removed.\n\nCommitted that way, thanks.\n\n\n\n",
"msg_date": "Fri, 7 Oct 2022 16:20:32 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert macros to static inline functions"
}
] |
[
{
"msg_contents": "Consider:\n\ntypedef struct Memoize\n{\n Plan plan;\n\n int numKeys; /* size of the two arrays below */\n\n Oid *hashOperators; /* hash operators for each key */\n Oid *collations; /* cache keys */\n List *param_exprs; /* exprs containing parameters */\n ...\n\nI think the comment \"cache keys\" is weird here. Maybe it was copied from\n\ntypedef struct MemoizePath\n{\n Path path;\n Path *subpath; /* outerpath to cache tuples from */\n List *hash_operators; /* hash operators for each key */\n List *param_exprs; /* cache keys */\n ...\n\nbut it's attached to a different field there.\n\nIs this a mistake, or could this be clarified?\n\n\n",
"msg_date": "Mon, 16 May 2022 18:21:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "weird comments in Memoize nodes"
},
{
"msg_contents": "On Tue, 17 May 2022 at 04:21, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Oid *collations; /* cache keys */\n\n> Is this a mistake, or could this be clarified?\n\nYeah, must be a copy-pasto. I'll fix it with the attached after beta1\nis tagged.\n\nDavid",
"msg_date": "Tue, 17 May 2022 08:02:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: weird comments in Memoize nodes"
},
{
"msg_contents": "On Tue, 17 May 2022 at 08:02, David Rowley <dgrowleyml@gmail.com> wrote:\n> Yeah, must be a copy-pasto. I'll fix it with the attached after beta1\n> is tagged.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Thu, 19 May 2022 17:16:05 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: weird comments in Memoize nodes"
}
] |
[
{
"msg_contents": "HI all! I wanted to share a useful tool that I built during the pandemic.\nOne of my deepest projects I’ve created in postgres, and I’m excited to\nshare it with the community and get ideas and feedback.\n\nI do a lot of functional programming and needed dynamic SQL. My personal\nbelief is that ORMs are the wrong interface to creating migrations, and I\nprefer a pg native, functional approach. This library was actually one of\nthe base extensions of a larger set, that I used this to create higher\nlevel structures for creating migrations. So while this is “low-level” you\ncan definitely create stored procedures that do make higher-level if you\nwish.\n\nI hope some of you find this useful with your projects!\n\npostgres-ast-deparser\nA pure plpgsql AST toolkit and deparser for PostgreSQL, which can be used\nto create ASTs and deparse them back into strings in native Postgres.\nhttps://github.com/pyramation/postgres-ast-deparser",
"msg_date": "Mon, 16 May 2022 09:32:37 -0700",
"msg_from": "Dan Lynch <pyramation@gmail.com>",
"msg_from_op": true,
"msg_subject": "Postgres AST Deparser for Postgres"
}
] |
[
{
"msg_contents": "Bug #17483 points out that postgres_fdw falls down pretty badly when\na potentially shippable clause contains a \"regconfig\" constant [1].\nIt doesn't check whether the constant refers to an object that's\nlikely to exist on the remote side, and it fails to ensure that\nthe printed name is properly schema-qualified. The same flaws apply\nto constants of other OID alias types. Below are some draft patches\nto address this.\n\n0001 deals with the lack-of-schema-qualification issue by forcing\nsearch_path to be just \"pg_catalog\" while we're deparsing constants.\nThis seems straightforward, if annoyingly expensive, and it's enough\nto fix the bug as presented.\n\n0002 tightens deparse.c's rules to only consider an OID alias constant\nas shippable if the object it refers to is shippable. This seems\nobvious in hindsight; I wonder how come we've not realized it before?\nHowever, this has one rather nasty problem for regconfig in particular:\nwith our standard shippability rules none of the built-in text search\nconfigurations would be treated as shippable, because initdb gives them\nnon-fixed OIDs above 9999. That seems like a performance hit we don't\nwant to take. In the attached, I hacked around that by making a special\nexception for OIDs up to 16383, but that seems like a fairly ugly kluge.\nAnybody have a better idea?\n\nWhile using find_expr_references() as a reference for writing the new code\nin 0002, I was dismayed to find that it omitted handling regcollation;\nand a quick search showed that other places that specially process REG*\ntypes hadn't been updated for that addition either. 0003 closes those\noversights.\n\nI've split this into three parts partially because they probably should be\nback-patched differently. It seems like 0001 should go into all branches.\n0003 should go back to v13 where regcollation was added. 
But I wonder if\n0002 should get back-patched at all: it seems like we're more likely to\nget performance complaints about quals no longer being shipped than we are\nto get kudos for not mistakenly shipping an unportable tsconfig reference.\nPeople could fix such performance issues by putting the config into an\nextension marked safe-to-ship, but they probably won't want to have to\ndeal with that in a minor release.\n\nI've not done anything about a regression test yet.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/17483-795757fa99607659%40postgresql.org",
"msg_date": "Mon, 16 May 2022 13:33:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "postgres_fdw versus regconfig and similar constants"
},
{
"msg_contents": "On Mon, May 16, 2022 at 1:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 0001 deals with the lack-of-schema-qualification issue by forcing\n> search_path to be just \"pg_catalog\" while we're deparsing constants.\n> This seems straightforward, if annoyingly expensive, and it's enough\n> to fix the bug as presented.\n\nYeah, that does seem like the thing to do. I doubt it will be the last\nproblem setting we need to add to that list, either. It's kind of\nunfortunate that data type output formatting is context-dependent like\nthis, but I don't have an idea.\n\n> 0002 tightens deparse.c's rules to only consider an OID alias constant\n> as shippable if the object it refers to is shippable. This seems\n> obvious in hindsight; I wonder how come we've not realized it before?\n> However, this has one rather nasty problem for regconfig in particular:\n> with our standard shippability rules none of the built-in text search\n> configurations would be treated as shippable, because initdb gives them\n> non-fixed OIDs above 9999. That seems like a performance hit we don't\n> want to take. In the attached, I hacked around that by making a special\n> exception for OIDs up to 16383, but that seems like a fairly ugly kluge.\n> Anybody have a better idea?\n\nNo. It feels to me like there are not likely to be any really\nsatisfying answers here. We have a way of mapping a given local table\nto a given foreign table, but to the best of my knowledge we have no\nsimilar mechanism for any other type of object. So it's just crude\nguesswork. Who is to say whether the fact that we have a local text\nsearch configuration means that there is a remote text search\nconfiguration with the same name, and even if yes, that it has the\nsame semantics? And similarly for any other object types? Every\nrelease adds and occasionally removes SQL objects from the system\ncatalogs, and depending on the object type, it can also vary by\noperating system. 
There are multiple forks of PostgreSQL, too.\n\n> While using find_expr_references() as a reference for writing the new code\n> in 0002, I was dismayed to find that it omitted handling regcollation;\n> and a quick search showed that other places that specially process REG*\n> types hadn't been updated for that addition either. 0003 closes those\n> oversights.\n\nMakes sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 16:47:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw versus regconfig and similar constants"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, May 16, 2022 at 1:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 0002 tightens deparse.c's rules to only consider an OID alias constant\n>> as shippable if the object it refers to is shippable. This seems\n>> obvious in hindsight; I wonder how come we've not realized it before?\n>> However, this has one rather nasty problem for regconfig in particular:\n>> with our standard shippability rules none of the built-in text search\n>> configurations would be treated as shippable, because initdb gives them\n>> non-fixed OIDs above 9999. That seems like a performance hit we don't\n>> want to take. In the attached, I hacked around that by making a special\n>> exception for OIDs up to 16383, but that seems like a fairly ugly kluge.\n>> Anybody have a better idea?\n\n> No. It feels to me like there are not likely to be any really\n> satisfying answers here.\n\nYeah. Hearing no better ideas from anyone else either, pushed that way.\n\nI noted one interesting factoid while trying to make a test case for the\nmissing-schema-qualification issue. I thought of making a foreign table\nthat maps to pg_class and checking what is shipped for\n\nselect oid, relname from remote_pg_class where oid =\n'information_schema.key_column_usage'::regclass;\n\n(In hindsight, this wouldn't have worked anyway after patch 0002,\nbecause that OID would have been above 9999.) But what I got was\n\n Foreign Scan on public.remote_pg_class (cost=100.00..121.21 rows=4 width=68)\n Output: oid, relname\n Remote SQL: SELECT oid, relname FROM pg_catalog.pg_class WHERE ((oid = 13527::oid))\n\nThe reason for that is that the constant is smashed to type OID so hard\nthat we can no longer tell that it ever was regclass, thus there's no\nhope of deparsing it in a more-symbolic fashion. I'm not sure if there's\nanything we could do about that that wouldn't break more things than\nit fixes (e.g. 
by making things that should look equal() not be so).\nBut anyway, this effect may help explain the lack of previous complaints\nin this area. regconfig arguments to text search functions might be\npretty nearly the only realistic use-case for shipping symbolic reg*\nvalues to the remote.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 17 Jul 2022 18:25:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw versus regconfig and similar constants"
}
] |
[
{
"msg_contents": "027_stream_regress.pl has:\n\nif (PostgreSQL::Test::Utils::has_wal_read_bug)\n{\n # We'd prefer to use Test::More->builder->todo_start, but the bug causes\n # this test file to die(), not merely to fail.\n plan skip_all => 'filesystem bug';\n}\n\nIs the die() referenced there the one from the system_or_bail() call\nthat commit a096813b got rid of?\n\nHere's a failure in 031_recovery_conflict.pl that smells like\nconcurrent pread() corruption:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-05-16%2015%3A45%3A54\n\n2022-05-16 18:10:33.375 CEST [52106:1] LOG: started streaming WAL\nfrom primary at 0/3000000 on timeline 1\n2022-05-16 18:10:33.621 CEST [52105:5] LOG: incorrect resource\nmanager data checksum in record at 0/338FDC8\n2022-05-16 18:10:33.622 CEST [52106:2] FATAL: terminating walreceiver\nprocess due to administrator command\n\nPresumably we also need the has_wal_read_bug kludge in all these new\ntests that use replication.\n\n\n",
"msg_date": "Tue, 17 May 2022 11:50:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "has_wal_read_bug"
},
{
"msg_contents": "On Tue, May 17, 2022 at 11:50:51AM +1200, Thomas Munro wrote:\n> 027_stream_regress.pl has:\n> \n> if (PostgreSQL::Test::Utils::has_wal_read_bug)\n> {\n> # We'd prefer to use Test::More->builder->todo_start, but the bug causes\n> # this test file to die(), not merely to fail.\n> plan skip_all => 'filesystem bug';\n> }\n> \n> Is the die() referenced there the one from the system_or_bail() call\n> that commit a096813b got rid of?\n\nNo, it was the 'croak \"timed out waiting for catchup\"',\ne.g. https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-01-25%2016%3A56%3A26\n\n> Here's a failure in 031_recovery_conflict.pl that smells like\n> concurrent pread() corruption:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-05-16%2015%3A45%3A54\n> \n> 2022-05-16 18:10:33.375 CEST [52106:1] LOG: started streaming WAL\n> from primary at 0/3000000 on timeline 1\n> 2022-05-16 18:10:33.621 CEST [52105:5] LOG: incorrect resource\n> manager data checksum in record at 0/338FDC8\n> 2022-05-16 18:10:33.622 CEST [52106:2] FATAL: terminating walreceiver\n> process due to administrator command\n\nAgreed. Here, too:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-05-09%2015%3A46%3A03\n\n> Presumably we also need the has_wal_read_bug kludge in all these new\n> tests that use replication.\n\nThat is an option. One alternative is to reconfigure those three animals to\nremove --enable-tap-tests. Another alternative is to make the build system\nskip all files of all TAP suites on affected systems. Handling this on a\nfile-by-file basis seemed reasonable to me when only two files had failed that\nway. Now, five files have failed. We have wait_for_catchup calls in\nfifty-one files, and I wouldn't have chosen the has_wal_read_bug approach if I\nhad expected fifty-one files to eventually call it. I could tolerate it,\nthough.\n\n\n",
"msg_date": "Tue, 17 May 2022 00:15:35 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: has_wal_read_bug"
},
{
"msg_contents": "On Tue, May 17, 2022 at 12:15:35AM -0700, Noah Misch wrote:\n> On Tue, May 17, 2022 at 11:50:51AM +1200, Thomas Munro wrote:\n> > 027_stream_regress.pl has:\n> > \n> > if (PostgreSQL::Test::Utils::has_wal_read_bug)\n> > {\n> > # We'd prefer to use Test::More->builder->todo_start, but the bug causes\n> > # this test file to die(), not merely to fail.\n> > plan skip_all => 'filesystem bug';\n> > }\n> > \n> > Is the die() referenced there the one from the system_or_bail() call\n> > that commit a096813b got rid of?\n> \n> No, it was the 'croak \"timed out waiting for catchup\"',\n> e.g. https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-01-25%2016%3A56%3A26\n> \n> > Here's a failure in 031_recovery_conflict.pl that smells like\n> > concurrent pread() corruption:\n> > \n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-05-16%2015%3A45%3A54\n> > \n> > 2022-05-16 18:10:33.375 CEST [52106:1] LOG: started streaming WAL\n> > from primary at 0/3000000 on timeline 1\n> > 2022-05-16 18:10:33.621 CEST [52105:5] LOG: incorrect resource\n> > manager data checksum in record at 0/338FDC8\n> > 2022-05-16 18:10:33.622 CEST [52106:2] FATAL: terminating walreceiver\n> > process due to administrator command\n> \n> Agreed. Here, too:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tadarida&dt=2022-05-09%2015%3A46%3A03\n> \n> > Presumably we also need the has_wal_read_bug kludge in all these new\n> > tests that use replication.\n> \n> That is an option. One alternative is to reconfigure those three animals to\n> remove --enable-tap-tests. Another alternative is to make the build system\n> skip all files of all TAP suites on affected systems. Handling this on a\n> file-by-file basis seemed reasonable to me when only two files had failed that\n> way. Now, five files have failed. 
We have wait_for_catchup calls in\n> fifty-one files, and I wouldn't have chosen the has_wal_read_bug approach if I\n> had expected fifty-one files to eventually call it. I could tolerate it,\n> though.\n\nSquashing another test that failed multiple times (commit a9f8ca6) led me to\nthink of another option, attached. When wait_for_catchup() fails under\nhas_wal_read_bug(), end the suite with an abrupt success. Thoughts?",
"msg_date": "Sat, 29 Oct 2022 20:16:39 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: has_wal_read_bug"
}
] |
[
{
"msg_contents": "Hi all,\n\nToast compression is supported for LZ4, and thanks to the refactoring\nwork done with compression methods assigned to an attribute, adding\nsupport for more methods is straight-forward, as long as we don't\nsupport more than 4 methods as the compression ID is stored within the\nfirst 2 bits of the raw length.\n\nDo we have an argument against supporting zstd for this stuff?\nZstandard compresses a bit more than LZ4 at the cost of some extra\nCPU, outclassing easily pglz, but those facts are known, and zstd has\nbenefits over LZ4 when one is ready to pay more CPU for the extra\ncompression.\n\nIt took me a couple of hours to get that done. I have not added any\ntests for pg_dump or cross-checks with the default compressions as\nthis is basically what compression.sql already does, so this patch\nincludes a minimum to look after the compression, decompression and\nslice decompression. Another thing is that the errors generated by\nSET default_toast_compression make the output generated\nbuild-dependent, and that becomes annoying once there is more than one\ncompression option. The attached removes those cases for simplicity,\nand perhaps we'd better remove from compression.sql all the LZ4-only\ntests. ZSTD_decompress() does not allow the use of a destination\nbuffer lower than the full decompressed size, but similarly to base\nbackups, streams seem to handle the case of slices fine.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 17 May 2022 13:19:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Zstandard support for toast compression"
},
{
"msg_contents": "On Tue, May 17, 2022 at 12:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Toast compression is supported for LZ4, and thanks to the refactoring\n> work done with compression methods assigned to an attribute, adding\n> support for more methods is straight-forward, as long as we don't\n> support more than 4 methods as the compression ID is stored within the\n> first 2 bits of the raw length.\n\nYeah - I think we had better reserve the fourth bit pattern for\nsomething extensible e.g. another byte or several to specify the\nactual method, so that we don't have a hard limit of 4 methods. But\neven with such a system, the first 3 methods will always and forever\nbe privileged over all others, so we'd better not make the mistake of\nadding something silly as our third algorithm.\n\nI don't particularly have anything against adding Zstandard\ncompression here, but I wonder whether there's any rush. If we decide\nnot to add this now, we can always change our minds and add it later,\nbut if we decide to add it now, there's no backing it out. I'd\nprobably be inclined to wait and see if our public demands it of us.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 May 2022 14:54:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, May 17, 2022 at 12:19 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > Toast compression is supported for LZ4, and thanks to the refactoring\n> > work done with compression methods assigned to an attribute, adding\n> > support for more methods is straight-forward, as long as we don't\n> > support more than 4 methods as the compression ID is stored within the\n> > first 2 bits of the raw length.\n> \n> Yeah - I think we had better reserve the fourth bit pattern for\n> something extensible e.g. another byte or several to specify the\n> actual method, so that we don't have a hard limit of 4 methods. But\n> even with such a system, the first 3 methods will always and forever\n> be privileged over all others, so we'd better not make the mistake of\n> adding something silly as our third algorithm.\n\nIn such a situation, would they really end up being properly distinct\nwhen it comes to what our users see..? I wouldn't really think so.\n\n> I don't particularly have anything against adding Zstandard\n> compression here, but I wonder whether there's any rush. If we decide\n> not to add this now, we can always change our minds and add it later,\n> but if we decide to add it now, there's no backing it out. I'd\n> probably be inclined to wait and see if our public demands it of us.\n\nIf anything, this strikes me as a reason to question using a bit for LZ4\nand not a mark against Zstd. Still tho- there seems like a clear path\nto having more than 4 when we get demand for it, and here's a patch for\nwhat is pretty clearly one of the better compression methods out there\ntoday. As another point, while pgbackrest supports gzip, lz4, zstd, and\nbzip2, where it's supported, zstd seems to be the most used. We had\ngzip first as zstd wasn't really a proper thing at the time, and lz4 for\nspeed. 
Bzip2 was added more as it was easy to do and of some interest\non systems that didn't have zstd but I wouldn't support adding it to PG\nas I'd hope that nearly all systems where v16 is deployed will have Zstd\nsupport.\n\n+1 for adding Zstd for me.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 May 2022 15:29:47 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n>> Yeah - I think we had better reserve the fourth bit pattern for\n>> something extensible e.g. another byte or several to specify the\n>> actual method, so that we don't have a hard limit of 4 methods. But\n>> even with such a system, the first 3 methods will always and forever\n>> be privileged over all others, so we'd better not make the mistake of\n>> adding something silly as our third algorithm.\n\n> In such a situation, would they really end up being properly distinct\n> when it comes to what our users see..? I wouldn't really think so.\n\nIt should be transparent to users, sure, but the point is that the\nfirst three methods will have a storage space advantage over others.\nPlus we'd have to do some actual work to create that extension mechanism.\n\nI'm with Robert in that I do not see any urgency to add another method.\nThe fact that Stephen is already questioning whether LZ4 should have\nbeen added first is not making me any more eager to jump here.\nCompression methods come, and they go, and we do not serve anyone's\ninterest by being early adopters.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 16:01:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Robert Haas (robertmhaas@gmail.com) wrote:\n> >> Yeah - I think we had better reserve the fourth bit pattern for\n> >> something extensible e.g. another byte or several to specify the\n> >> actual method, so that we don't have a hard limit of 4 methods. But\n> >> even with such a system, the first 3 methods will always and forever\n> >> be privileged over all others, so we'd better not make the mistake of\n> >> adding something silly as our third algorithm.\n> \n> > In such a situation, would they really end up being properly distinct\n> > when it comes to what our users see..? I wouldn't really think so.\n> \n> It should be transparent to users, sure, but the point is that the\n> first three methods will have a storage space advantage over others.\n> Plus we'd have to do some actual work to create that extension mechanism.\n> \n> I'm with Robert in that I do not see any urgency to add another method.\n> The fact that Stephen is already questioning whether LZ4 should have\n> been added first is not making me any more eager to jump here.\n> Compression methods come, and they go, and we do not serve anyone's\n> interest by being early adopters.\n\nI'm getting a bit of deja-vu here from when I was first trying to add\nTRUNCATE as a GRANT'able option and being told we didn't want to burn\nthose precious bits.\n\nBut, fine, then I'd suggest to Michael that he work on actively solving\nthe problem we've now got where we have such a limited number of bits,\nand then come back and add Zstd after that's done. 
I disagree that we\nshould be pushing back so hard on adding Zstd in general, but if we are\ngoing to demand that we have a way to support more than these few\ncompression options before ever adding any new ones (considering how\nlong it's taken Zstd to get to the level it is now, we're talking\nabout close to a *decade* from such a new algorithm showing up and\ngetting to a similar level of adoption, and then apparently more because\nwe don't feel it's 'ready' yet), then let's work towards that and not\ncomplain when it shows up that it's not needed yet (as I fear would\nhappen ... and just leave us unable to make useful progress).\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 May 2022 16:12:14 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Tue, May 17, 2022 at 04:12:14PM -0400, Stephen Frost wrote:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> I'm with Robert in that I do not see any urgency to add another method.\n\nOkay.\n\n>> The fact that Stephen is already questioning whether LZ4 should have\n>> been added first is not making me any more eager to jump here.\n>> Compression methods come, and they go, and we do not serve anyone's\n>> interest by being early adopters.\n\nFWIW, I don't really question the choice of LZ4 as an alternative to\npglz. One very easily outclasses the other, guess which one. Perhaps\nwe would have gone with zstd back in the day, but here we are, andx\nthis option is already very good in itself.\n\nZstandard may not be old enough to vote, being only 7, but its use is\nalready quite spread. So I would not be surprised if it remains\npopular for many years. We'll see how it goes.\n\n> But, fine, then I'd suggest to Michael that he work on actively solving\n> the problem we've now got where we have such a limited number of bits,\n> and then come back and add Zstd after that's done. I disagree that we\n> should be pushing back so hard on adding Zstd in general, but if we are\n> going to demand that we have a way to support more than these few\n> compression options before ever adding any new ones (considering how\n> long it's taken Zstd to get to the level it is now, we're talking\n> about close to a *decade* from such a new algorithm showing up and\n> getting to a similar level of adoption, and then apparently more because\n> we don't feel it's 'ready' yet), then let's work towards that and not\n> complain when it shows up that it's not needed yet (as I fear would\n> happen ... and just leave us unable to make useful progress).\n\nSaying that, I agree with the point to not set in stone the 4th bit\nused in the toast compression header, and that is would be better to\nuse it for a more extensible design. 
Didn't the proposal to introduce\nthe custom compression mechanisms actually touch this area? The set\nof macros we have currently for the toast values in a varlena are\nalready kind of hard to figure out. Making that harder to parse would\nnot be appealing, definitely.\n--\nMichael",
"msg_date": "Wed, 18 May 2022 14:38:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Tue, May 17, 2022 at 4:12 PM Stephen Frost <sfrost@snowman.net> wrote:\n> I'm getting a bit of deja-vu here from when I was first trying to add\n> TRUNCATE as a GRANT'able option and being told we didn't want to burn\n> those precious bits.\n\nRight, it's the same issue ... although in that case there are a lot\nmore bits available than we have here.\n\n> But, fine, then I'd suggest to Michael that he work on actively solving\n> the problem we've now got where we have such a limited number of bits,\n> and then come back and add Zstd after that's done. I disagree that we\n> should be pushing back so hard on adding Zstd in general, but if we are\n> going to demand that we have a way to support more than these few\n> compression options before ever adding any new ones (considering how\n> long it's taken Zstd to get to the level it is now, we're talking\n> about close to a *decade* from such a new algorithm showing up and\n> getting to a similar level of adoption, and then apparently more because\n> we don't feel it's 'ready' yet), then let's work towards that and not\n> complain when it shows up that it's not needed yet (as I fear would\n> happen ... and just leave us unable to make useful progress).\n\nIt's kind of ridiculous to talk about \"pushing back so hard on adding\nZstd in general\" when there's like 2 emails expressing only moderate\nskepticism. I clearly said I wasn't 100% against it.\n\nBut I want to point out here that you haven't really offered any kind\nof argument in favor of supporting Zstd. You basically seem to just be\narguing that it's dumb to worry about running out of bit space, and I\nthink that's just obviously false. PostgreSQL is full of things that\nare hard to improve because nearly all of the bit space was gobbled up\nearly on, and there's not much left for future features. The heap\ntuple header format is an excellent example of this. 
Surely if we were\ndesigning that over again today we wouldn't have expended some of\nthose bits on the things we did.\n\nI do understand that Zstandard is a good compression algorithm, and if\nwe had an extensibility mechanism here where one of the four possible\nbit patterns then indicates that the next byte (or two or four) stores\nthe real algorithm type, then what about adding Zstandard that way\ninstead of consuming one of our four primary bit patterns? That way\nwe'd have this option for people who want it, but we'd have more\noptions for the future instead of fewer.\n\ni.e. something like:\n\n00 = PGLZ\n01 = LZ4\n10 = reserved for future emergencies\n11 = extended header with additional type byte (1 of 256 possible\nvalues reserved for Zstandard)\n\nI wouldn't be worried about getting backed into a corner with that approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 May 2022 12:17:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Tue, May 17, 2022 at 02:54:28PM -0400, Robert Haas wrote:\n> I don't particularly have anything against adding Zstandard\n> compression here, but I wonder whether there's any rush. If we decide\n> not to add this now, we can always change our minds and add it later,\n> but if we decide to add it now, there's no backing it out. I'd\n> probably be inclined to wait and see if our public demands it of us.\n\n+1\n\nOne consideration is that zstd with negative compression levels is comparable\nto LZ4, and with positive levels gets better compression. It can serve both\npurposes (oltp vs DW or storage-limited vs cpu-limited).\n\nIf zstd is supported, then for sure at least its compression level should be\nconfigurable. default_toast_compression should support it.\nhttps://commitfest.postgresql.org/35/3102/\n\nAlso, zstd is a few years newer than lz4. Which I hope means that the API is a\nbit better/further advanced - but (as we've seen) may still be evolving.\n\nZstd allows some of its options to be set by environment variable - in\nparticular, the number of threads. We should consider explicitly setting\nthat to zero in the toast context unless we're convinced it's no issue for\nevery backend (not just basebackup).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 18 May 2022 14:02:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Wed, May 18, 2022 at 9:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But I want to point out here that you haven't really offered any kind\n> of argument in favor of supporting Zstd. You basically seem to just be\n> arguing that it's dumb to worry about running out of bit space, and I\n> think that's just obviously false.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 18 May 2022 12:28:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Wed, May 18, 2022 at 12:17:16PM -0400, Robert Haas wrote:\n> i.e. something like:\n> \n> 00 = PGLZ\n> 01 = LZ4\n> 10 = reserved for future emergencies\n> 11 = extended header with additional type byte (1 of 256 possible\n> values reserved for Zstandard)\n\nBtw, shouldn't we have something a bit more, err, extensible for the\ndesign of an extensible varlena header? If we keep it down to some\nbitwise information, we'd be fine for a long time but it would be\nannoying to review again an extended design if we need to extend it\nwith more data.\n--\nMichael",
"msg_date": "Thu, 19 May 2022 17:20:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Wed, May 18, 2022 at 9:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I do understand that Zstandard is a good compression algorithm, and if\n> we had an extensibility mechanism here where one of the four possible\n> bit patterns then indicates that the next byte (or two or four) stores\n> the real algorithm type, then what about adding Zstandard that way\n> instead of consuming one of our four primary bit patterns? That way\n> we'd have this option for people who want it, but we'd have more\n> options for the future instead of fewer.\n>\n> i.e. something like:\n>\n> 00 = PGLZ\n> 01 = LZ4\n> 10 = reserved for future emergencies\n> 11 = extended header with additional type byte (1 of 256 possible\n> values reserved for Zstandard)\n>\n\n+1 for such an extensible mechanism if we decide to go with Zstandard\ncompression algorithm. To decide that won't it make sense to see some\nnumbers as Michael already has a patch for the new algorithm?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 May 2022 16:10:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "В письме от вторник, 17 мая 2022 г. 23:01:07 MSK пользователь Tom Lane \nнаписал:\n\nHi! I came to this branch looking for a patch to review, but I guess I would \njoin the discussion instead of reading the code.\n\n> >> Yeah - I think we had better reserve the fourth bit pattern for\n> >> something extensible e.g. another byte or several to specify the\n> >> actual method, so that we don't have a hard limit of 4 methods. But\n> >> even with such a system, the first 3 methods will always and forever\n> >> be privileged over all others, so we'd better not make the mistake of\n> >> adding something silly as our third algorithm.\n> > \n> > In such a situation, would they really end up being properly distinct\n> > when it comes to what our users see..? I wouldn't really think so.\n> \n> It should be transparent to users, sure, but the point is that the\n> first three methods will have a storage space advantage over others.\n> Plus we'd have to do some actual work to create that extension mechanism.\n\nPostgres is well known for extensiblility. One can write your own \nimplementation of almost everything and make it an extension.\nThough one would hardly need more than one (or two) additional compression \nmethods, but which method one will really need is hard to say. \n\nSo I guess it would be much better to create and API for creating and \nregistering own compression method and create build in Zstd compression method \nthat can be used (or optionally not used) via that API.\n\nMoreover I guess this API (may be with some modification) can be used for \nseamless data encryption, for example. \n\nSo I guess it would be better to make it extensible from the start and use \nthis precious bit for compression method chosen by user, and may be later \nextend it with another byte of compression method bits, if it is needed.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su",
"msg_date": "Thu, 19 May 2022 20:52:23 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Thu, May 19, 2022 at 4:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Btw, shouldn't we have something a bit more, err, extensible for the\n> design of an extensible varlena header? If we keep it down to some\n> bitwise information, we'd be fine for a long time but it would be\n> annoying to review again an extended design if we need to extend it\n> with more data.\n\nWhat do you have in mind?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 May 2022 16:12:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "Greetings,\n\n* Nikolay Shaplov (dhyan@nataraj.su) wrote:\n> В письме от вторник, 17 мая 2022 г. 23:01:07 MSK пользователь Tom Lane \n> написал:\n> \n> Hi! I came to this branch looking for a patch to review, but I guess I would \n> join the discussion instead of reading the code.\n\nSeems that's what would be helpful now thanks for joining the\ndiscussion.\n\n> > >> Yeah - I think we had better reserve the fourth bit pattern for\n> > >> something extensible e.g. another byte or several to specify the\n> > >> actual method, so that we don't have a hard limit of 4 methods. But\n> > >> even with such a system, the first 3 methods will always and forever\n> > >> be privileged over all others, so we'd better not make the mistake of\n> > >> adding something silly as our third algorithm.\n> > > \n> > > In such a situation, would they really end up being properly distinct\n> > > when it comes to what our users see..? I wouldn't really think so.\n> > \n> > It should be transparent to users, sure, but the point is that the\n> > first three methods will have a storage space advantage over others.\n> > Plus we'd have to do some actual work to create that extension mechanism.\n> \n> Postgres is well known for extensiblility. One can write your own \n> implementation of almost everything and make it an extension.\n> Though one would hardly need more than one (or two) additional compression \n> methods, but which method one will really need is hard to say. \n\nA thought I've had before is that it'd be nice to specify a particular\ncompression method on a data type basis. Wasn't the direction that this\nwas taken, for reasons, but I wonder about perhaps still having a data\ntype compression method and perhaps one of these bits might be \"the data\ntype's (default?) compression method\". 
Even so though, having an\nextensible way to add new compression methods would be a good thing.\n\nFor compression methods that we already support in other parts of the\nsystem, seems clear that we should allow those to be used for column\ncompression too. We should certainly also support a way to specify on a\ncompression-type specific level what the compression level should be\nthough.\n\n> So I guess it would be much better to create and API for creating and \n> registering own compression method and create build in Zstd compression method \n> that can be used (or optionally not used) via that API.\n\nWhile I generally agree that we want to provide extensibility in this\narea, given that we already have zstd as a compile-time option and it\nexists in other parts of the system, I don't think it makes sense to\nrequire users to install an extension to use it.\n\n> Moreover I guess this API (may be with some modification) can be used for \n> seamless data encryption, for example. \n\nPerhaps.. but this kind of encryption wouldn't allow indexing and\ncertainly lots of other metadata would still be unencrypted (the entire\nsystem catalog being a good example..).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 20 May 2022 16:17:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Thu, May 19, 2022 at 04:12:01PM -0400, Robert Haas wrote:\n> On Thu, May 19, 2022 at 4:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Btw, shouldn't we have something a bit more, err, extensible for the\n>> design of an extensible varlena header? If we keep it down to some\n>> bitwise information, we'd be fine for a long time but it would be\n>> annoying to review again an extended design if we need to extend it\n>> with more data.\n> \n> What do you have in mind?\n\nA per-varlena checksum was one thing that came into my mind.\n--\nMichael",
"msg_date": "Mon, 23 May 2022 13:33:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Mon, May 23, 2022 at 12:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, May 19, 2022 at 04:12:01PM -0400, Robert Haas wrote:\n> > On Thu, May 19, 2022 at 4:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Btw, shouldn't we have something a bit more, err, extensible for the\n> >> design of an extensible varlena header? If we keep it down to some\n> >> bitwise information, we'd be fine for a long time but it would be\n> >> annoying to review again an extended design if we need to extend it\n> >> with more data.\n> >\n> > What do you have in mind?\n>\n> A per-varlena checksum was one thing that came into my mind.\n\nIt's a bit hard for me to believe that such a thing would be\ndesirable. I think it makes more sense to checksum blocks than datums,\nbecause:\n\n(1) There might be a lot of really small datums, and storing checksums\nfor all of them could be costly, or\n(2) The datums could on the other hand be really big, and then the\nchecksum is pretty non-specific about where the problem has happened.\n\nYMMV, of course.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 09:32:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Fri, May 20, 2022 at 4:17 PM Stephen Frost <sfrost@snowman.net> wrote:\n> A thought I've had before is that it'd be nice to specify a particular\n> compression method on a data type basis. Wasn't the direction that this\n> was taken, for reasons, but I wonder about perhaps still having a data\n> type compression method and perhaps one of these bits might be \"the data\n> type's (default?) compression method\". Even so though, having an\n> extensible way to add new compression methods would be a good thing.\n\nIf we look at pglz vs. LZ4, there's no argument that it makes more\nsense to use LZ4 for some data types and PGLZ for others. Indeed, it's\nunclear why you would ever use PGLZ if you had LZ4 as an option. Even\nif we imagine a world in which a full spectrum of modern compressors -\nZstandard, bzip2, gzip, and whatever else you want - it's basically a\ntime/space tradeoff. You will either want a fast compressor or a good\none.\n\nThe situation in which this sort of thing might make sense is if we\nhad a compressor that is specifically designed to work well on a\ncertain data type, and especially if the code for that data type could\nperform some operations directly on the compressed representation.\n From what I understand, the ideas that people have in this area around\njsonb require that there be a dictionary available. For instance, you\nmight scan a jsonb column, collect all the keys that occur frequently,\nput them in a dictionary, and then use them to compress the column. I\ncan see that being effective, but the infrastructure to store that\ndictionary someplace is infrastructure we have not got.\n\nIt may be better to try to handle these use cases by building the\ncompression into the data type representation proper, perhaps\ndisabling the general-purpose TOAST compression stuff, rather than by\nmaking it part of TOAST compression. 
We found during the\nimplementation of LZ4 TOAST compression that it's basically impossible\nto keep a compressed datum from \"leaking out\" into other parts of the\nsystem. We have to assume that any datum we create by TOAST\ncompression may continue to exist somewhere in the system long after\nthe table in which it was originally stored is gone. So, while a\ndictionary could be used for compression, it would have to be done in\na way where that dictionary wasn't required to decompress, unless\nwe're prepared to prohibit ever dropping a dictionary, which sounds\nlike not a lot of fun. If the compression were part of the data type\ninstead of part of TOAST compression, we would dodge this problem.\n\nI think that might be a better way to go.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 09:44:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "В письме от пятница, 20 мая 2022 г. 23:17:42 MSK пользователь Stephen Frost \nнаписал:\n\n> While I generally agree that we want to provide extensibility in this\n> area, given that we already have zstd as a compile-time option and it\n> exists in other parts of the system, I don't think it makes sense to\n> require users to install an extension to use it.\nI mean that there can be Compression Method Provider, either built in postgres \ncore, or implemented in extension. And one will need to create Compression \nmethod using desired Compression Method Provider. Like (it is just pure \nimagination) CREATE COMPRESSION METHOD MY_COMPRESSION USING \n'my_compression_provider'; This will assign certain bits combination to that \nmethod, and later one can use that method for TOAST compression...\n\n> > Moreover I guess this API (may be with some modification) can be used for\n> > seamless data encryption, for example.\n> \n> Perhaps.. but this kind of encryption wouldn't allow indexing\nYes, this will require some more efforts. But for encrypting pure storage that \nAPI can be quite useful.\n\n> and certainly lots of other metadata would still be unencrypted (the entire\n> system catalog being a good example..).\nIn many it is enough to encrypt only sensible information itself, not whole \nmetadata.\nMy point was not to discuss DB encryption, it is quite complex issue, my point \nwas to point out that API that allows custom compression methods may became \nuseful for other solutions. Encryption was just first example that came in my \nmind. Robert Haas has another example for compression method optimized for \ncertain data type. So it is good if you can have method of your own.\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su",
"msg_date": "Tue, 24 May 2022 11:09:45 +0300",
"msg_from": "Nikolay Shaplov <dhyan@nataraj.su>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "Hi,\n\n> Yeah - I think we had better reserve the fourth bit pattern for\n> something extensible e.g. another byte or several to specify the\n> actual method, so that we don't have a hard limit of 4 methods.\n\nTWIMC there is an ongoing discussion [1] of making TOAST pointers\nextendable since this is a dependency for several patches that are\ncurrently in development.\n\nTL;DR the consensus so far seems to be using varattrib_1b_e.va_tag as\na sign of an alternative / extended TOAST pointer content. For the\non-disk values va_tag currently stores a constant 18 (VARTAG_ONDISK).\nWhere 18 is sizeof(varatt_external) + /* header size */ 2, which seems\nto be not extremely useful anyway. If you are interested in the topic\nplease consider joining the thread.\n\n[1]: https://postgr.es/m/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 23 May 2023 17:56:13 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Zstandard support for toast compression"
},
{
"msg_contents": "On Tue, May 23, 2023 at 05:56:13PM +0300, Aleksander Alekseev wrote:\n> TWIMC there is an ongoing discussion [1] of making TOAST pointers\n> extendable since this is a dependency for several patches that are\n> currently in development.\n\nThanks for the ping. I have seen and read the other thread, and yes,\nthat's an exciting proposal, not only for what's specific to the\nthread here.\n--\nMichael",
"msg_date": "Wed, 24 May 2023 07:29:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Zstandard support for toast compression"
}
] |
[
{
"msg_contents": "Hi,\n\nAttaching a tiny patch to fix a typo - replace primary_slotname with\ncorrect parameter name primary_slot_name in walreceiver.c code\ncomments.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 17 May 2022 10:38:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a typo in walreceiver.c"
},
{
"msg_contents": "On Tue, May 17, 2022 at 10:38:23AM +0530, Bharath Rupireddy wrote:\n> Attaching a tiny patch to fix a typo - replace primary_slotname with\n> correct parameter name primary_slot_name in walreceiver.c code\n> comments.\n\nYep, indeed. Will fix after beta1 is stamped.\n--\nMichael",
"msg_date": "Tue, 17 May 2022 16:52:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix a typo in walreceiver.c"
}
] |
[
{
"msg_contents": "Hello.\n\nWhile I looked into a patch, I noticed that check_tuple_attribute does\nnot run the check for compessed data even if a compressed data is\ngiven.\n\ncheck_tuple_attribute()\n..\n\tstruct varatt_external toast_pointer;\n..\n\tVARATT_EXTERNAL_GET_POINTER(toast_pointer, attr);\n..\n\tif (VARATT_IS_COMPRESSED(&toast_pointer))\n {\n\nSince toast_pointer is a varatt_exteral it should be\nVARATT_EXTERNAL_IS_COMPRESSED instead. Since the just following\ncorss-check is just the reverse of what the macro does, it is useless.\n\nWhat do you think about the attached? The problem code is new in\nPG15.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 17 May 2022 16:27:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "amcheck is using a wrong macro to check compressed-ness"
},
{
"msg_contents": "On Tue, May 17, 2022 at 04:27:19PM +0900, Kyotaro Horiguchi wrote:\n> What do you think about the attached? The problem code is new in\n> PG15.\n\nAdding Robert in CC, as this has been added with bd807be. I have\nadded an open item for now.\n--\nMichael",
"msg_date": "Tue, 17 May 2022 16:58:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: amcheck is using a wrong macro to check compressed-ness"
},
{
"msg_contents": "On Tue, May 17, 2022 at 04:58:11PM +0900, Michael Paquier wrote:\n> Adding Robert in CC, as this has been added with bd807be. I have\n> added an open item for now.\n\nWith the individual in CC, that's even better.\n--\nMichael",
"msg_date": "Wed, 18 May 2022 09:55:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: amcheck is using a wrong macro to check compressed-ness"
},
{
"msg_contents": "On Wed, May 18, 2022 at 09:55:07AM +0900, Michael Paquier wrote:\n> On Tue, May 17, 2022 at 04:58:11PM +0900, Michael Paquier wrote:\n>> Adding Robert in CC, as this has been added with bd807be. I have\n>> added an open item for now.\n> \n> With the individual in CC, that's even better.\n\nThree weeks later, ping. Robert, could you look at this thread?\n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 10:48:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: amcheck is using a wrong macro to check compressed-ness"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 10:48:27AM +0900, Michael Paquier wrote:\n> Three weeks later, ping. Robert, could you look at this thread?\n\nAnd again. beta2 is planned to next week, and this is still an open\nitem. I could look at that by myself, but I always tend to get easily\nconfused with all the VARATT macros when it comes to compressed blobs,\nso it would take a bit of time.\n--\nMichael",
"msg_date": "Wed, 22 Jun 2022 10:56:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: amcheck is using a wrong macro to check compressed-ness"
},
{
"msg_contents": "On Tue, Jun 21, 2022 at 9:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Thu, Jun 09, 2022 at 10:48:27AM +0900, Michael Paquier wrote:\n> > Three weeks later, ping. Robert, could you look at this thread?\n>\n> And again. beta2 is planned to next week, and this is still an open\n> item. I could look at that by myself, but I always tend to get easily\n> confused with all the VARATT macros when it comes to compressed blobs,\n> so it would take a bit of time.\n\nOops, I missed this thread. I think the patch is correct, so I have\ncommitted it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 13:14:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: amcheck is using a wrong macro to check compressed-ness"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 01:14:57PM -0400, Robert Haas wrote:\n> Oops, I missed this thread. I think the patch is correct, so I have\n> committed it.\n\nThanks, Robert.\n--\nMichael",
"msg_date": "Fri, 24 Jun 2022 12:59:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: amcheck is using a wrong macro to check compressed-ness"
}
] |
[
{
"msg_contents": "I found it annoying that sql_help.c contains a literal parameter as a\ntranslatable string.\n\nThe cause is that create_help.pl treats <literal>match</> as a\nreplaceable. The attached excludes literals from translatable strings.\n\nBy a quick look it seems to me that the \"match\" in \"COPY.. HEADER\nmatch\" is the first and only instance of a literal parameter as of\nPG15.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 17 May 2022 17:43:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I found it annoying that sql_help.c contains a literal parameter as a\n> translatable string.\n\n> The cause is that create_help.pl treats <literal>match</> as a\n> replaceable. The attached excludes literals from translatable strings.\n\n> By a quick look it seems to me that the \"match\" in \"COPY.. HEADER\n> match\" is the first and only instance of a literal parameter as of\n> PG15.\n\nIsn't that a documentation bug rather than a problem with create_help?\nI see what you're talking about:\n\n HEADER [ <replaceable class=\"parameter\">boolean</replaceable> | <literal>match</literal> ]\n\nbut that just seems flat-out wrong. If \"match\" is a keyword it should\nbe rendered like other keywords. I'm not very interested in splitting\nhairs about whether the grammar thinks it is a keyword --- it looks like\none to a user. So I think\n\n HEADER [ <replaceable class=\"parameter\">boolean</replaceable> | MATCH ]\n\nwould be a better solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 11:09:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "At Tue, 17 May 2022 11:09:23 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> but that just seems flat-out wrong. If \"match\" is a keyword it should\n> be rendered like other keywords. I'm not very interested in splitting\n> hairs about whether the grammar thinks it is a keyword --- it looks like\n> one to a user. So I think\n> \n> HEADER [ <replaceable class=\"parameter\">boolean</replaceable> | MATCH ]\n> \n> would be a better solution.\n\nOh, agreed. Thanks for the correction. By the way the error message in\ndefGetCopyHeaderChoice is as follows.\n\n\"%s requires a Boolean value or \\\"match\\\"\"\n\nShould it be \"%s requires a boolean value or MATCH\"?\n\nAt least I think \"Boolean\" should be un-capitalized. The second\nattached replaces \"Booean\" with \"boolean\" and the \\\"match\\\" above to\nMATCH.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 18 May 2022 09:58:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "On 17.05.22 17:09, Tom Lane wrote:\n>> By a quick look it seems to me that the \"match\" in \"COPY.. HEADER\n>> match\" is the first and only instance of a literal parameter as of\n>> PG15.\n> Isn't that a documentation bug rather than a problem with create_help?\n\nYeah, there is no need for a <literal> inside a <synopsis>. So I just \nremoved it.\n\n\n",
"msg_date": "Wed, 18 May 2022 18:22:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "On 18.05.22 02:58, Kyotaro Horiguchi wrote:\n> Oh, agreed. Thanks for the correction. By the way the error message in\n> defGetCopyHeaderChoice is as follows.\n> \n> \"%s requires a Boolean value or \\\"match\\\"\"\n> \n> Should it be \"%s requires a boolean value or MATCH\"?\n\nThe documentation of COPY currently appears to use the capitalization\n\n OPTION value\n\nso I left it lower-case.\n\n> At least I think \"Boolean\" should be un-capitalized.\n\n\"Boolean\" is correct; see <https://en.wiktionary.org/wiki/Boolean> for \nexample.\n\n\n",
"msg_date": "Wed, 18 May 2022 18:23:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 17.05.22 17:09, Tom Lane wrote:\n>> Isn't that a documentation bug rather than a problem with create_help?\n\n> Yeah, there is no need for a <literal> inside a <synopsis>. So I just \n> removed it.\n\nI think you should have upper-cased MATCH while at it, to make it clear\nthat it acts like a keyword in this context. The current situation is\nquite unreadable in plain-ASCII output:\n\nregression=# \\help copy\nCommand: COPY\n...\n HEADER [ boolean | match ]\n...\n\nSince \"boolean\" is a metasyntactic variable here, it's absolutely\nnot obvious that \"match\" isn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 May 2022 12:29:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "At Wed, 18 May 2022 18:23:57 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> \"Boolean\" is correct; see <https://en.wiktionary.org/wiki/Boolean> for\n> example.\n\nOk, so, don't we in turn need to replace \"boolean\"s with \"Boolean\"?\n\n\"only boolean operators can have negators\"\n\"only boolean operators can have restriction selectivity\"\n...\n\nAnd I'm not sure how to do with \"bool\". Should it be \"Boolean\" instead\nfrom the point of uniformity?\n\nerrmsg(\"only bool, numeric, and text types could be \"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 19 May 2022 11:12:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "On 18.05.22 18:29, Tom Lane wrote:\n> I think you should have upper-cased MATCH while at it, to make it clear\n> that it acts like a keyword in this context. The current situation is\n> quite unreadable in plain-ASCII output:\n> \n> regression=# \\help copy\n> Command: COPY\n> ...\n> HEADER [ boolean | match ]\n> ...\n> \n> Since \"boolean\" is a metasyntactic variable here, it's absolutely\n> not obvious that \"match\" isn't.\n\ndone\n\n\n",
"msg_date": "Mon, 23 May 2022 13:11:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
},
{
"msg_contents": "On 19.05.22 04:12, Kyotaro Horiguchi wrote:\n> At Wed, 18 May 2022 18:23:57 +0200, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in\n>> \"Boolean\" is correct; see <https://en.wiktionary.org/wiki/Boolean> for\n>> example.\n> \n> Ok, so, don't we in turn need to replace \"boolean\"s with \"Boolean\"?\n> \n> \"only boolean operators can have negators\"\n> \"only boolean operators can have restriction selectivity\"\n> ...\n> \n> And I'm not sure how to do with \"bool\". Should it be \"Boolean\" instead\n> from the point of uniformity?\n> \n> errmsg(\"only bool, numeric, and text types could be \"\n\nThe SQL data type is called BOOLEAN, and we typically lower-case type \nnames in PostgreSQL, so messages should be like\n\n column %s should be of type integer\n column %s should be of type boolean\n\nAs an adjective, not a type, it should be spelled Boolean, because \nthat's how it's in the dictionary (cf. Gaussian).\n\n %s should have a string value\n %s should have a Boolean value\n\n\"bool\" should normally not appear in user-facing messages, unless we are \ndealing with internal type names (cf. int4) or C types.\n\nOf course, the lines between all of the above are blurry.\n\n\n",
"msg_date": "Mon, 23 May 2022 13:17:12 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: create_help.pl treats <literal> as replaceable"
}
] |
[
{
"msg_contents": "This adds additional variants of palloc, pg_malloc, etc. that \nencapsulate common usage patterns and provide more type safety.\n\nExamples:\n\n- result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));\n+ result = palloc_obj(IndexBuildResult);\n\n- collector->tuples = (IndexTuple *) palloc(sizeof(IndexTuple) *\n collector->lentuples);\n+ collector->tuples = palloc_array(IndexTuple, collector->lentuples);\n\nOne common point is that the new interfaces all have a return type that \nautomatically matches what they are allocating, so you don't need any \ncasts nor have to manually make sure the size matches the expected \nresult. Besides the additional safety, the notation is also more \ncompact, as you can see above.\n\nInspired by the talloc library.\n\nThe interesting changes are in fe_memutils.h and palloc.h. The rest of \nthe patch is just randomly sprinkled examples to test/validate the new \nadditions.",
"msg_date": "Tue, 17 May 2022 13:41:03 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Expand palloc/pg_malloc API"
},
{
"msg_contents": "On Tue, May 17, 2022 at 5:11 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> This adds additional variants of palloc, pg_malloc, etc. that\n> encapsulate common usage patterns and provide more type safety.\n>\n> Examples:\n>\n> - result = (IndexBuildResult *) palloc(sizeof(IndexBuildResult));\n> + result = palloc_obj(IndexBuildResult);\n>\n> - collector->tuples = (IndexTuple *) palloc(sizeof(IndexTuple) *\n> collector->lentuples);\n> + collector->tuples = palloc_array(IndexTuple, collector->lentuples);\n>\n> One common point is that the new interfaces all have a return type that\n> automatically matches what they are allocating, so you don't need any\n> casts nor have to manually make sure the size matches the expected\n> result. Besides the additional safety, the notation is also more\n> compact, as you can see above.\n>\n> Inspired by the talloc library.\n>\n> The interesting changes are in fe_memutils.h and palloc.h. The rest of\n> the patch is just randomly sprinkled examples to test/validate the new\n> additions.\n\nIt seems interesting. Are we always type-casting explicitly the output\nof palloc/palloc0? Does this mean the compiler takes care of\ntype-casting the returned void * to the target type?\n\nI see lots of instances where there's no explicit type-casting to the\ntarget variable type -\n retval = palloc(sizeof(GISTENTRY));\n Interval *p = palloc(sizeof(Interval));\n macaddr *v = palloc0(sizeof(macaddr)); and so on.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 17 May 2022 17:19:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, May 17, 2022 at 5:11 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> This adds additional variants of palloc, pg_malloc, etc. that\n>> encapsulate common usage patterns and provide more type safety.\n\n> I see lots of instances where there's no explicit type-casting to the\n> target variable type -\n> retval = palloc(sizeof(GISTENTRY));\n> Interval *p = palloc(sizeof(Interval));\n> macaddr *v = palloc0(sizeof(macaddr)); and so on.\n\nYeah. IMO the first of those is very poor style, because there's\nbasically nothing enforcing that you wrote the right thing in sizeof().\nThe others are a bit safer, in that at least a human can note that\nthe two types mentioned on the same line match --- but I doubt any\ncompiler would detect it if they don't. Our current preferred style\n\n Interval *p = (Interval *) palloc(sizeof(Interval));\n\nis really barely an improvement, because only two of the three types\nmentioned are going to be checked against each other.\n\nSo I think Peter's got a good idea here (I might quibble with the details\nof some of these macros). But it's not really going to move the\nsafety goalposts very far unless we make a concerted effort to make\nthese be the style everywhere. Are we willing to do that? What\nwill it do to back-patching difficulty? Dare I suggest back-patching\nthese changes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 14:43:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 17.05.22 20:43, Tom Lane wrote:\n> So I think Peter's got a good idea here (I might quibble with the details\n> of some of these macros). But it's not really going to move the\n> safety goalposts very far unless we make a concerted effort to make\n> these be the style everywhere. Are we willing to do that? What\n> will it do to back-patching difficulty? Dare I suggest back-patching\n> these changes?\n\nI think it could go like the castNode() introduction: first we adopt it \nsporadically for new code, then we change over some larger pieces of \ncode, then we backpatch the API, then someone sends in a big patch to \nchange the rest.\n\n\n",
"msg_date": "Tue, 24 May 2022 16:03:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 17.05.22 20:43, Tom Lane wrote:\n>> So I think Peter's got a good idea here (I might quibble with the details\n>> of some of these macros). But it's not really going to move the\n>> safety goalposts very far unless we make a concerted effort to make\n>> these be the style everywhere. Are we willing to do that? What\n>> will it do to back-patching difficulty? Dare I suggest back-patching\n>> these changes?\n\n> I think it could go like the castNode() introduction: first we adopt it \n> sporadically for new code, then we change over some larger pieces of \n> code, then we backpatch the API, then someone sends in a big patch to \n> change the rest.\n\nOK, that seems like a reasonable plan.\n\nI've now studied this a little more closely, and I think the\nfunctionality is fine, but I have naming quibbles.\n\n1. Do we really want distinct names for the frontend and backend\nversions of the macros? Particularly since you're layering the\nfrontend ones over pg_malloc, which has exit-on-OOM behavior?\nI think we've found that notational discrepancies between frontend\nand backend code are generally more a hindrance than a help,\nso I'm inclined to drop the pg_malloc_xxx macros and just use\n\"palloc\"-based names across the board.\n\n2. I don't like the \"palloc_ptrtype\" name at all. I see that you\nborrowed that name from talloc, but I doubt that's a precedent that\nvery many people are familiar with. To me it sounds like it might\nallocate something that's the size of a pointer, not the size of the\npointed-to object. I have to confess though that I don't have an\nobviously better name to suggest. \"palloc_pointed_to\" would be\nclear perhaps, but it's kind of long.\n\n3. Likewise, \"palloc_obj\" is perhaps less clear than it could be.\nI find palloc_array just fine though. Maybe palloc_object or\npalloc_struct? (If \"_obj\" can be traced to talloc, I'm not\nseeing where.)\n\n\nOne thought that comes to mind is that palloc_ptrtype is almost\nsurely going to be used in the style\n\n\tmyvariable = palloc_ptrtype(myvariable);\n\nand if it's not that it's very likely wrong. So maybe we should cut\nout the middleman and write something like\n\n#define palloc_instantiate(ptr) ((ptr) = (typeof(ptr)) palloc(sizeof(*(ptr))))\n...\n\tpalloc_instantiate(myvariable);\n\nI'm not wedded to \"instantiate\" here, there's probably better names.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 26 Jul 2022 17:32:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 2:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> 2. I don't like the \"palloc_ptrtype\" name at all. I see that you\n> borrowed that name from talloc, but I doubt that's a precedent that\n> very many people are familiar with.\n\n\n\n> To me it sounds like it might\n> allocate something that's the size of a pointer, not the size of the\n> pointed-to object. I have to confess though that I don't have an\n> obviously better name to suggest. \"palloc_pointed_to\" would be\n> clear perhaps, but it's kind of long.\n>\n\nI agree that ptrtype reads \"the type of a pointer\".\n\nThis may not be a C-idiom but the pointed-to thing is a \"reference\" (hence\npass by value vs pass by reference). So:\n\npalloc_ref(myvariablepointer)\n\nwill allocate using the type of the referenced object. Just like _array\nand _obj, which name the thing being used as a size template as opposed to\ninstantiate which seems more like another word for \"allocate/palloc\".\n\nDavid J.\nP.S.\n\nAdmittedly I'm still getting my head around reading pointer-using code (I\nget the general concept but haven't had to code them)....\n\n- lockrelid = palloc(sizeof(*lockrelid));\n+ lockrelid = palloc_ptrtype(lockrelid);\n\n// This definitely seems like an odd idiom until I remembered about\nshort-lived memory contexts and the lost pointers are soon destroyed there.\n\nSo lockrelid (no star) is a pointer that has an underlying reference that\nthe macro (and the original code) resolves via the *\n\nI cannot reason out whether the following would be equivalent to the above:\n\nlockrelid = palloc_obj(*lockrelid);\n\nI assume not because: typeof(lockrelid) != (*lockrelid *)",
"msg_date": "Tue, 26 Jul 2022 16:58:55 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 26.07.22 23:32, Tom Lane wrote:\n> 1. Do we really want distinct names for the frontend and backend\n> versions of the macros? Particularly since you're layering the\n> frontend ones over pg_malloc, which has exit-on-OOM behavior?\n> I think we've found that notational discrepancies between frontend\n> and backend code are generally more a hindrance than a help,\n> so I'm inclined to drop the pg_malloc_xxx macros and just use\n> \"palloc\"-based names across the board.\n\nThis seems like a question that is independent of this patch. Given \nthat both pg_malloc() and palloc() do exist in fe_memutils, I think it \nwould be confusing to only extend one part of that and not the other. \nThe amount of code is ultimately not a lot.\n\nIf we wanted to get rid of pg_malloc() altogether, maybe we could talk \nabout that.\n\n(Personally, I have always been a bit suspicious about using the name \npalloc() without memory context semantics in frontend code, but I guess \nthis is wide-spread now.)\n\n> 3. Likewise, \"palloc_obj\" is perhaps less clear than it could be.\n> I find palloc_array just fine though. Maybe palloc_object or\n> palloc_struct? (If \"_obj\" can be traced to talloc, I'm not\n> seeing where.)\n\nIn talloc, the talloc() function itself allocates an object of a given \ntype. To allocate something of a specified size, you'd use \ntalloc_size(). So those names won't map exactly. I'm fine with \npalloc_object() if that is clearer.\n\n> One thought that comes to mind is that palloc_ptrtype is almost\n> surely going to be used in the style\n> \n> \tmyvariable = palloc_ptrtype(myvariable);\n> \n> and if it's not that it's very likely wrong. So maybe we should cut\n> out the middleman and write something like\n> \n> #define palloc_instantiate(ptr) ((ptr) = (typeof(ptr)) palloc(sizeof(*(ptr))))\n> ...\n> \tpalloc_instantiate(myvariable);\n\nRight, this is sort of what you'd want, really. But it looks like \nstrange C code, since you are modifying the variable even though you are \npassing it by value.\n\nI think the _ptrtype variant isn't that useful anyway, so if it's \nconfusing we can leave it out.\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:31:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "\nOn 27.07.22 01:58, David G. Johnston wrote:\n> Admittedly I'm still getting my head around reading pointer-using code \n> (I get the general concept but haven't had to code them)....\n> \n> - lockrelid = palloc(sizeof(*lockrelid));\n> + lockrelid = palloc_ptrtype(lockrelid);\n> \n> // This definitely seems like an odd idiom until I remembered about \n> short-lived memory contexts and the lost pointers are soon destroyed there.\n> \n> So lockrelid (no star) is a pointer that has an underlying reference \n> that the macro (and the orignal code) resolves via the *\n> \n> I cannot reason out whether the following would be equivalent to the above:\n> \n> lockrelid = palloc_obj(*lockrelid);\n\nI think that would also work.\n\nUltimately, it would be more idiomatic (in Postgres), to write this as\n\nlockrelid = palloc(sizeof(LockRelId));\n\nand thus\n\nlockrelid = palloc_obj(LockRelId);\n\n\n",
"msg_date": "Fri, 12 Aug 2022 09:39:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 12.08.22 09:31, Peter Eisentraut wrote:\n> In talloc, the talloc() function itself allocates an object of a given \n> type. To allocate something of a specified size, you'd use \n> talloc_size(). So those names won't map exactly. I'm fine with \n> palloc_object() if that is clearer.\n\n> I think the _ptrtype variant isn't that useful anyway, so if it's \n> confusing we can leave it out.\n\nI have updated this patch set to rename the _obj() functions to \n_object(), and I have dropped the _ptrtype() variants.\n\nI have also split the patch to put the new API and the example uses into \nseparate patches.",
"msg_date": "Sun, 28 Aug 2022 19:22:39 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On Fri, Aug 12, 2022 at 3:31 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> (Personally, I have always been a bit suspicious about using the name\n> palloc() without memory context semantics in frontend code, but I guess\n> this is wide-spread now.)\n\nI think it would be a good thing to add memory context support to the\nfrontend. We could just put everything in a single context for\nstarters, and then frontend utilities that wanted to create other\ncontexts could do so.\n\nThere are difficulties, though. For instance, memory contexts are\nnodes, and have a NodeTag. And I'm pretty sure we don't want frontend\ncode to know about all the backend node types. My suspicion is that\nmemory context types really shouldn't be node types, but right now,\nthey are. Whether that's the correct view or not, this kind of problem\nmeans it's not a simple lift-and-shift to move the memory context code\ninto src/common. Someone would need to spend some time thinking about\nhow to engineer it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 10:39:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I have updated this patch set to rename the _obj() functions to \n> _object(), and I have dropped the _ptrtype() variants.\n\n> I have also split the patch to put the new API and the example uses into \n> separate patches.\n\nThis patch set seems fine to me, so I've marked it Ready for Committer.\n\nI think serious consideration should be given to back-patching the\n0001 part (that is, addition of the macros). Otherwise we'll have\nto remember not to use these macros in code intended for back-patch,\nand that'll be mighty annoying once we are used to them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 16:13:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Aug 12, 2022 at 3:31 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> (Personally, I have always been a bit suspicious about using the name\n>> palloc() without memory context semantics in frontend code, but I guess\n>> this is wide-spread now.)\n\n> I think it would be a good thing to add memory context support to the\n> frontend. We could just put everything in a single context for\n> starters, and then frontend utilities that wanted to create other\n> contexts could do so.\n\nPerhaps, but I think we should have at least one immediate use-case\nfor multiple contexts in frontend. Otherwise it's just useless extra\ncode. The whole point of memory contexts in the backend is that we\nhave well-defined lifespans for certain types of allocations (executor\nstate, function results, etc); but it's not very clear to me that the\nsame concept will be helpful in any of our frontend programs.\n\n> There are difficulties, though. For instance, memory contexts are\n> nodes, and have a NodeTag. And I'm pretty sure we don't want frontend\n> code to know about all the backend node types. My suspicion is that\n> memory context types really shouldn't be node types, but right now,\n> they are. Whether that's the correct view or not, this kind of problem\n> means it's not a simple lift-and-shift to move the memory context code\n> into src/common. Someone would need to spend some time thinking about\n> how to engineer it.\n\nI don't really think that's much of an issue. We could replace the\nnodetag fields with some sort of magic number and have just as much\nwrong-pointer safety as in the backend. What I do take issue with\nis moving the code into src/common. I think we'd be better off\njust writing a distinct implementation for frontend. For one thing,\nit's not apparent to me that aset.c is a good allocator for frontend\n(and the other two surely are not).\n\nThis is all pretty off-topic for Peter's patch, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 16:25:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 09.09.22 22:13, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I have updated this patch set to rename the _obj() functions to\n>> _object(), and I have dropped the _ptrtype() variants.\n> \n>> I have also split the patch to put the new API and the example uses into\n>> separate patches.\n> \n> This patch set seems fine to me, so I've marked it Ready for Committer.\n\ncommitted\n\n> I think serious consideration should be given to back-patching the\n> 0001 part (that is, addition of the macros). Otherwise we'll have\n> to remember not to use these macros in code intended for back-patch,\n> and that'll be mighty annoying once we are used to them.\n\nYes, the 0001 patch is kept separate so that we can do that when we feel \nthe time is right.\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 08:53:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 09.09.22 22:13, Tom Lane wrote:\n>> I think serious consideration should be given to back-patching the\n>> 0001 part (that is, addition of the macros). Otherwise we'll have\n>> to remember not to use these macros in code intended for back-patch,\n>> and that'll be mighty annoying once we are used to them.\n\n> Yes, the 0001 patch is kept separate so that we can do that when we feel \n> the time is right.\n\nI think the right time is now, or at least as soon as you're\nsatisfied that the buildfarm is happy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 12 Sep 2022 09:49:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 12.09.22 15:49, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 09.09.22 22:13, Tom Lane wrote:\n>>> I think serious consideration should be given to back-patching the\n>>> 0001 part (that is, addition of the macros). Otherwise we'll have\n>>> to remember not to use these macros in code intended for back-patch,\n>>> and that'll be mighty annoying once we are used to them.\n> \n>> Yes, the 0001 patch is kept separate so that we can do that when we feel\n>> the time is right.\n> \n> I think the right time is now, or at least as soon as you're\n> satisfied that the buildfarm is happy.\n\nThis has been done.\n\n\n\n",
"msg_date": "Wed, 14 Sep 2022 06:24:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "I have another little idea that fits well here: repalloc0 and \nrepalloc0_array. These zero out the space added by repalloc. This is a \ncommon pattern in the backend code that is quite hairy to code by hand. \nSee attached patch.",
"msg_date": "Wed, 14 Sep 2022 06:26:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I have another little idea that fits well here: repalloc0 and \n> repalloc0_array. These zero out the space added by repalloc. This is a \n> common pattern in the backend code that is quite hairy to code by hand. \n> See attached patch.\n\n+1 in general --- you've put your finger on something I felt was\nmissing, but couldn't quite identify.\n\nHowever, I'm a bit bothered by the proposed API:\n\n+extern pg_nodiscard void *repalloc0(void *pointer, Size size, Size oldsize);\n\nIt kind of feels that the argument order should be pointer, oldsize, size.\nIt feels even more strongly that people will get the ordering wrong,\nwhichever we choose. Is there a way to make that more bulletproof?\n\nThe only thought that comes to mind offhand is that the only plausible\nuse-case is with size >= oldsize, so maybe an assertion or even a\nruntime check would help to catch getting it backwards. (I notice\nthat your proposed coding will fail rather catastrophically if the\ncaller gets it backwards. An assertion failure would be better.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Sep 2022 00:35:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "I wrote:\n> It kind of feels that the argument order should be pointer, oldsize, size.\n> It feels even more strongly that people will get the ordering wrong,\n> whichever we choose. Is there a way to make that more bulletproof?\n\nActually ... an even-more-terrifyingly-plausible misuse is that the\nsupplied oldsize is different from the actual previous allocation.\nWe should try to check that. In MEMORY_CONTEXT_CHECKING builds\nit should be possible to assert that oldsize == requested_size.\nWe don't have that data if !MEMORY_CONTEXT_CHECKING, but we could\nat least assert that oldsize <= allocated chunk size.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Sep 2022 00:53:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 14.09.22 06:53, Tom Lane wrote:\n> I wrote:\n>> It kind of feels that the argument order should be pointer, oldsize, size.\n>> It feels even more strongly that people will get the ordering wrong,\n>> whichever we choose. Is there a way to make that more bulletproof?\n> \n> Actually ... an even-more-terrifyingly-plausible misuse is that the\n> supplied oldsize is different from the actual previous allocation.\n> We should try to check that. In MEMORY_CONTEXT_CHECKING builds\n> it should be possible to assert that oldsize == requested_size.\n> We don't have that data if !MEMORY_CONTEXT_CHECKING, but we could\n> at least assert that oldsize <= allocated chunk size.\n\nI'm not very familiar with MEMORY_CONTEXT_CHECKING. Where would one get \nthese values?\n\n\n\n",
"msg_date": "Tue, 11 Oct 2022 17:48:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 14.09.22 06:53, Tom Lane wrote:\n>> Actually ... an even-more-terrifyingly-plausible misuse is that the\n>> supplied oldsize is different from the actual previous allocation.\n>> We should try to check that. In MEMORY_CONTEXT_CHECKING builds\n>> it should be possible to assert that oldsize == requested_size.\n>> We don't have that data if !MEMORY_CONTEXT_CHECKING, but we could\n>> at least assert that oldsize <= allocated chunk size.\n\n> I'm not very familiar with MEMORY_CONTEXT_CHECKING. Where would one get \n> these values?\n\nHmm ... the individual allocators have that info, but mcxt.c doesn't\nhave access to it. I guess we could invent an additional \"method\"\nto return the requested size of a chunk, which is only available in\nMEMORY_CONTEXT_CHECKING builds, or maybe in !MEMORY_CONTEXT_CHECKING\nit returns the allocated size instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Oct 2022 12:04:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "On 11.10.22 18:04, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 14.09.22 06:53, Tom Lane wrote:\n>>> Actually ... an even-more-terrifyingly-plausible misuse is that the\n>>> supplied oldsize is different from the actual previous allocation.\n>>> We should try to check that. In MEMORY_CONTEXT_CHECKING builds\n>>> it should be possible to assert that oldsize == requested_size.\n>>> We don't have that data if !MEMORY_CONTEXT_CHECKING, but we could\n>>> at least assert that oldsize <= allocated chunk size.\n> \n>> I'm not very familiar with MEMORY_CONTEXT_CHECKING. Where would one get\n>> these values?\n> \n> Hmm ... the individual allocators have that info, but mcxt.c doesn't\n> have access to it. I guess we could invent an additional \"method\"\n> to return the requested size of a chunk, which is only available in\n> MEMORY_CONTEXT_CHECKING builds, or maybe in !MEMORY_CONTEXT_CHECKING\n> it returns the allocated size instead.\n\nI'm not sure whether that amount of additional work would be useful \nrelative to the size of this patch. Is the patch as it stands now \nmaking the code less robust than what the code is doing now?\n\nIn the meantime, here is an updated patch with the argument order \nswapped and an additional assertion, as previously discussed.",
"msg_date": "Mon, 31 Oct 2022 09:47:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Expand palloc/pg_malloc API"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 11.10.22 18:04, Tom Lane wrote:\n>> Hmm ... the individual allocators have that info, but mcxt.c doesn't\n>> have access to it. I guess we could invent an additional \"method\"\n>> to return the requested size of a chunk, which is only available in\n>> MEMORY_CONTEXT_CHECKING builds, or maybe in !MEMORY_CONTEXT_CHECKING\n>> it returns the allocated size instead.\n\n> I'm not sure whether that amount of additional work would be useful \n> relative to the size of this patch. Is the patch as it stands now \n> making the code less robust than what the code is doing now?\n\nNo. It's slightly annoying that the call sites still have to track\nthe old size of the allocation, but I guess that they have to have\nthat information in order to know that they need to repalloc in the\nfirst place. I agree that this patch does make things easier to\nread and a bit less error-prone.\n\nAlso, I realized that what I proposed above doesn't really work\nanyway for this purpose. Consider\n\n\tptr = palloc(size);\n\t... fill all \"size\" bytes ...\n\tptr = repalloc0(ptr, size, newsize);\n\nwhere the initial size request isn't a power of 2. If production builds\nrely on the initial allocated size not requested size to decide where to\nstart zeroing, this would work (no uninitialized holes) in a debug build,\nbut leave some uninitialized bytes in a production build, which is\nabsolutely horrible. So I guess we have to rely on the callers to\ntrack their requests.\n\n> In the meantime, here is an updated patch with the argument order \n> swapped and an additional assertion, as previously discussed.\n\nI think it'd be worth expending an actual runtime test in repalloc0,\nthat is\n\n\tif (unlikely(oldsize > size))\n\t elog(ERROR, \"invalid repalloc0 call: oldsize %zu, new size %zu\",\n\t oldsize, size);\n\nThis seems cheap compared to the cost of the repalloc+memset, and the\nconsequences of not detecting the error seem pretty catastrophic ---\nthe memset would try to zero almost your whole address space.\n\nNo objections beyond that. I've marked this RFC.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Nov 2022 16:05:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Expand palloc/pg_malloc API"
}
] |
[
{
"msg_contents": "Hi Team,\n\nAppreciate your time to look into this.\n\nI have a requirement, where a user has to be provided DDL access on the\nschema (which is provided to the user) and as there is some development\nwork in process the user has to be provided the read only access on system\ncatalog tables (information_schema and pg_catalog)\n\nI have surfed a lot of materials online, but did not get any solution for\nthe same.\n\nRequest you to share some valuable input on this.\n\nThank You.\n\nRegards,\nChirag Karkera",
"msg_date": "Tue, 17 May 2022 18:24:41 +0530",
"msg_from": "Chirag Karkera <chiragkrkr102@gmail.com>",
"msg_from_op": true,
"msg_subject": "Provide read-only access to system catalog tables"
},
{
"msg_contents": "On Tuesday, May 17, 2022, Chirag Karkera <chiragkrkr102@gmail.com> wrote:\n>\n>\n> the user has to be provided the read only access on system catalog tables\n> (information_schema and pg_catalog)\n>\n\nAll roles have this, no action required.\n\nDavid J.",
"msg_date": "Tue, 17 May 2022 06:10:30 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Provide read-only access to system catalog tables"
},
{
"msg_contents": "Thanks David for your reply!\n\nBut when i created a role i am not able to view objects under\ninformation_schema.*\n\nI mean I am not able to view the data, I can see only the column names.\n\nThanks.\n\nRegards,\nChirag Karkera\n\nOn Tue, May 17, 2022 at 6:40 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Tuesday, May 17, 2022, Chirag Karkera <chiragkrkr102@gmail.com> wrote:\n>>\n>>\n>> the user has to be provided the read only access on system catalog tables\n>> (information_schema and pg_catalog)\n>>\n>\n> All roles have this, no action required.\n>\n> David J.\n>",
"msg_date": "Tue, 17 May 2022 18:51:03 +0530",
"msg_from": "Chirag Karkera <chiragkrkr102@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Provide read-only access to system catalog tables"
},
{
"msg_contents": "On Tue, May 17, 2022 at 6:21 AM Chirag Karkera <chiragkrkr102@gmail.com>\nwrote:\n\n> Thanks David for your reply!\n>\n> But when i created a role i am not able to view objects under\n> information_schema.*\n>\n> I mean I am not able to view the data, I can see only the column names.\n>\n>>\n>>\nWhich goes to demonstrate you have permissions. But information_schema\nuses the permissions of the executing user to decide what to show - it is\npre-filtered (and doesn't address PostgreSQL-only features). If you need\nless restrictive behavior your best bet is to just use the system\ncatalogs. Those give you everything.\n\nDavid J.",
"msg_date": "Tue, 17 May 2022 06:29:31 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Provide read-only access to system catalog tables"
},
{
"msg_contents": "Thank you for the clarification.\n\nWill use the system catalogs tables.\n\nThank You.\n\nRegards,\nChirag Karkera\n\nOn Tue, May 17, 2022 at 6:59 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Tue, May 17, 2022 at 6:21 AM Chirag Karkera <chiragkrkr102@gmail.com>\n> wrote:\n>\n>> Thanks David for your reply!\n>>\n>> But when i created a role i am not able to view objects under\n>> information_schema.*\n>>\n>> I mean I am not able to view the data, I can see only the column names.\n>>\n>>>\n>>>\n> Which goes to demonstrate you have permissions. But information_schema\n> uses the permissions of the executing user to decide what to show - it is\n> pre-filtered (and doesn't address PostgreSQL-only features). If you need\n> less restrictive behavior your best bet is to just use the system\n> catalogs. Those give you everything.\n>\n> David J.\n>",
"msg_date": "Tue, 17 May 2022 19:37:34 +0530",
"msg_from": "Chirag Karkera <chiragkrkr102@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Provide read-only access to system catalog tables"
}
] |
[
{
"msg_contents": "Hi,\n\nWe often hear customers/users asking questions like - How much time\ndoes it take for postgres to recover if it crashes now? How much time\ndoes it take for a PITR to finish if it's started now with a specific\nrecovery target? When will the recovery of a postgres server end? It\nwill be nice if the postgres can \"somehow\" answer these questions. I\nknow this is easier said than done. At a bare minimum, the postgres\ncan scan the WAL from the last checkpoint till end of WAL to see how\nmany WAL records need to be replayed and count in \"some\" IO costs,\naverage redo/replay/apply times etc. and provide an estimate something\nlike \"recovery, if started at this moment, will take approximately X\namount of time\". To answer these questions, postgres needs to have\ninformation about the average replay time of WAL records which depends\non the type of WAL record (replay of different WAL records take\ndifferent amount of time at different times; for instance, replay of a\nWAL record with many FPIs or data blocks touched takes different time\nbased on the shared buffers hit and misses, disk IO etc.). The\npostgres can capture and save average replay time of each WAL record\ntype over a period of time and use it for estimates which is of course\na costly thing for the postgres to do while it's actually recovering.\nOr we can feed in some average disk IO, replay costs, postgres can\nscan the WAL records and provide the estimates.\n\nIf postgres has a way to estimate recovery times, it can also trigger\ncheckpoints based on it to keep the RTO/recovery times under limits.\n\nI know there are lots of unclear points for now but I would like to\nstart a discussion and hear more thoughts from the community. Please\nfeel free to provide your inputs.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 17 May 2022 18:43:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Estimation of recovery times in postgres"
}
] |
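The estimation idea in the thread above is easy to prototype outside the server: count the WAL records pending replay between the last checkpoint and end of WAL (e.g. from pg_waldump output) and weight each record type by an observed average replay cost. A minimal sketch, with all record names, counts, and timings invented for illustration:

```python
# Hypothetical sketch of the recovery-time estimate proposed above: weight
# the count of each pending WAL record type by an average replay cost.
# All record names, counts, and costs below are made up for illustration.

def estimate_recovery_seconds(pending_counts, avg_replay_us):
    """pending_counts: WAL records between last checkpoint and end of WAL,
    keyed by record type; avg_replay_us: observed average replay cost in
    microseconds per record type."""
    total_us = 0.0
    for rec_type, n in pending_counts.items():
        # Use a pessimistic default for record types never measured.
        total_us += n * avg_replay_us.get(rec_type, 100.0)
    return total_us / 1_000_000

pending = {"Heap/INSERT": 500_000, "Btree/INSERT_LEAF": 200_000, "XLOG/FPI": 50_000}
costs = {"Heap/INSERT": 5.0, "Btree/INSERT_LEAF": 8.0, "XLOG/FPI": 40.0}
print(f"recovery, if started now, would take ~{estimate_recovery_seconds(pending, costs):.1f} s")
```

As the message notes, the hard part is keeping `avg_replay_us` honest, since replay cost varies with buffer hits, FPIs, and disk I/O at the time of the crash.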
[
{
"msg_contents": "Greetings,\n\nAn ongoing issue in container environments where Kubernetes is being\nused is that setting the overcommit parameters on the base system will\nimpact all of the processes on that system and not all of them handle\nmalloc failing as gracefully as PG does and may allocate more than what\nthey really need with the expectation that it'll work. Folks have\nalready started hacking around this issue by using things like\nLD_PRELOAD'd libraries to put code between palloc and malloc, but that\ncould cause any malloc to fail and it strikes me that, in an ideal\nworld, we could give users a way to constrain the total amount of memory\nallocated by regular backends (or those where a user is able to control\nhow much memory is allocated, more generally) while not impacting the\nprocesses which keep the server running.\n\nI wanted to bring this general idea up for discussion here to get\nfeedback on the concept before going off to write code for it. Seems\nunlikely that it would be a huge amount of code while there's likely to\nneed to be discussion about how one would configure this and how we\nmight handle parallel query and such.\n\nEmpirically, the LD_PRELOAD hack does, in fact, seem to work, so even a\nsimple approach would be helpful, but certainly there is an angle to\nthis where we might eventually allow certain backends (perhaps certain\nroles, etc) to allocate more and others to not be allowed to allocate as\nmuch, etc. Ideally we would look to go with the simple approch first\nand then we can contemplate making it more complicated in the future and\nnot try to accomplish it all in the first pass. Of course, we should\nkeep in mind such ideas and try to avoid anything which would preclude\nus for adding that flexibility in the future.\n\nThoughts?\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 May 2022 15:42:23 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Limiting memory allocation"
},
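The LD_PRELOAD approach described above boils down to accounting plus a refusal path: track what has been handed out and fail an allocation, as an interposed malloc() would by returning NULL, once a configured ceiling would be exceeded. The real interposition library must be C; the sketch below only models the ledger logic, with names and numbers invented for illustration:

```python
# Illustrative model (not the actual LD_PRELOAD library) of a per-process
# allocation ledger that refuses requests over a ceiling, mirroring how an
# interposed malloc() could return NULL instead of overcommitting.

class AllocationLimiter:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0

    def alloc(self, nbytes):
        if self.used + nbytes > self.limit:
            return False          # the interposed malloc() would return NULL here
        self.used += nbytes
        return True

    def free(self, nbytes):
        self.used = max(0, self.used - nbytes)

lim = AllocationLimiter(limit_bytes=1024)
assert lim.alloc(800)
assert not lim.alloc(400)   # would exceed the ceiling, so fail cleanly
lim.free(800)
assert lim.alloc(400)       # aborting a transaction frees room again
```

The clean-failure property is what makes this attractive for PostgreSQL: a NULL return aborts the transaction and releases its memory, rather than inviting the OOM killer.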
{
"msg_contents": "On 5/17/22 15:42, Stephen Frost wrote:\n> Thoughts?\n\nYes.\n\nThe main and foremost problem is a server that is used for multiple \nservices and they behave differently when it comes to memory allocation. \nOne service just allocates like we have petabytes of RAM, then uses \nlittle of it, while another one is doing precise accounting and uses all \nof that. These two types of services don't coexist well on one system \nwithout intervention.\n\nUnfortunately swap space has been shunned as the ugly stepchild of \nmemory in recent years. It could help in this regard to bring back swap \nspace, but don't really intend to use it.\n\nUsing cgroups one can actually force a certain process (or user, or \nservice) to use swap if and when that service is using more memory than \nit was \"expected\" to use. So I have a server with 64G of RAM. I give 16G \nto Postgres as shared buffers and another 16G to work with. I assume \nanother 16G of OS buffers, so I restrict the Apache-Tomcat stuff running \non it to something like 8-12G. After that, it has to swap. Of course, my \nPostgres processes also will have to swap if they need more than 16G of \noverall workmem ... but that is what I actually intended. I may have to \nreduce workmem, or max_connections, or something else.\n\n\nRegards, Jan\n\n\n",
"msg_date": "Tue, 17 May 2022 16:10:31 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> On 5/17/22 15:42, Stephen Frost wrote:\n>> Thoughts?\n\n> Using cgroups one can actually force a certain process (or user, or \n> service) to use swap if and when that service is using more memory than \n> it was \"expected\" to use.\n\nI wonder if we shouldn't just provide documentation pointing to OS-level\nfacilities like that one. The kernel has a pretty trivial way to check\nthe total memory used by a process. We don't: it'd require tracking total\nspace used in all our memory contexts, and then extracting some number out\nof our rear ends for allocations made directly from malloc. In short,\nanything we do here will be slow and unreliable, unless you want to depend\non platform-specific things like looking at /proc/self/maps.\n\nulimit might be interesting to check into as well. The last time I\nlooked, it wasn't too helpful for this on Linux, but that was years ago.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 18:11:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Greetings,\n\nOn Tue, May 17, 2022 at 18:12 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jan Wieck <jan@wi3ck.info> writes:\n> > On 5/17/22 15:42, Stephen Frost wrote:\n> >> Thoughts?\n>\n> > Using cgroups one can actually force a certain process (or user, or\n> > service) to use swap if and when that service is using more memory than\n> > it was \"expected\" to use.\n>\n> I wonder if we shouldn't just provide documentation pointing to OS-level\n> facilities like that one. The kernel has a pretty trivial way to check\n> the total memory used by a process. We don't: it'd require tracking total\n> space used in all our memory contexts, and then extracting some number out\n> of our rear ends for allocations made directly from malloc. In short,\n> anything we do here will be slow and unreliable, unless you want to depend\n> on platform-specific things like looking at /proc/self/maps.\n\n\nThis isn’t actually a solution though and that’s the problem- you end up\nusing swap but if you use more than “expected” the OOM killer comes in and\nhappily blows you up anyway. Cgroups are containers and exactly what kube\nis doing.\n\nI agree with the general statement that it would be better for the kernel\nto do this, and a patch was written for it but then rejected by the kernel\nfolks. I’m hoping to push on that with the kernel developers but they\nseemed pretty against this and that’s quite unfortunate.\n\nAs for the performance concern and other mallocs: For the former, thanks to\nour memory contexts, I don’t expect it to be all that much of an issue as\nthe actual allocations we do aren’t all that frequently done and apparently\na relatively trivial implementation was done and performance was tested and\nit was claimed that there was basically negligible impact. 
Sadly that code\nisn’t open (yet… this is under discussion, supposedly) but my understanding\nwas that they just used a simple bit of shared memory to keep the count.\nAs for the latter, we could at least review the difference between our\ncount and actual memory allocated and see how big that difference is in\nsome testing (which might be enlightening anyway..) and review our direct\nmallocs and see if there’s a real concern there. Naturally this approach\nwould necessitate some amount less than the total amount of memory\navailable being used by PG anyway, but that could certainly be desirable in\nsome scenarios where there are other processes running and to ensure not\nall of the filesystem cache is ejected.\n\nulimit might be interesting to check into as well. The last time I\n> looked, it wasn't too helpful for this on Linux, but that was years ago.\n\n\nUnfortunately I really don’t think anything here has materially changed in\na way which would help us. This would also apply across all of PG’s\nprocesses and I would think it’d be nice to differentiate between user\nbackends running away and sucking up a ton of memory vs backend processes\nthat shouldn’t be constrained in this way.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 17 May 2022 18:30:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Limiting memory allocation"
},
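The "simple bit of shared memory to keep the count" mentioned above can be modeled as one counter that every backend checks under a lock before allocating. A hedged sketch, where Python's multiprocessing shared value stands in for PostgreSQL shared memory and the budget is made up:

```python
import multiprocessing as mp

# Sketch of a cluster-wide allocation counter kept in shared memory:
# each process adds to the shared total before allocating and backs off
# if the budget would be exceeded. The budget value is invented.

BUDGET = 1_000_000  # cluster-wide ceiling in bytes, made up for illustration

def try_alloc(counter, nbytes, budget=BUDGET):
    # counter is an mp.Value shared between the postmaster's children here
    with counter.get_lock():
        if counter.value + nbytes > budget:
            return False        # allocation refused; caller aborts cleanly
        counter.value += nbytes
        return True

def release(counter, nbytes):
    with counter.get_lock():
        counter.value -= nbytes

used = mp.Value('q', 0)
assert try_alloc(used, 600_000)
assert not try_alloc(used, 600_000)   # second large request exceeds the budget
release(used, 600_000)
assert try_alloc(used, 600_000)
```

This matches the claimed negligible overhead: the hot path is one locked add per allocation, and PostgreSQL's memory contexts batch most small allocations anyway.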
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> On Tue, May 17, 2022 at 18:12 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ulimit might be interesting to check into as well. The last time I\n>> looked, it wasn't too helpful for this on Linux, but that was years ago.\n\n> Unfortunately I really don’t think anything here has materially changed in\n> a way which would help us. This would also apply across all of PG’s\n> processes and I would think it’d be nice to differentiate between user\n> backends running away and sucking up a ton of memory vs backend processes\n> that shouldn’t be constrained in this way.\n\nIt may well be that they've not fixed its shortcomings, but the claim\nthat it couldn't be applied selectively is nonsense. See setrlimit(2),\nwhich we already use successfully (AFAIK) to set stack space on a\nper-process basis.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 May 2022 18:36:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
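setrlimit(2), raised above, is exposed in Python's `resource` module, which makes it easy to experiment with per-process address-space limits on a Unix system. The sketch below only reads the current RLIMIT_AS values and re-applies them unchanged, so running it alters nothing; the commented-out line shows where a real per-backend cap would go:

```python
import resource

# Per-process limits via setrlimit(2). Reading is always safe; the set
# call below re-applies the existing soft limit, so it is a no-op.

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("address-space limit (soft, hard):", soft, hard)

# A real deployment might lower the soft limit in each backend, e.g.:
#   resource.setrlimit(resource.RLIMIT_AS, (8 * 1024**3, hard))
resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
assert resource.getrlimit(resource.RLIMIT_AS) == (soft, hard)
```

As the follow-up messages note, the catch is that this caps each process individually, not the set of processes under the postmaster.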
{
"msg_contents": "On 5/17/22 18:30, Stephen Frost wrote:\n> Greetings,\n> \n> On Tue, May 17, 2022 at 18:12 Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> Jan Wieck <jan@wi3ck.info <mailto:jan@wi3ck.info>> writes:\n> > On 5/17/22 15:42, Stephen Frost wrote:\n> >> Thoughts?\n> \n> > Using cgroups one can actually force a certain process (or user, or\n> > service) to use swap if and when that service is using more\n> memory than\n> > it was \"expected\" to use.\n> \n> I wonder if we shouldn't just provide documentation pointing to OS-level\n> facilities like that one. The kernel has a pretty trivial way to check\n> the total memory used by a process. We don't: it'd require tracking\n> total\n> space used in all our memory contexts, and then extracting some\n> number out\n> of our rear ends for allocations made directly from malloc. In short,\n> anything we do here will be slow and unreliable, unless you want to\n> depend\n> on platform-specific things like looking at /proc/self/maps.\n> \n> \n> This isn’t actually a solution though and that’s the problem- you end up \n> using swap but if you use more than “expected” the OOM killer comes in \n> and happily blows you up anyway. Cgroups are containers and exactly what \n> kube is doing.\n\nMaybe I'm missing something, but what is it that you would actually \nconsider a solution? Knowing your current memory consumption doesn't \nmake the need for allocating some right now go away. What do you \nenvision the response of PostgreSQL to be if we had that information \nabout resource pressure? I don't see us using mallopt(3) or \nmalloc_trim(3) anywhere in the code, so I don't think any of our \nprocesses give back unused heap at this point (please correct me if I'm \nwrong). 
This means that even if we knew about the memory pressure of the \nsystem, adjusting things like work_mem on the fly may not do much at \nall, unless there is a constant turnover of backends.\n\nSo what do you propose PostgreSQL's response to high memory pressure to be?\n\n\nRegards, Jan\n\n\n",
"msg_date": "Wed, 18 May 2022 10:23:34 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Le mercredi 18 mai 2022, 16:23:34 CEST Jan Wieck a écrit :\n> On 5/17/22 18:30, Stephen Frost wrote:\n> > Greetings,\n> > \n> > On Tue, May 17, 2022 at 18:12 Tom Lane <tgl@sss.pgh.pa.us\n> > \n> > <mailto:tgl@sss.pgh.pa.us>> wrote:\n> > Jan Wieck <jan@wi3ck.info <mailto:jan@wi3ck.info>> writes:\n> > > On 5/17/22 15:42, Stephen Frost wrote:\n> > >> Thoughts?\n> > > \n> > > Using cgroups one can actually force a certain process (or user, or\n> > > service) to use swap if and when that service is using more\n> > \n> > memory than\n> > \n> > > it was \"expected\" to use.\n> > \n> > I wonder if we shouldn't just provide documentation pointing to\n> > OS-level\n> > facilities like that one. The kernel has a pretty trivial way to\n> > check\n> > the total memory used by a process. We don't: it'd require tracking\n> > total\n> > space used in all our memory contexts, and then extracting some\n> > number out\n> > of our rear ends for allocations made directly from malloc. In short,\n> > anything we do here will be slow and unreliable, unless you want to\n> > depend\n> > on platform-specific things like looking at /proc/self/maps.\n> > \n> > This isn’t actually a solution though and that’s the problem- you end up\n> > using swap but if you use more than “expected” the OOM killer comes in\n> > and happily blows you up anyway. Cgroups are containers and exactly what\n> > kube is doing.\n> \n> Maybe I'm missing something, but what is it that you would actually\n> consider a solution? Knowing your current memory consumption doesn't\n> make the need for allocating some right now go away. What do you\n> envision the response of PostgreSQL to be if we had that information\n> about resource pressure? I don't see us using mallopt(3) or\n> malloc_trim(3) anywhere in the code, so I don't think any of our\n> processes give back unused heap at this point (please correct me if I'm\n> wrong). 
This means that even if we knew about the memory pressure of the\n> system, adjusting things like work_mem on the fly may not do much at\n> all, unless there is a constant turnover of backends.\n\nI'm not sure I understand your point: when we free() a pointer, malloc is \nallowed to release the corresponding memory to the kernel. In the case of \nglibc, it doesn't necessarily do so, but it trims the top of the heap if it is \nin excess of M_TRIM_THRESHOLD. In the default glibc configuration, this \nparameter is dynamically adjusted by mmap itself, to a maximum value of 64MB \nIIRC. So any memory freed on the top of the heap totalling more than that \nthreshold ends up actually freed.\n\nIn another thread, I proposed to take control over this tuning instead of \nletting malloc do it itself, as we may have better knowledge of the memory \nallocations pattern than what malloc empirically discovers: in particular, we \ncould lower work_mem, adjust the threshold and maybe even call malloc_trim \nourselves when work_mem is lowered, to reduce the padding we may keep.\n\n> \n> So what do you propose PostgreSQL's response to high memory pressure to be?\n> \n> \n> Regards, Jan\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Wed, 18 May 2022 16:40:32 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On 2022-May-18, Jan Wieck wrote:\n\n> Maybe I'm missing something, but what is it that you would actually consider\n> a solution? Knowing your current memory consumption doesn't make the need\n> for allocating some right now go away. What do you envision the response of\n> PostgreSQL to be if we had that information about resource pressure?\n\nWhat was mentioned in the talk where this issue was presented, is that\npeople would like malloc() to return NULL when there's memory pressure,\neven if Linux has been configured indicating that memory overcommit is\nOK. The reason they can't set overcommit off is that it prevents other\nservices in the same system from running properly.\n\nAs I understand, setrlimit() sets the memory limit for any single\nprocess. But that isn't useful -- the limit needed is for the whole set\nof processes under postmaster. Limiting any individual process does no\ngood.\n\nNow that's where cgroup's memory limiting features would prove useful,\nif they weren't totally braindead:\nhttps://www.kernel.org/doc/Documentation/cgroup-v2.txt\nApparently, if the cgroup goes over the \"high\" limit, the processes are\n*throttled*. Then if the group goes over the \"max\" limit, OOM-killer is\ninvoked.\n\n(I can't see any way to make this even more counterproductive to the\ndatabase use case. Making the database work more slowly doesn't fix\nanything.)\n\nSo ditch cgroups.\n\n\nWhat they (Timescale) do, is have a LD_PRELOAD library that checks\nstatus of memory pressure, and return NULL from malloc(). This then\nleads to clean abort of transactions and all is well. There's nothing\nthat Postgres needs to do different than today.\n\nI suppose that what they would like, is a way to inquire into the memory\npressure status at MemoryContextAlloc() time and return NULL if it is\ntoo high. 
How exactly this would work is unclear to me; maybe one\nprocess keeps an eye on it in an OS-specific manner, and if it does get\nnear the maximum, set a bit in shared memory that other processes can\nexamine when MemoryContextAlloc is called. It doesn't have to be\nexactly accurate; an approximation is probably okay.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 18 May 2022 17:11:34 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > On Tue, May 17, 2022 at 18:12 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ulimit might be interesting to check into as well. The last time I\n> >> looked, it wasn't too helpful for this on Linux, but that was years ago.\n> \n> > Unfortunately I really don’t think anything here has materially changed in\n> > a way which would help us. This would also apply across all of PG’s\n> > processes and I would think it’d be nice to differentiate between user\n> > backends running away and sucking up a ton of memory vs backend processes\n> > that shouldn’t be constrained in this way.\n> \n> It may well be that they've not fixed its shortcomings, but the claim\n> that it couldn't be applied selectively is nonsense. See setrlimit(2),\n> which we already use successfully (AFAIK) to set stack space on a\n> per-process basis.\n\nYeah, that thought was quite properly formed, sorry for the confusion.\n\nThat it's per-process is actually the issue, unless we were to split\nup what we're given evenly across max_connections or such, which might\nwork but would surely end up wasting an unfortunate amount of memory.\n\nConsider:\n\nshared_buffers = 8G\nmax_memory = 8G\nmax_connections = 1000 (for easy math)\n\nWith setrlimit(2), we could at process start of all user backends set\nRLIMIT_AS to 8G + 8G/1000 (8M) + some fudge for code, stack, etc, \nmeaning each process would only be allowed about 8M of memory for\nwork space, even though there's perhaps only 10 processes running,\nresulting in over 7G of memory that PG should be able to use, but isn't.\n\nMaybe we could do some tracking of per-process actual memory usage of\nalready running processes and consider that when starting new ones and\neven allow processes to change their limit if they hit it, depending on\nwhat else is going on in the system, but I'm really not sure that all of\nthis would end up being that much 
more efficient than just directly\ntracking allocations and failing when we hit them ourselves, and it sure\nseems like it'd be a lot more complicated.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 May 2022 11:19:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Limiting memory allocation"
},
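The arithmetic in the message above is worth reproducing: dividing a fixed memory budget evenly across max_connections strands most of it when only a few backends are active.

```python
# Reproducing the per-backend budget math from the message above.
GB = 1024**3
max_memory = 8 * GB          # budget for backend work space
max_connections = 1000       # "for easy math", as above

per_backend = max_memory // max_connections   # roughly 8 MB each
active_backends = 10
stranded = max_memory - active_backends * per_backend

print(f"per-backend budget: {per_backend / 1024**2:.1f} MB")
print(f"unusable with only {active_backends} active backends: {stranded / GB:.2f} GB")
```

With ten active backends, over 7 GB of the 8 GB budget sits unused, which is the waste the message is pointing at.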
{
"msg_contents": "On 5/18/22 11:11, Alvaro Herrera wrote:\n> On 2022-May-18, Jan Wieck wrote:\n> \n>> Maybe I'm missing something, but what is it that you would actually consider\n>> a solution? Knowing your current memory consumption doesn't make the need\n>> for allocating some right now go away. What do you envision the response of\n>> PostgreSQL to be if we had that information about resource pressure?\n> \n> What was mentioned in the talk where this issue was presented, is that\n> people would like malloc() to return NULL when there's memory pressure,\n> even if Linux has been configured indicating that memory overcommit is\n> OK. The reason they can't set overcommit off is that it prevents other\n> services in the same system from running properly.\n\nThank you Alvaro, that was the missing piece. Now I understand what we \nare trying to do.\n\n> As I understand, setrlimit() sets the memory limit for any single\n> process. But that isn't useful -- the limit needed is for the whole set\n> of processes under postmaster. Limiting any individual process does no\n> good.\n> \n> Now that's where cgroup's memory limiting features would prove useful,\n> if they weren't totally braindead:\n> https://www.kernel.org/doc/Documentation/cgroup-v2.txt\n> Apparently, if the cgroup goes over the \"high\" limit, the processes are\n> *throttled*. Then if the group goes over the \"max\" limit, OOM-killer is\n> invoked.\n> \n> (I can't see any way to make this even more counterproductive to the\n> database use case. Making the database work more slowly doesn't fix\n> anything.)\n> \n> So ditch cgroups.\n\nAgreed.\n\n> What they (Timescale) do, is have a LD_PRELOAD library that checks\n> status of memory pressure, and return NULL from malloc(). This then\n> leads to clean abort of transactions and all is well. 
There's nothing\n> that Postgres needs to do different than today.\n> \n> I suppose that what they would like, is a way to inquire into the memory\n> pressure status at MemoryContextAlloc() time and return NULL if it is\n> too high. How exactly this would work is unclear to me; maybe one\n> process keeps an eye on it in an OS-specific manner, and if it does get\n> near the maximum, set a bit in shared memory that other processes can\n> examine when MemoryContextAlloc is called. It doesn't have to be\n> exactly accurate; an approximation is probably okay.\n\nCorrect, it doesn't have to be accurate. Something /proc based setting a \nflag in shared memory WOULD be good enough, IF MemoryContextAlloc() had \nsome way of figuring out that its process is actually the right one to \nabort.\n\nOn a high transaction throughput system, having such a background \nprocess being the only one setting and clearing a flag in shared memory \ncould prove disastrous. Let it check and set/clear the flag every second \n... the whole system would throw malloc(3) failures for a whole second \non every session. Not the system I would like to benchmark ... although \nthe result charts would look hilarious.\n\nHowever, once we are under memory pressure to the point of aborting \ntransactions, it may be reasonable to have MemoryContextAlloc() calls \nwork through a queue and return NULL one by one until the pressure is \nlow enough again.\n\nI'll roll this problem around in my head for a little longer. There \ncertainly is a way to do this a bit more intelligent.\n\n\nThanks again, Jan\n\n\n",
"msg_date": "Wed, 18 May 2022 11:41:54 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Greetings,\n\n* Jan Wieck (jan@wi3ck.info) wrote:\n> On 5/17/22 18:30, Stephen Frost wrote:\n> >This isn’t actually a solution though and that’s the problem- you end up\n> >using swap but if you use more than “expected” the OOM killer comes in and\n> >happily blows you up anyway. Cgroups are containers and exactly what kube\n> >is doing.\n> \n> Maybe I'm missing something, but what is it that you would actually consider\n> a solution? Knowing your current memory consumption doesn't make the need\n> for allocating some right now go away. What do you envision the response of\n> PostgreSQL to be if we had that information about resource pressure? I don't\n> see us using mallopt(3) or malloc_trim(3) anywhere in the code, so I don't\n> think any of our processes give back unused heap at this point (please\n> correct me if I'm wrong). This means that even if we knew about the memory\n> pressure of the system, adjusting things like work_mem on the fly may not do\n> much at all, unless there is a constant turnover of backends.\n> \n> So what do you propose PostgreSQL's response to high memory pressure to be?\n\nFail the allocation, just how most PG systems are set up to do. In such\na case, PG will almost always be able to fail the transaction, free up\nthe memory used, and continue running *without* ending up with a crash.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 May 2022 12:43:19 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On 5/18/22 11:11, Alvaro Herrera wrote:\n> Now that's where cgroup's memory limiting features would prove useful,\n> if they weren't totally braindead:\n> https://www.kernel.org/doc/Documentation/cgroup-v2.txt\n> Apparently, if the cgroup goes over the \"high\" limit, the processes are\n> *throttled*. Then if the group goes over the \"max\" limit, OOM-killer is\n> invoked.\n> \n> (I can't see any way to make this even more counterproductive to the\n> database use case. Making the database work more slowly doesn't fix\n> anything.)\n\nYou may be misinterpreting \"throttle\" in this context. From [1]:\n\n The memory.high boundary on the other hand can be set\n much more conservatively. When hit, it throttles\n allocations by forcing them into direct reclaim to\n work off the excess, but it never invokes the OOM\n killer.\n\n> So ditch cgroups.\n\nYou cannot ditch cgroups if you are running in a container. And in fact \nmost non-container installations these days are also running in a cgroup \nunder systemd.\n\nThe only difference is that you are more likely to see a memory limit \nset in a container than under systemd.\n\n[1] \nhttps://github.com/torvalds/linux/blob/master/Documentation/admin-guide/cgroup-v2.rst\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 18 May 2022 15:38:21 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
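A watcher process following the cgroup v2 route discussed above would poll the group's memory.current against memory.high. The helpers below parse sample file contents rather than touching /sys/fs/cgroup, so the logic is self-contained; the 90% slack threshold is an arbitrary choice for illustration:

```python
# Sketch of reading cgroup v2 memory accounting. On a real system the
# inputs come from /sys/fs/cgroup/<group>/memory.current and memory.high;
# sample strings are used here so the logic runs anywhere.

def parse_memory_file(text):
    text = text.strip()
    return None if text == "max" else int(text)

def near_high_limit(current_text, high_text, slack=0.9):
    current = parse_memory_file(current_text)
    high = parse_memory_file(high_text)
    if high is None:          # "max" means no boundary is configured
        return False
    return current >= slack * high

assert near_high_limit("950000000\n", "1000000000\n")
assert not near_high_limit("100000000\n", "1000000000\n")
assert not near_high_limit("950000000\n", "max\n")
```

Such a watcher could set a flag in shared memory when the group approaches memory.high, giving backends a chance to fail allocations cleanly before the kernel starts forcing direct reclaim.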
{
"msg_contents": "On 2022-May-18, Joe Conway wrote:\n\n> On 5/18/22 11:11, Alvaro Herrera wrote:\n\n> > Apparently, if the cgroup goes over the \"high\" limit, the processes are\n> > *throttled*. Then if the group goes over the \"max\" limit, OOM-killer is\n> > invoked.\n\n> You may be misinterpreting \"throttle\" in this context. From [1]:\n> \n> The memory.high boundary on the other hand can be set\n> much more conservatively. When hit, it throttles\n> allocations by forcing them into direct reclaim to\n> work off the excess, but it never invokes the OOM\n> killer.\n\nWell, that means the backend processes don't do their expected task\n(process some query) but instead they have to do \"direct reclaim\". I\ndon't know what that is, but it sounds like we'd need to add\nLinux-specific code in order for this to fix anything. And what would\nwe do in such a situation anyway? Seems like our best hope would be to\nget malloc() to return NULL and have the resulting transaction abort\nfree enough memory that things in other backends can continue to run.\n\n*If* there is a way to have cgroups make Postgres do that, then that\nwould be useful enough.\n\n> > So ditch cgroups.\n> \n> You cannot ditch cgroups if you are running in a container. And in fact most\n> non-container installations these days are also running in a cgroup under\n> systemd.\n\nI just meant that the cgroup abstraction doesn't offer any interfaces\nthat we can use to improve this, not that we would be running without\nthem.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n",
"msg_date": "Wed, 18 May 2022 22:20:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On 5/18/22 16:20, Alvaro Herrera wrote:\n> On 2022-May-18, Joe Conway wrote:\n> \n>> On 5/18/22 11:11, Alvaro Herrera wrote:\n> \n>> > Apparently, if the cgroup goes over the \"high\" limit, the processes are\n>> > *throttled*. Then if the group goes over the \"max\" limit, OOM-killer is\n>> > invoked.\n> \n>> You may be misinterpreting \"throttle\" in this context. From [1]:\n>> \n>> The memory.high boundary on the other hand can be set\n>> much more conservatively. When hit, it throttles\n>> allocations by forcing them into direct reclaim to\n>> work off the excess, but it never invokes the OOM\n>> killer.\n> \n> Well, that means the backend processes don't do their expected task\n> (process some query) but instead they have to do \"direct reclaim\". I\n> don't know what that is, but it sounds like we'd need to add\n> Linux-specific code in order for this to fix anything. \n\nPostgres does not need to do anything. The kernel just does its thing \n(e.g. clearing page cache or swapping out anon memory) more aggressively \nthan normal to clear up some space for the impending allocation.\n\n> And what would we do in such a situation anyway? Seems like our\n> best hope would be to> get malloc() to return NULL and have the\n> resulting transaction abort free enough memory that things in other\n> backends can continue to run.\n\nWith the right hooks an extension could detect the memory pressure in an \nOS specific way and return null.\n\n> *If* there is a way to have cgroups make Postgres do that, then that\n> would be useful enough.\n\nMemory accounting under cgroups (particularly v2) can provide the signal \nneeded for a Linux specific extension to do that.\n\n>> > So ditch cgroups.\n>> \n>> You cannot ditch cgroups if you are running in a container. 
And in fact most\n>> non-container installations these days are also running in a cgroup under\n>> systemd.\n> \n> I just meant that the cgroup abstraction doesn't offer any interfaces\n> that we can use to improve this, not that we would be running without\n> them.\n\nI agree that cgroups is very Linux specific, so likely we would not want \nsuch code in core.\n\n\n-- \nJoe Conway\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 18 May 2022 16:49:24 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "> On Wed, May 18, 2022 at 04:49:24PM -0400, Joe Conway wrote:\n> On 5/18/22 16:20, Alvaro Herrera wrote:\n> > On 2022-May-18, Joe Conway wrote:\n> >\n> > > On 5/18/22 11:11, Alvaro Herrera wrote:\n> >\n> > > > Apparently, if the cgroup goes over the \"high\" limit, the processes are\n> > > > *throttled*. Then if the group goes over the \"max\" limit, OOM-killer is\n> > > > invoked.\n> >\n> > > You may be misinterpreting \"throttle\" in this context. From [1]:\n> > >\n> > > The memory.high boundary on the other hand can be set\n> > > much more conservatively. When hit, it throttles\n> > > allocations by forcing them into direct reclaim to\n> > > work off the excess, but it never invokes the OOM\n> > > killer.\n> >\n> > Well, that means the backend processes don't do their expected task\n> > (process some query) but instead they have to do \"direct reclaim\". I\n> > don't know what that is, but it sounds like we'd need to add\n> > Linux-specific code in order for this to fix anything.\n>\n> Postgres does not need to do anything. The kernel just does its thing (e.g.\n> clearing page cache or swapping out anon memory) more aggressively than\n> normal to clear up some space for the impending allocation.\n>\n> > And what would we do in such a situation anyway? 
Seems like our\n> > best hope would be to> get malloc() to return NULL and have the\n> > resulting transaction abort free enough memory that things in other\n> > backends can continue to run.\n>\n> With the right hooks an extension could detect the memory pressure in an OS\n> specific way and return null.\n>\n> > *If* there is a way to have cgroups make Postgres do that, then that\n> > would be useful enough.\n>\n> Memory accounting under cgroups (particularly v2) can provide the signal\n> needed for a Linux specific extension to do that.\n\nTo elaborate a bit on this, Linux PSI feature (in the context of\ncontainers, cgroups v2 only) [1] would allow a userspace application to\nregister a trigger on memory pressure exceeding some threshold. The\npressure here is not exactly how much memory is allocated, but rather\nmemory stall, and the whole machinery would involve polling -- but still\nsounds interesting in the context of this thread.\n\n[1]: https://www.kernel.org/doc/Documentation/accounting/psi.rst\n\n\n",
"msg_date": "Thu, 19 May 2022 09:43:38 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Hi,\n\n> On 18. May 2022, at 17:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2022-May-18, Jan Wieck wrote:\n> \n>> Maybe I'm missing something, but what is it that you would actually consider\n>> a solution? Knowing your current memory consumption doesn't make the need\n>> for allocating some right now go away. What do you envision the response of\n>> PostgreSQL to be if we had that information about resource pressure?\n> \n> \n> What they (Timescale) do, is have a LD_PRELOAD library that checks\n> status of memory pressure, and return NULL from malloc(). This then\n> leads to clean abort of transactions and all is well. There's nothing\n> that Postgres needs to do different than today.\n\nCorrect. The library we have reads a limit supplied in an environment variable\nand stores per-process and total memory usage values in shared memory counters,\nupdated after each call to malloc/free/calloc/realloc by the process making the\ncall. When updating totals, a process picks one of N counters to update\natomically with the difference between its old and new memory usage, avoiding\ncongested ones; those are summed to determine current allocations for all\nprocesses and to compare against the limit.\n\n> \n> I suppose that what they would like, is a way to inquire into the memory\n> pressure status at MemoryContextAlloc() time and return NULL if it is\n> too high.\n\nIf we call user code just before malloc (and, presumably free and realloc), the\ncode would have to do just as much work as when it is called from the\nmalloc/free/realloc wrappers inside a preloaded library. 
Furthermore, I don’t\nsee why the user would want to customize that logic: a single Linux-specific\nimplementation would solve the problem for everyone.\n\n\n> How exactly this would work is unclear to me; maybe one\n> process keeps an eye on it in an OS-specific manner,\n\nWe don’t need to do anything for non-Linux systems, as cgroups and OOM\nkiller don’t exist there.\n\n\n> and if it does get\n> near the maximum, set a bit in shared memory that other processes can\n> examine when MemoryContextAlloc is called.  It doesn't have to be\n> exactly accurate; an approximation is probably okay.\n\nWhat would be the purpose of setting a bit in shared memory when the maximum is\nabout to be reached?\n\nWhat would be useful is a way for Postgres to count the amount of memory\nallocated by each backend. This could be advantageous for giving per-backend\nmemory usage to the user, as well as for enforcing a limit on the total amount\nof memory allocated by the backends.\n\n—\nOleksii Kliukin\n\n",
"msg_date": "Fri, 20 May 2022 19:36:38 +0200",
"msg_from": "Oleksii Kliukin <alexk@hintbits.com>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Greetings,\n\n* Oleksii Kliukin (alexk@hintbits.com) wrote:\n> > On 18. May 2022, at 17:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2022-May-18, Jan Wieck wrote:\n> >> Maybe I'm missing something, but what is it that you would actually consider\n> >> a solution? Knowing your current memory consumption doesn't make the need\n> >> for allocating some right now go away. What do you envision the response of\n> >> PostgreSQL to be if we had that information about resource pressure?\n> > \n> > What they (Timescale) do, is have a LD_PRELOAD library that checks\n> > status of memory pressure, and return NULL from malloc(). This then\n> > leads to clean abort of transactions and all is well. There's nothing\n> > that Postgres needs to do different than today.\n\nI'm not super fixated on exactly what this one implementation does, but\nrather that the kernel is evidently not interested in trying to solve\nthis problem and therefore it's something which we need to address. I\nagree in general that we don't need to do much different except to have\na way to effectively have a limit where we treat an allocation attempt\nas failing and then the rest of our existing machinery will handle\nfailing the transaction and doing cleanup and such just fine.\n\n> Correct. The library we have reads a limit supplied in an environment variable\n> and stores per-process and total memory usage values in shared memory counters,\n> updated after each call to malloc/free/calloc/realloc by the process making the\n> call. When updating totals, a process picks one of N counters to update\n> atomically with the difference between its old and new memory usage, avoiding\n> congested ones; those are summed to determine current allocations for all\n> processes and to compare against the limit.\n\nWould be interesting to know just how many of these counters are used\nand how 'congested' ones are avoided. 
Though, would certainly be easier\nif one could simply review this library.\n\n> > I suppose that what they would like, is a way to inquire into the memory\n> > pressure status at MemoryContextAlloc() time and return NULL if it is\n> > too high.\n\nNot really concerned with what one specific implementation that's been\ndone would like but rather with solving the larger issue that exists,\nwhich is that we aren't able to cap our memory usage today and that can\nlead to the OOM killer coming into play, or excessive swap usage, or\ncausing issue for other processes running. While I started this with\nthe crash case as the main concern, and I do feel it's still a big case\nto consider, there are other valuable use-cases to consider where this\nwould help.\n\n> If we call user code just before malloc (and, presumably free and realloc), the\n> code would have to do just as much work as when it is called from the\n> malloc/free/realloc wrappers inside a preloaded library. Furthermore, I don’t\n> see why the user would want to customize that logic: a single Linux-specific\n> implementation would solve the problem for everyone.\n\nIf the problem is explicitly defined as \"deal with the Linux OOM killer\"\nthen, yes, a Linux-specific fix would address that. I do think that's\ncertainly an important, and perhaps the most important, issue that this\nsolves, but there's other cases where this would be really helpful.\n\n> > How exactly this would work is unclear to me; maybe one\n> > process keeps an eye on it in an OS-specific manner,\n\nThere seems to be a lot of focus on trying to implement this as \"get the\namount of free memory from the OS and make sure we don't go over that\nlimit\" and that adds a lot of OS-specific logic which complicates things\nand also ignores the use-cases where an admin wishes to limit PG's\nmemory usage due to other processes running on the same system. 
I'll\npoint out that the LD_PRELOAD library doesn't even attempt to do this,\neven though it's explicitly for Linux, but uses an environment variable\ninstead.\n\nIn PG, we'd have that be a GUC that an admin is able to set and then we\ntrack the memory usage (perhaps per-process, perhaps using some set of\nbuckets, perhaps locally per-process and then in a smaller number of\nbuckets in shared memory, or something else) and fail an allocation when\nit would go over that limit, perhaps only when it's a regular user\nbackend or with other conditions around it.\n\n> What would be useful is a way for Postgres to count the amount of memory\n> allocated by each backend. This could be advantageous for giving per-backend\n> memory usage to the user, as well as for enforcing a limit on the total amount\n> of memory allocated by the backends.\n\nI agree that this would be independently useful. \n\nThanks,\n\nStephen",
"msg_date": "Fri, 20 May 2022 15:50:52 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On 5/20/22 21:50, Stephen Frost wrote:\n> Greetings,\n> \n> ...\n>\n>>> How exactly this would work is unclear to me; maybe one\n>>> process keeps an eye on it in an OS-specific manner,\n> \n> There seems to be a lot of focus on trying to implement this as \"get the\n> amount of free memory from the OS and make sure we don't go over that\n> limit\" and that adds a lot of OS-specific logic which complicates things\n> and also ignores the use-cases where an admin wishes to limit PG's\n> memory usage due to other processes running on the same system. I'll\n> point out that the LD_PRELOAD library doesn't even attempt to do this,\n> even though it's explicitly for Linux, but uses an environment variable\n> instead.\n> \n> In PG, we'd have that be a GUC that an admin is able to set and then we\n> track the memory usage (perhaps per-process, perhaps using some set of\n> buckets, perhaps locally per-process and then in a smaller number of\n> buckets in shared memory, or something else) and fail an allocation when\n> it would go over that limit, perhaps only when it's a regular user\n> backend or with other conditions around it.\n> \n\nI agree a GUC setting a memory target is a sensible starting point.\n\nI wonder if we might eventually use this to define memory budgets. 
One\nof the common questions I get is how do you restrict the user from\nsetting work_mem too high or doing too many memory-hungry things.\nCurrently there's no way to do that, because we have no way to limit\nwork_mem values, and even if we had the user could construct a more\ncomplex query with more memory-hungry operations.\n\nBut I think it's also that we weren't sure what to do after hitting a\nlimit - should we try replanning the query with lower work_mem value, or\nwhat?\n\nHowever, if just failing the malloc() is acceptable, maybe we could use\nthis to achieve something like this?\n\n>> What would be useful is a way for Postgres to count the amount of memory\n>> allocated by each backend. This could be advantageous for giving per-backend\n>> memory usage to the user, as well as for enforcing a limit on the total amount\n>> of memory allocated by the backends.\n> \n> I agree that this would be independently useful.\n> \n\nWell, we already have the memory-accounting built into the memory\ncontext infrastructure. It kinda does the same thing as the malloc()\nwrapper, except that it does not publish the information anywhere and\nit's per-context (so we have to walk the context recursively).\n\nSo maybe we could make this on-request somehow? Say, we'd send a signal to\nthe process, and it'd run MemoryContextMemAllocated() on the top memory\ncontext and store the result somewhere.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 21 May 2022 01:08:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On 5/20/22 19:08, Tomas Vondra wrote:\n> Well, we already have the memory-accounting built into the memory\n> context infrastructure. It kinda does the same thing as the malloc()\n> wrapper, except that it does not publish the information anywhere and\n> it's per-context (so we have to walk the context recursively).\n> \n> So maybe we could make this on-request somehow? Say, we'd a signal to\n> the process, and it'd run MemoryContextMemAllocated() on the top memory\n> context and store the result somewhere.\n\nOne remaining problem with all this is that we don't get any feedback \nfrom calling free() telling if any memory has been returned to the OS or \nnot.\n\nHow portable would using sbrk() with a zero size be? If that is an \noption then I could envision counting actual calls to malloc() and \nwhenever a GUC configurable amount is reached, use sbrk() to find out, \ndo the accounting in shared memory and react accordingly.\n\nThis way we are not creating another highly contended lock and use \nauthoritative information.\n\n\nRegards, Jan\n\n\n",
"msg_date": "Mon, 23 May 2022 08:59:33 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On Fri, May 20, 2022 at 7:09 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I wonder if we might eventually use this to define memory budgets. One\n> of the common questions I get is how do you restrict the user from\n> setting work_mem too high or doing too much memory-hungry things.\n> Currently there's no way to do that, because we have no way to limit\n> work_mem values, and even if we had the user could construct a more\n> complex query with more memory-hungry operations.\n>\n> But I think it's also that we weren't sure what to do after hitting a\n> limit - should we try replanning the query with lower work_mem value, or\n> what?\n\nIt's always seemed to me that the principled thing to do would be to\nmake work_mem a per-query budget rather than a per-node budget, and\nhave add_path() treat memory usage as an independent figure of merit\n-- and also discard any paths that went over the memory budget. Thus\nwe might keep more expensive paths if they use less memory to produce\nthe result. For this to work well, memory-hungry nodes would probably\nneed to add multiple paths - especially nodes that do hashing, which\nis likely to have breakpoints where the estimated cost changes sharply\n(and the actual cost does too, if the row counts are accurate).\n\nI've also wondered whether we could maybe do something unprincipled\ninstead, because that all sounds not only complicated but also\npotentially expensive, if it results in us keeping extra paths around\ncompared to what we keep today. It might be worth it, though.\nGenerating query plans infinitely fast is no good if the plans suck,\nand running the machine out of RAM definitely counts as sucking.\n\nMy general feeling about this topic is that, in cases where PostgreSQL\ntoday uses more memory than is desirable, it's probably only\nmoderately difficult to make it fail with a nice error message\ninstead. 
Making it succeed by altering its behavior to use less memory\nseems likely to be a lot harder -- which is not to say that we\nshouldn't try to do it. It's an important problem. Just not an easy\none.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 11:49:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On Tue, May 24, 2022 at 11:49:27AM -0400, Robert Haas wrote:\n> It's always seemed to me that the principled thing to do would be to\n> make work_mem a per-query budget rather than a per-node budget, and\n> have add_path() treat memory usage as an independent figure of merit\n> -- and also discard any paths that went over the memory budget. Thus\n> we might keep more expensive paths if they use less memory to produce\n> the result. For this to work well, memory-hungry nodes would probably\n> need to add multiple paths - especially nodes that do hashing, which\n> is likely to have breakpoints where the estimated cost changes sharply\n> (and the actual cost does too, if the row counts are accurate).\n> \n> I've also wondered whether we could maybe do something unprincipled\n> instead, because that all sounds not only complicated but also\n> potentially expensive, if it results in us keeping extra paths around\n> compared to what we keep today. It might be worth it, though.\n> Generating query plans infinitely fast is no good if the plans suck,\n> and running the machine out of RAM definitely counts as sucking.\n\nIf the plan output is independent of work_mem, I always wondered why we\ndidn't just determine the number of simultaneous memory requests in the\nplan and just allocate accordingly, e.g. if there are four simultaneous\nmemory requests in the plan, each gets work_mem/4.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 19:04:58 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> If the plan output is independent of work_mem,\n\n... it isn't ...\n\n> I always wondered why we\n> didn't just determine the number of simultaneous memory requests in the\n> plan and just allocate accordingly, e.g. if there are four simultaneous\n> memory requests in the plan, each gets work_mem/4.\n\n(1) There are not a predetermined number of allocations. For example,\nif we do a given join as nestloop+inner index scan, that doesn't require\nany large amount of memory; but if we do it as merge or hash join then\nit will consume memory.\n\n(2) They may not all need the same amount of memory, eg joins might\nbe working on different amounts of data.\n\nIf this were an easy problem to solve, we'd have solved it decades\nago.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 19:40:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > If the plan output is independent of work_mem,\n> \n> ... it isn't ...\n\nGood.\n\n> > I always wondered why we\n> > didn't just determine the number of simultaneous memory requests in the\n> > plan and just allocate accordingly, e.g. if there are four simultaneous\n> > memory requests in the plan, each gets work_mem/4.\n> \n> (1) There are not a predetermined number of allocations. For example,\n> if we do a given join as nestloop+inner index scan, that doesn't require\n> any large amount of memory; but if we do it as merge or hash join then\n> it will consume memory.\n\nUh, we know from the plan whether we are doing a nestloop+inner or merge\nor hash join, right? I was suggesting we look at the plan before\nexecution and set the proper percentage of work_mem for each node.\n\n> (2) They may not all need the same amount of memory, eg joins might\n> be working on different amounts of data.\n\nTrue. but we could cap it like we do now for work_mem, but as a\npercentage of a GUC work_mem total.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 20:08:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote:\n>> (1) There are not a predetermined number of allocations. For example,\n>> if we do a given join as nestloop+inner index scan, that doesn't require\n>> any large amount of memory; but if we do it as merge or hash join then\n>> it will consume memory.\n\n> Uh, we know from the plan whether we are doing a nestloop+inner or merge\n> or hash join, right? I was suggesting we look at the plan before\n> execution and set the proper percentage of work_mem for each node.\n\nThen you just rendered all the planner's estimates fantasies.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 21:20:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On Tue, May 24, 2022 at 09:20:43PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, May 24, 2022 at 07:40:45PM -0400, Tom Lane wrote:\n> >> (1) There are not a predetermined number of allocations. For example,\n> >> if we do a given join as nestloop+inner index scan, that doesn't require\n> >> any large amount of memory; but if we do it as merge or hash join then\n> >> it will consume memory.\n> \n> > Uh, we know from the plan whether we are doing a nestloop+inner or merge\n> > or hash join, right? I was suggesting we look at the plan before\n> > execution and set the proper percentage of work_mem for each node.\n> \n> Then you just rendered all the planner's estimates fantasies.\n\nThat's what I was asking --- if the planner's estimates are based on the\nsize of work_mem --- I thought you said it is not.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 21:35:19 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Tue, May 24, 2022 at 09:20:43PM -0400, Tom Lane wrote:\n>> Then you just rendered all the planner's estimates fantasies.\n\n> That's what I was asking --- if the planner's estimates are based on the\n> size of work_mem --- I thought you said it is not.\n\nThe planner's estimates certainly vary with work_mem ... I was responding\nto something you said that seemed to be asserting they didn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 21:55:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
},
{
"msg_contents": "On Tue, May 24, 2022 at 09:55:16PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > On Tue, May 24, 2022 at 09:20:43PM -0400, Tom Lane wrote:\n> >> Then you just rendered all the planner's estimates fantasies.\n> \n> > That's what I was asking --- if the planner's estimates are based on the\n> > size of work_mem --- I thought you said it is not.\n> \n> The planner's estimates certainly vary with work_mem ... I was responding\n> to something you said that seemed to be asserting they didn't.\n\nI see where I got confused:\n\n\t> If the plan output is independent of work_mem,\n\t... it isn't ...\n\nOkay, then my idea will not work.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 22:00:14 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Limiting memory allocation"
}
] |
[
{
"msg_contents": "Hello,\n\nThis is a fresh thread to continue the discussion on ALTER TABLE SET\nACCESS METHOD when applied to partition roots, as requested.\n\nCurrent behavior (HEAD):\n\nCREATE TABLE am_partitioned(x INT, y INT)\n PARTITION BY hash (x);\nALTER TABLE am_partitioned SET ACCESS METHOD heap2;\nERROR: cannot change access method of a partitioned table\n\nPotential behavior options:\n\n(A) Don't recurse to existing children and ensure that the new am gets\ninherited by any new children. (ALTER TABLE SET TABLESPACE behavior)\n\n(B) Recurse to existing children and modify their am. Also, ensure that\nany new children inherit the new am.\n\nA patch [1] was introduced earlier by Justin to implement\n(A). v1-0001-Allow-ATSETAM-on-partition-roots.patch contains a rebase\nof that patch against latest HEAD, with minor updates on comments and\nsome additional test coverage.\n\nI think that (B) is necessary for partition hierarchies with a high\nnumber of partitions. One typical use case in Greenplum, for\ninstance, is to convert heap tables containing cold data to append-only\nstorage at the root or subroot level of partition hierarchies consisting\nof thousands of partitions. Asking users to ALTER individual partitions\nis cumbersome and error-prone.\n\nFurthermore, I believe that (B) should be the default and (A) can be\nchosen by using the ONLY clause. This would give us the best of both\nworlds and would make the use of ONLY consistent. The patch\nv1-0002-Make-ATSETAM-recurse-by-default.patch achieves that.\n\nThoughts?\n\nRegards,\nSoumyadeep (VMware)\n\n\n[1]\nhttps://www.postgresql.org/message-id/20210308010707.GA29832%40telsasoft.com",
"msg_date": "Tue, 17 May 2022 17:10:27 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Thanks for copying me.\n\nI didn't look closely yet, but this comment is wrong:\n\n+ * Since these have no storage the tablespace can be updated with a simple \n+ * metadata only operation to update the tablespace. \n\nAs I see it, AMs are a strong parallel to tablespaces. The default tablespace\nis convenient: 1) explicitly specified tablespace; 2) tablespace of parent,\npartitioned table; 3) DB tablespace; 4) default_tablespace:\nhttps://www.postgresql.org/message-id/20190423222633.GA8364%40alvherre.pgsql\n\nIt'd be convenient if AMs worked the same way (and a bit odd that they don't).\nNote that in v15, pg_dump/restore now allow --no-table-am, an exact parallel to\n--no-tablespace.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 18 May 2022 18:14:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, May 18, 2022 at 4:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> I didn't look closely yet, but this comment is wrong:\n>\n> + * Since these have no storage the tablespace can be updated with a\nsimple\n\n\n> + * metadata only operation to update the tablespace.\n\n\n\nGood catch. Fixed.\n\n> It'd be convenient if AMs worked the same way (and a bit odd that they\ndon't).\n> Note that in v15, pg_dump/restore now allow --no-table-am, an exact\nparallel to\n> --no-tablespace.\n\nI agree that ATSET AM should behave in a similar fashion to ATSET\ntablespaces.\nHowever, the way that ATSET tablespace currently behaves is not consistent\nwith\nthe ONLY clause.\n\nOn a given partition root:\nALTER TABLE ONLY am_partitioned SET TABLESPACE ts;\nhas the same effect as:\nALTER TABLE am_partitioned SET TABLESPACE ts;\n\nWe are missing out on the feature to set the AM/tablespace throughout the\npartition hierarchy, with one command.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Wed, 18 May 2022 17:48:45 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, May 18, 2022 at 5:49 PM Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n>\n> On Wed, May 18, 2022 at 4:14 PM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n> > I didn't look closely yet, but this comment is wrong:\n> >\n> > + * Since these have no storage the tablespace can be updated with a\n> simple\n>\n>\n> > + * metadata only operation to update the tablespace.\n>\n>\n>\n>\n> Good catch. Fixed.\n>\n> > It'd be convenient if AMs worked the same way (and a bit odd that they\n> don't).\n> > Note that in v15, pg_dump/restore now allow --no-table-am, an exact\n> parallel to\n> > --no-tablespace.\n>\n> I agree that ATSET AM should behave in a similar fashion to ATSET\n> tablespaces.\n> However, the way that ATSET tablespace currently behaves is not consistent\n> with\n> the ONLY clause.\n>\n> On a given partition root:\n> ALTER TABLE ONLY am_partitioned SET TABLESPACE ts;\n> has the same effect as:\n> ALTER TABLE am_partitioned SET TABLESPACE ts;\n>\n> We are missing out on the feature to set the AM/tablespace throughout the\n> partition hierarchy, with one command.\n>\n> Regards,\n> Soumyadeep (VMware)\n>\n> Hi,\n\n+ accessMethodId = ((Form_pg_class) GETSTRUCT(tup))->relam;\n\n- /* look up the access method, verify it is for a table */\n- if (accessMethod != NULL)\n- accessMethodId = get_table_am_oid(accessMethod, false);\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for relation %u\", relid);\n\nThe validity check of tup should be done before fetching the value of\nrelam field.\n\nCheers\n",
"msg_date": "Wed, 18 May 2022 18:31:46 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, May 18, 2022 at 6:26 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> + accessMethodId = ((Form_pg_class) GETSTRUCT(tup))->relam;\n>\n> - /* look up the access method, verify it is for a table */\n> - if (accessMethod != NULL)\n> - accessMethodId = get_table_am_oid(accessMethod, false);\n> + if (!HeapTupleIsValid(tup))\n> + elog(ERROR, \"cache lookup failed for relation %u\", relid);\n>\n> The validity check of tup should be done before fetching the value of\nrelam field.\n\nThanks. Fixed and rebased.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Thu, 9 Jun 2022 11:21:58 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 11:21:58AM -0700, Soumyadeep Chakraborty wrote:\n> Thanks. Fixed and rebased.\n\nI think that I am OK with the concept of this patch to use a\npartitioned table's relam as a reference when creating a partition\nrather than relying on the default GUC, in a way similar to\ntablespaces.\n\nOne worry I see is that forcing a recursion on the leaves on ALTER\nTABLE could silently break partitions where multiple table AMs are\nused across separate partitions if ALTER TABLE SET ACCESS METHOD is\nused on one of the parents, though it seems like this is not something\nI would much worry about as now the command is an error.\n\nA second worry is that we would just break existing creation flows\nthat rely on the GUC defining the default AM. This is worth a close\nlookup at pg_dump to make sure that we do things correctly with this\npatch in place.. Did you check dump and restore flows with partition\ntrees and --no-table-access-method? Perhaps there should be\nsome regression tests with partitioned tables?\n\n+ /*\n+ * For partitioned tables, when no access method is specified, we\n+ * default to the parent table's AM.\n+ */\n+ Assert(list_length(inheritOids) == 1);\n+ /* XXX: should implement get_rel_relam? */\n+ relid = linitial_oid(inheritOids);\n+ tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for relation %u\", relid);\n\nHaving a wrapper for that could be useful, yes. We don't have any\ncode paths that would directly need that now, from what I can see,\nthough. This patch gives one reason to have one.\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 09:30:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 09:30:50AM +0900, Michael Paquier wrote:\n> Did you check dump and restore flows with partition\n> trees and --no-table-access-method? Perhaps there should be\n> some regression tests with partitioned tables?\n\nI was looking at the patch, and as I suspected the dumps generated\nare forgetting to apply the AM to the partitioned tables. For\nexample, assuming that the default AM is heap, the patch creates\nchild_0_10 with heap2 as AM, which is what we want:\nCREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\nCREATE TABLE parent_tab (id int) PARTITION BY RANGE (id) USING heap2;\nCREATE TABLE child_0_10 PARTITION OF parent_tab\n FOR VALUES FROM (0) TO (10);\n\nHowever a dump produces that (content cut except for its most relevant\nparts):\nCREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;\nSET default_tablespace = '';\nCREATE TABLE public.parent_tab (\n id integer\n)\nPARTITION BY RANGE (id);\nSET default_table_access_method = heap2;\nCREATE TABLE public.child_0_10 (\n id integer\n);\n\nThis would restore the previous contents incorrectly, where parent_tab\nwould use heap and child_0_10 would use heap2, causing any partitions\ncreated after the restore to use silently heap. This is going to\nrequire a logic similar to tablespaces, where generate SET commands\non default_table_access_method so as --no-table-access-method in\npg_dump and pg_restore are able to work correctly. Having tests to\ncover all that is a must-have.\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 09:13:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
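The tablespace-style fix described in the message above would make pg_dump track the AM of the partitioned table itself. A hypothetical sketch of what such a dump could look like (names taken from the example above; the exact output depends on the pg_dump changes still under discussion in this thread):

```sql
-- Hypothetical dump shape, mirroring how default_tablespace is handled:
CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;

SET default_table_access_method = heap2;

-- The partitioned table now records heap2 as its AM, so partitions
-- created after the restore inherit heap2 rather than silently
-- falling back to heap.
CREATE TABLE public.parent_tab (
    id integer
)
PARTITION BY RANGE (id);

CREATE TABLE public.child_0_10 (
    id integer
);
```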
{
"msg_contents": "On Tue, Mar 28, 2023 at 09:13:10AM +0900, Michael Paquier wrote:\n> On Mon, Mar 20, 2023 at 09:30:50AM +0900, Michael Paquier wrote:\n> > Did you check dump and restore flows with partition\n> > trees and --no-table-access-method? Perhaps there should be\n> > some regression tests with partitioned tables?\n> \n> I was looking at the patch, and as I suspected the dumps generated\n> are forgetting to apply the AM to the partitioned tables.\n\nThe patch said:\n\n+ else if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n\npg_dump was missing a similar change that's conditional on\nRELKIND_HAS_TABLE_AM(). It looks like those are the only two places\nthat need be be specially handled.\n\nI dug up my latest patch from 2021 and incorporated the changes from the\n0001 patch here, and added a test case.\n\nI realized that one difference with tablespaces is that, as written,\npartitioned tables will *always* have an AM specified, and partitions\nwill never use default_table_access_method. Is that what's intended ?\n\nOr do we need logic similar tablespaces, such that the relam of a\npartitioned table is set only if it differs from default_table_am ?\n\n-- \nJustin",
"msg_date": "Mon, 27 Mar 2023 23:34:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 11:34:35PM -0500, Justin Pryzby wrote:\n> I realized that one difference with tablespaces is that, as written,\n> partitioned tables will *always* have an AM specified, and partitions\n> will never use default_table_access_method. Is that what's intended ?\n>\n> Or do we need logic similar tablespaces, such that the relam of a\n> partitioned table is set only if it differs from default_table_am ?\n\nHmm. This is a good point. It is true that the patch feels\nincomplete on this side. I don't see why we could not be flexible,\nand allow a value of 0 in a partitioned table's relam to mean that we\nwould pick up the database default in this case when a partition is\nis created on it. This would have the advantage to be consistent with\nolder versions where we fallback on the default. We cannot be\ncompletely consistent with the reltablespace of the leaf partitions\nunfortunately, as relam should always be set if a relation has\nstorage. And allowing a value of 0 means that there are likely other\ntricky cases with dumps?\n\nAnother thing: would it make sense to allow an empty string in\ndefault_table_access_method so as we'd always fallback to a database\ndefault?\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 14:56:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 02:56:28PM +0900, Michael Paquier wrote:\n> Hmm. This is a good point. It is true that the patch feels\n> incomplete on this side. I don't see why we could not be flexible,\n> and allow a value of 0 in a partitioned table's relam to mean that we\n> would pick up the database default in this case when a partition is\n> is created on it. This would have the advantage to be consistent with\n> older versions where we fallback on the default. We cannot be\n> completely consistent with the reltablespace of the leaf partitions\n> unfortunately, as relam should always be set if a relation has\n> storage. And allowing a value of 0 means that there are likely other\n> tricky cases with dumps?\n> \n> Another thing: would it make sense to allow an empty string in\n> default_table_access_method so as we'd always fallback to a database\n> default?\n\nFYI, I am not sure that I will be able to look more at this patch by\nthe end of the commit fest, and there are quite more points to\nconsider. Perhaps at this stage we'd better mark it as returned with\nfeedback? I understand that I've arrived late at this party :/\n--\nMichael",
"msg_date": "Wed, 29 Mar 2023 09:18:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 11:34:36PM -0500, Justin Pryzby wrote:\n> On Tue, Mar 28, 2023 at 09:13:10AM +0900, Michael Paquier wrote:\n> > On Mon, Mar 20, 2023 at 09:30:50AM +0900, Michael Paquier wrote:\n> > > Did you check dump and restore flows with partition\n> > > trees and --no-table-access-method? Perhaps there should be\n> > > some regression tests with partitioned tables?\n> > \n> > I was looking at the patch, and as I suspected the dumps generated\n> > are forgetting to apply the AM to the partitioned tables.\n> \n> The patch said:\n> \n> + else if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n> \n> pg_dump was missing a similar change that's conditional on\n> RELKIND_HAS_TABLE_AM(). It looks like those are the only two places\n> that need be be specially handled.\n> \n> I dug up my latest patch from 2021 and incorporated the changes from the\n> 0001 patch here, and added a test case.\n> \n> I realized that one difference with tablespaces is that, as written,\n> partitioned tables will *always* have an AM specified, and partitions\n> will never use default_table_access_method. Is that what's intended ?\n> \n> Or do we need logic similar tablespaces, such that the relam of a\n> partitioned table is set only if it differs from default_table_am ?\n\nActually .. I think it'd be a mistake if the relam needed to be\ninterpretted differently for partitioned tables than other relkinds.\n\nI updated the patch to allow intermediate partitioned tables to inherit\nrelam from their parent.\n\nMichael wrote:\n> .. and there are quite more points to consider.\n\nWhat more points ? There's nothing that's been raised here. In fact,\nyour message last week is the first comment since last June ..\n\n-- \nJustin",
"msg_date": "Thu, 30 Mar 2023 00:07:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 12:07:58AM -0500, Justin Pryzby wrote:\n> On Mon, Mar 27, 2023 at 11:34:36PM -0500, Justin Pryzby wrote:\n> > On Tue, Mar 28, 2023 at 09:13:10AM +0900, Michael Paquier wrote:\n> > > On Mon, Mar 20, 2023 at 09:30:50AM +0900, Michael Paquier wrote:\n> > > > Did you check dump and restore flows with partition\n> > > > trees and --no-table-access-method? Perhaps there should be\n> > > > some regression tests with partitioned tables?\n> > > \n> > > I was looking at the patch, and as I suspected the dumps generated\n> > > are forgetting to apply the AM to the partitioned tables.\n> > \n> > The patch said:\n> > \n> > + else if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n> > \n> > pg_dump was missing a similar change that's conditional on\n> > RELKIND_HAS_TABLE_AM(). It looks like those are the only two places\n> > that need be be specially handled.\n> > \n> > I dug up my latest patch from 2021 and incorporated the changes from the\n> > 0001 patch here, and added a test case.\n> > \n> > I realized that one difference with tablespaces is that, as written,\n> > partitioned tables will *always* have an AM specified, and partitions\n> > will never use default_table_access_method. Is that what's intended ?\n> > \n> > Or do we need logic similar tablespaces, such that the relam of a\n> > partitioned table is set only if it differs from default_table_am ?\n> \n> Actually .. I think it'd be a mistake if the relam needed to be\n> interpretted differently for partitioned tables than other relkinds.\n> \n> I updated the patch to allow intermediate partitioned tables to inherit\n> relam from their parent.\n> \n> Michael wrote:\n> > .. and there are quite more points to consider.\n> \n> What more points ? There's nothing that's been raised here. In fact,\n> your message last week is the first comment since last June ..\n\nMichael ?\n\n\n",
"msg_date": "Mon, 24 Apr 2023 19:18:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 12:07:58AM -0500, Justin Pryzby wrote:\n> What more points ? There's nothing that's been raised here. In fact,\n> your message last week is the first comment since last June ..\n\nWhen I wrote this message, I felt like this may still be missing\nsomething in the area of dump/restore. Perhaps my feeling on the\nmatter is wrong, so consider this as a self-reminder not to be taken\nseriously until I can have a closer look at what's proposed here for\nv17. :p\n--\nMichael",
"msg_date": "Tue, 25 Apr 2023 14:17:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Apr 24, 2023 at 07:18:54PM -0500, Justin Pryzby wrote:\n> On Thu, Mar 30, 2023 at 12:07:58AM -0500, Justin Pryzby wrote:\n>> Actually .. I think it'd be a mistake if the relam needed to be\n>> interpretted differently for partitioned tables than other relkinds.\n>>\n>> I updated the patch to allow intermediate partitioned tables to inherit\n>> relam from their parent.\n>\n> Michael ?\n\nSorry for dropping the subject for so long. I have spent some time\nlooking at the patch. Here are a few comments.\n\npg_class.h includes the following:\n\n/*\n * Relation kinds that support tablespaces: All relation kinds with storage\n * support tablespaces, except that we don't support moving sequences around\n * into different tablespaces. Partitioned tables and indexes don't have\n * physical storage, but they have a tablespace settings so that their\n * children can inherit it.\n */\n#define RELKIND_HAS_TABLESPACE(relkind) \\\n ((RELKIND_HAS_STORAGE(relkind) || RELKIND_HAS_PARTITIONS(relkind)) \\\n && (relkind) != RELKIND_SEQUENCE)\n[...]\n/*\n * Relation kinds with a table access method (rd_tableam). Although sequences\n * use the heap table AM, they are enough of a special case in most uses that\n * they are not included here.\n */\n#define RELKIND_HAS_TABLE_AM(relkind) \\\n ((relkind) == RELKIND_RELATION || \\\n (relkind) == RELKIND_TOASTVALUE || \\\n (relkind) == RELKIND_MATVIEW)\n\nIt would look much more consistent with the tablespace case if we\nincluded partitioned tables in this case, but this refers to\nrd_tableam for the relcache which we surely don't want to fill for\npartitioned tables. 
I guess that at minimum a comment is in order?\nRELKIND_HAS_TABLE_AM() is much more spread than\nRELKIND_HAS_TABLESPACE().\n\n * No need to add an explicit dependency for the toast table, as the\n * main table depends on it.\n */\n- if (RELKIND_HAS_TABLE_AM(relkind) && relkind != RELKIND_TOASTVALUE)\n+ if ((RELKIND_HAS_TABLE_AM(relkind) && relkind != RELKIND_TOASTVALUE) ||\n+ relkind == RELKIND_PARTITIONED_TABLE)\n\nThe comment at the top of this code block needs an update.\n\n if (stmt->accessMethod != NULL)\n+ accessMethodId = get_table_am_oid(stmt->accessMethod, false);\n else if (stmt->partbound &&\n+ (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE))\n {\n+ /*\n+ * For partitions, if no access method is specified, default to the AM\n+ * of the parent table.\n+ */\n+ Assert(list_length(inheritOids) == 1);\n+ accessMethodId = get_rel_relam(linitial_oid(inheritOids));\n+ if (!OidIsValid(accessMethodId))\n+ accessMethodId = get_table_am_oid(default_table_access_method, false);\n }\n+ else if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n+ accessMethodId = get_table_am_oid(default_table_access_method, false);\n\nThis structure seems a bit weird. Could it be cleaner to group the\nsecond and third blocks together? I would imagine:\nif (accessMethod != NULL)\n{\n //Extract the AM defined in the statement\n}\nelse\n{\n //This is a relkind that can use a default table AM.\n if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n {\n if (stmt->partbound)\n\t{\n\t //This is a partition, so look at what its partitioned\n\t //table holds.\n\t}\n\telse\n\t{\n\t //No partition, grab the default.\n\t}\n }\n}\n\n+ /*\n+ * Only do this for partitioned tables, for which this is just a\n+ * catalog change. 
Tables with storage are handled by Phase 3.\n+ */\n+ if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ ATExecSetAccessMethodNoStorage(rel, tab->newAccessMethod);\n\nOkay, there is a parallel with tablespaces in this logic.\n\nSpeaking of which, ATExecSetAccessMethodNoStorage() does a catalog\nupdate even if ALTER TABLE is defined to use the same table AM as what\nis currently set. There is no need to update the relation's pg_class\nentry in this case. Be careful that InvokeObjectPostAlterHook() still\nneeds to be checked in this case. Perhaps some tests should be added\nin test_oat_hooks.sql? It would be tempted to add a new SQL file for\nthat.\n\n+ else if (relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+ {\n+ /* Do nothing: it's a catalog settings for partitions to inherit */\n+ }\nActually, we could add an assertion telling that rd_rel->relam will\nalways be valid.\n\n- if (RELKIND_HAS_TABLE_AM(tbinfo->relkind))\n+ if (RELKIND_HAS_TABLE_AM(tbinfo->relkind) ||\n+ tbinfo->relkind == RELKIND_PARTITIONED_TABLE)\n tableam = tbinfo->amname;\nI have spent some time pondering on this particular change, concluding\nthat it should be OK. It passes my tests, as well.\n\n+-- partition hierarchies\n+-- upon ALTER, new children will inherit the new am, whereas the existing\n+-- children will remain untouched\n CREATE TABLE am_partitioned(x INT, y INT)\n PARTITION BY hash (x);\n+CREATE TABLE am_partitioned_1 PARTITION OF am_partitioned FOR VALUES WITH (MODULUS 3,REMAINDER 0);\n+CREATE TABLE am_partitioned_2 PARTITION OF am_partitioned FOR VALUES WITH (MODULUS 3,REMAINDER 1);\n+ALTER TABLE am_partitioned_1 SET ACCESS METHOD heap2;\n ALTER TABLE am_partitioned SET ACCESS METHOD heap2;\n\nHmm. I think that we should rewrite a bit this test rather than just\nadding contents on top of it. 
There is also an extra test I would be\ninterested in seeing here: a partition tree with 2 levels of depth, an ALTER\nTABLE SET ACCESS METHOD run at level 1 on a partitioned table, and\nsome new partitions attached to it to check that the new partitions\ninherit from the level 1 partitioned table, not the top-parent.\n\nalter_table.sgml should be updated to explain what happens when SET\nACCESS METHOD is applied on a partitioned table. See for example SET\nTABLESPACE, which explains what happens to partitions created\nafterwards, noting that there is no rewrite in this case.\n\nThe regression test added to check pg_dump with a partition tree and\nthe two table AMs was mixed with an existing one, but it seems to me\nthat it should be independent of the rest? I have tweaked that as in\nthe attached, on the way, using one partition that relies on the\ndefault defined by the parent, and a second that has a USING clause on\nheap. I did not touch the rest of the code.\n--\nMichael",
"msg_date": "Thu, 25 May 2023 15:49:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
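The extra test requested in the review above (two levels of partitioning, with SET ACCESS METHOD run at level 1) could be sketched as follows. This is an illustrative sketch of the intended inheritance behavior, not the committed regression test, and the table names are made up:

```sql
CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;

CREATE TABLE top_parent (a int, b int) PARTITION BY RANGE (a);
CREATE TABLE mid_parent PARTITION OF top_parent
    FOR VALUES FROM (0) TO (100) PARTITION BY RANGE (b);

-- Change the AM at level 1 only; top_parent keeps its own AM.
ALTER TABLE mid_parent SET ACCESS METHOD heap2;

-- Under the behavior being discussed, a partition created under
-- mid_parent afterwards should inherit heap2 from its immediate
-- parent, not the AM of top_parent.
CREATE TABLE leaf PARTITION OF mid_parent FOR VALUES FROM (0) TO (10);

SELECT c.relname, a.amname
  FROM pg_class c JOIN pg_am a ON a.oid = c.relam
 WHERE c.relname = 'leaf';
```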
{
"msg_contents": "On Thu, May 25, 2023 at 03:49:12PM +0900, Michael Paquier wrote:\n> looking at the patch. Here are a few comments.\n\n...\n> * No need to add an explicit dependency for the toast table, as the\n> * main table depends on it.\n> */\n> - if (RELKIND_HAS_TABLE_AM(relkind) && relkind != RELKIND_TOASTVALUE)\n> + if ((RELKIND_HAS_TABLE_AM(relkind) && relkind != RELKIND_TOASTVALUE) ||\n> + relkind == RELKIND_PARTITIONED_TABLE)\n> \n> The comment at the top of this code block needs an update.\n\nWhat do you think the comment ought to say ? It already says:\n\nsrc/backend/catalog/heap.c- * Make a dependency link to force the relation to be deleted if its\nsrc/backend/catalog/heap.c- * access method is.\n\n> Speaking of which, ATExecSetAccessMethodNoStorage() does a catalog\n> update even if ALTER TABLE is defined to use the same table AM as what\n> is currently set. There is no need to update the relation's pg_class\n> entry in this case. Be careful that InvokeObjectPostAlterHook() still\n> needs to be checked in this case. Perhaps some tests should be added\n> in test_oat_hooks.sql? It would be tempted to add a new SQL file for\n> that.\n\nAre you suggesting to put this in a conditional: if oldrelam!=newAccessMethod ?\n\n+ ((Form_pg_class) GETSTRUCT(tuple))->relam = newAccessMethod;\n+ CatalogTupleUpdate(pg_class, &tuple->t_self, tuple);\n+\n+ /* Update dependency on new AM */\n+ changeDependencyFor(RelationRelationId, relid, AccessMethodRelationId,\n+ oldrelam, newAccessMethod);\n\nWhy is that desirable ? I'd prefer to see it written with fewer\nconditionals, not more; then, it hits the same path every time.\n\nThis ought to address your other comments.\n\n-- \nJustin",
"msg_date": "Wed, 31 May 2023 18:35:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, May 31, 2023 at 06:35:34PM -0500, Justin Pryzby wrote:\n> What do you think the comment ought to say ? It already says:\n>\n> src/backend/catalog/heap.c- * Make a dependency link to force the relation to be deleted if its\n> src/backend/catalog/heap.c- * access method is.\n\nThis is the third location where we rely on the fact that\nRELKIND_HAS_TABLE_AM() does not include RELKIND_PARTITIONED_TABLE, so\nit seems worth documenting what we are relying on as a comment? Say:\n * Make a dependency link to force the relation to be deleted if its\n * access method is.\n *\n * No need to add an explicit dependency for the toast table, as the\n * main table depends on it. Partitioned tables have a table access\n * method defined, and RELKIND_HAS_TABLE_AM ignores them.\n\n>> Speaking of which, ATExecSetAccessMethodNoStorage() does a catalog\n>> update even if ALTER TABLE is defined to use the same table AM as what\n>> is currently set. There is no need to update the relation's pg_class\n>> entry in this case. Be careful that InvokeObjectPostAlterHook() still\n>> needs to be checked in this case. Perhaps some tests should be added\n>> in test_oat_hooks.sql? It would be tempted to add a new SQL file for\n>> that.\n>\n> Are you suggesting to put this in a conditional: if oldrelam!=newAccessMethod ?\n\nYes, that's what I would add with a few lines close to the beginning\nof ATExecSetTableSpaceNoStorage() to save from catalog updates if\nthese are not needed.\n--\nMichael",
"msg_date": "Thu, 1 Jun 2023 08:50:50 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Have you had a chance to address the comments raised by Michael in his last\nreview such that a new patch revision can be submitted?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 10:54:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Jun 01, 2023 at 08:50:50AM -0400, Michael Paquier wrote:\n> >> Speaking of which, ATExecSetAccessMethodNoStorage() does a catalog\n> >> update even if ALTER TABLE is defined to use the same table AM as what\n> >> is currently set. There is no need to update the relation's pg_class\n> >> entry in this case. Be careful that InvokeObjectPostAlterHook() still\n> >> needs to be checked in this case. Perhaps some tests should be added\n> >> in test_oat_hooks.sql? It would be tempted to add a new SQL file for\n> >> that.\n> >\n> > Are you suggesting to put this in a conditional: if oldrelam!=newAccessMethod ?\n> \n> Yes, that's what I would add with a few lines close to the beginning\n> of ATExecSetTableSpaceNoStorage() to save from catalog updates if\n> these are not needed.\n\nI understand that it's possible for it to be conditional, but I don't\nundertand why it's desirable/important ?\n\nIt still seems preferable to be unconditional.\n\nOn Wed, May 31, 2023 at 06:35:34PM -0500, Justin Pryzby wrote:\n>> Why is that desirable ? I'd prefer to see it written with fewer\n>> conditionals, not more; then, it hits the same path every time.\n\nIt's not conditional in the tablespace code that this follows, nor\nothers like ATExecSetStatistics(), ATExecForceNoForceRowSecurity(),\nATExecSetCompression(), etc, some of which are recently-added. If it's\nimportant to do here, isn't it also important to do everywhere else ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 16 Jul 2023 21:37:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Sun, Jul 16, 2023 at 09:37:28PM -0500, Justin Pryzby wrote:\n> I understand that it's possible for it to be conditional, but I don't\n> undertand why it's desirable/important ?\n\nBecause it's cheaper on repeated commands, like no CCI necessary.\n\n> It's not conditional in the tablespace code that this follows, nor\n> others like ATExecSetStatistics(), ATExecForceNoForceRowSecurity(),\n> ATExecSetCompression(), etc, some of which are recently-added. If it's\n> important to do here, isn't it also important to do everywhere else ?\n\nGood point here. I am OK to discard this point.\n--\nMichael",
"msg_date": "Wed, 19 Jul 2023 11:49:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Jun 01, 2023 at 08:50:50AM -0400, Michael Paquier wrote:\n> On Wed, May 31, 2023 at 06:35:34PM -0500, Justin Pryzby wrote:\n> > What do you think the comment ought to say ? It already says:\n> >\n> > src/backend/catalog/heap.c- * Make a dependency link to force the relation to be deleted if its\n> > src/backend/catalog/heap.c- * access method is.\n> \n> This is the third location where we rely on the fact that\n> RELKIND_HAS_TABLE_AM() does not include RELKIND_PARTITIONED_TABLE, so\n> it seems worth documenting what we are relying on as a comment? Say:\n> * Make a dependency link to force the relation to be deleted if its\n> * access method is.\n> *\n> * No need to add an explicit dependency for the toast table, as the\n> * main table depends on it. Partitioned tables have a table access\n> * method defined, and RELKIND_HAS_TABLE_AM ignores them.\n\nYou said that this location \"relies on\" the macro not including\npartitioned tables, but I would say the opposite: the places that use\nRELKIND_HAS_TABLE_AM() and do *not* say \"or relkind==PARTITIONED_TABLE\"\nare the ones that \"rely on\" that...\n\nAnyway, this updates various comments. No other changes.\n\n-- \nJustin",
"msg_date": "Wed, 19 Jul 2023 13:13:48 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\" [1], but there has\nbeen no activity on this thread for 6+ months.\n\nIs anything else planned, or can you post something to elicit more\ninterest in reviews for the latest patch? Otherwise, if nothing\nhappens then the CF entry will be closed (\"Returned with feedback\") at\nthe end of this CF.\n\n======\n[1] https://commitfest.postgresql.org/46/3727/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:25:13 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "> @@ -947,20 +947,22 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,\n> \t * a type of relation that needs one, use the default.\n> \t */\n> \tif (stmt->accessMethod != NULL)\n> +\t\taccessMethodId = get_table_am_oid(stmt->accessMethod, false);\n> +\telse if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n> \t{\n> -\t\taccessMethod = stmt->accessMethod;\n> -\n> -\t\tif (partitioned)\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> -\t\t\t\t\t errmsg(\"specifying a table access method is not supported on a partitioned table\")));\n> +\t\tif (stmt->partbound)\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * For partitions, if no access method is specified, use the AM of\n> +\t\t\t * the parent table.\n> +\t\t\t */\n> +\t\t\tAssert(list_length(inheritOids) == 1);\n> +\t\t\taccessMethodId = get_rel_relam(linitial_oid(inheritOids));\n> +\t\t\tAssert(OidIsValid(accessMethodId));\n> +\t\t}\n> +\t\telse\n> +\t\t\taccessMethodId = get_table_am_oid(default_table_access_method, false);\n> \t}\n\nI think this works similarly but not identically to tablespace defaults,\nand the difference could be confusing. You seem to have made it so that\nthe partitioned table _always_ have a table AM, so the partitions can\nalways inherit from it. I think it would be more sensible to _allow_\npartitioned tables to have one, but not mandatory; if they don't have\nit, then a partition created from it would use default_table_access_method.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Thu, 1 Feb 2024 16:50:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Feb 01, 2024 at 04:50:49PM +0100, Alvaro Herrera wrote:\n> I think this works similarly but not identically to tablespace defaults,\n> and the difference could be confusing. You seem to have made it so that\n> the partitioned table _always_ have a table AM, so the partitions can\n> always inherit from it. I think it would be more sensible to _allow_\n> partitioned tables to have one, but not mandatory; if they don't have\n> it, then a partition created from it would use default_table_access_method.\n\nYou mean to allow a value of 0 in pg_class.relam on a partitioned\ntable to allow any partitions created on it to use the default AM in\nthe GUC when the partition is created? Yes, this inconsistency was\nbothering me as well in the patch.\n--\nMichael",
"msg_date": "Fri, 2 Feb 2024 06:46:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
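The optional-AM behavior discussed in the last few messages, modeled on how tablespaces work, could look like this. This is illustrative only; whether pg_class.relam may legitimately be 0 on a partitioned table is precisely the open question here:

```sql
CREATE ACCESS METHOD heap2 TYPE TABLE HANDLER heap_tableam_handler;

-- Partitioned table with no explicit AM: relam would be 0, and each
-- partition would pick up default_table_access_method at creation time.
CREATE TABLE p (a int) PARTITION BY LIST (a);

SET default_table_access_method = heap;
CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);   -- would use heap

-- An explicit AM on the parent would then pin the choice for
-- partitions created afterwards:
ALTER TABLE p SET ACCESS METHOD heap2;
CREATE TABLE p2 PARTITION OF p FOR VALUES IN (2);   -- would use heap2
```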
{
"msg_contents": "On Thu, Feb 1, 2024 at 10:51 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I think this works similarly but not identically to tablespace defaults,\n> and the difference could be confusing. You seem to have made it so that\n> the partitioned table _always_ have a table AM, so the partitions can\n> always inherit from it. I think it would be more sensible to _allow_\n> partitioned tables to have one, but not mandatory; if they don't have\n> it, then a partition created from it would use default_table_access_method.\n\nI agree that we don't want this feature to invent any new behavior. If\nit's clearly and fully parallel to what we do for tablespaces, then I\nthink it's probably OK, but anything less than that would be a cause\nfor concern for me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 2 Feb 2024 12:04:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 19.07.23 20:13, Justin Pryzby wrote:\n> On Thu, Jun 01, 2023 at 08:50:50AM -0400, Michael Paquier wrote:\n>> On Wed, May 31, 2023 at 06:35:34PM -0500, Justin Pryzby wrote:\n>>> What do you think the comment ought to say ? It already says:\n>>>\n>>> src/backend/catalog/heap.c- * Make a dependency link to force the relation to be deleted if its\n>>> src/backend/catalog/heap.c- * access method is.\n>>\n>> This is the third location where we rely on the fact that\n>> RELKIND_HAS_TABLE_AM() does not include RELKIND_PARTITIONED_TABLE, so\n>> it seems worth documenting what we are relying on as a comment? Say:\n>> * Make a dependency link to force the relation to be deleted if its\n>> * access method is.\n>> *\n>> * No need to add an explicit dependency for the toast table, as the\n>> * main table depends on it. Partitioned tables have a table access\n>> * method defined, and RELKIND_HAS_TABLE_AM ignores them.\n> \n> You said that this location \"relies on\" the macro not including\n> partitioned tables, but I would say the opposite: the places that use\n> RELKIND_HAS_TABLE_AM() and do *not* say \"or relkind==PARTITIONED_TABLE\"\n> are the ones that \"rely on\" that...\n> \n> Anyway, this updates various comments. No other changes.\n\nIt would be helpful if this patch could more extensively document in its \ncommit message what semantic changes it makes. Various options of \npossible behaviors were discussed in this thread, but it's not clear \nwhich behaviors were chosen in this particular patch version.\n\nThe general idea is that you can set an access method on a partitioned \ntable. That much seems very agreeable. But then what happens with this \nsetting, how can you override it, how can you change it, what happens \nwhen you change it, what happens with existing partitions and new \npartitions, etc. -- and which of these behaviors are new and old. Many \nthings to specify.\n\n\n\n",
"msg_date": "Tue, 20 Feb 2024 15:47:46 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 03:47:46PM +0100, Peter Eisentraut wrote:\n> It would be helpful if this patch could more extensively document in its\n> commit message what semantic changes it makes. Various options of possible\n> behaviors were discussed in this thread, but it's not clear which behaviors\n> were chosen in this particular patch version.\n> \n> The general idea is that you can set an access method on a partitioned\n> table. That much seems very agreeable. But then what happens with this\n> setting, how can you override it, how can you change it, what happens when\n> you change it, what happens with existing partitions and new partitions,\n> etc. -- and which of these behaviors are new and old. Many things to\n> specify.\n\nThe main point in this patch is the following code block in\nDefineRelation(), that defines the semantics about the AM set for a\npartitioned table:\n+ else if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n {\n+ if (stmt->partbound)\n+ {\n+ /*\n+ * For partitions, if no access method is specified, use the AM of\n+ * the parent table.\n+ */\n+ Assert(list_length(inheritOids) == 1);\n+ accessMethodId = get_rel_relam(linitial_oid(inheritOids));\n+ Assert(OidIsValid(accessMethodId));\n+ }\n+ else\n+ accessMethodId = get_table_am_oid(default_table_access_method, false);\n }\n\nThis means that all partitioned tables would have pg_class.relam set,\nand that relam would never be 0:\n- The USING clause takes priority over default_table_access_method.\n- If no USING clause, default_table_access_method is the AM used\n\nAny partitions created from this partitioned table would inherit the\nAM set, ignoring default_table_access_method. 
\n\nAlvaro has made a very good point a couple of days ago at [1] where we\nshould try to make the behavior stick closer to tablespaces, where it\ncould be possible to set relam to 0 for a partitioned table, where a\npartition would inherit the AM set in the GUC when a USING clause is\nnot defined (if USING specifies the AM, we'd just use it).\n\nExisting partitions should not be changed if the AM of their\npartitioned table changes, so you can think of the AM as a hint for\nthe creation of new partitions.\n\n[1]: https://www.postgresql.org/message-id/202402011550.sfszd46247zi@alvherre.pgsql\n--\nMichael",
"msg_date": "Wed, 21 Feb 2024 15:40:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 21.02.24 07:40, Michael Paquier wrote:\n> This means that all partitioned tables would have pg_class.relam set,\n> and that relam would never be 0:\n> - The USING clause takes priority over default_table_access_method.\n> - If no USING clause, default_table_access_method is the AM used\n> \n> Any partitions created from this partitioned table would inherit the\n> AM set, ignoring default_table_access_method.\n> \n> Alvaro has made a very good point a couple of days ago at [1] where we\n> should try to make the behavior stick closer to tablespaces, where it\n> could be possible to set relam to 0 for a partitioned table, where a\n> partition would inherit the AM set in the GUC when a USING clause is\n> not defined (if USING specifies the AM, we'd just use it).\n\nYes, I think most people agreed that that would be the preferred behavior.\n\n\n\n",
"msg_date": "Wed, 21 Feb 2024 08:46:48 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 08:46:48AM +0100, Peter Eisentraut wrote:\n> Yes, I think most people agreed that that would be the preferred behavior.\n\nChallenge accepted. As of the patch attached.\n\nTablespaces rely MyDatabaseTableSpace to fallback to the database's\ndefault if not specified, but we cannot do that for table AMs as there\nis no equivalent to dattablespace.\n\nI have implemented that so as we keep the default, historical\nbehavior: if pg_class.relam is 0 for a partitioned table, use the AM\ndefined by default_table_access_method. The patch only adds a path to\nswitch to a different AM than the GUC when creating a new partition if\nand only if a partitioned table has been manipulated with ALTER TABLE\nSET ACCESS METHOD to update its AM to something else than the GUC.\nSimilarly to tablespaces, CREATE TABLE USING is *not* supported for\npartitioned tables, same behavior as previously.\n\nThere is a bit more regarding the handling of the entries in\npg_depend, but nothing really complicated, knowing that there can be\nthree possible patterns:\n- Add a new dependency if changing the AM to be something different\nthan the GUC.\n- Remove the dependency if changing the AM to the value of the GUC,\nwhen something existing previously.\n- Update the dependency if switching between AMs that don't refer to\nthe GUC at all.\n\nIf the AM of a partitioned table is not changed, there is no need to\nupdate the catalogs at all. The prep phase of the sub-command is\nalready aware of that, setting the new AM OID to InvalidOid in this\ncase.\n\nThe attached includes regression tests that check all the dependency\nentries, the contents of pg_class for partitioned tables, as well as\nthe creation of partitions when pg_class.relam is not 0. I'd welcome\nmore eyes regarding these changes. pg_dump needs to be tweaked to\nsave the AM information of a partitioned table, like the previous\nversions. 
There are tests for these dump patterns, that needed a\nslight tweak to work. Docs have been refreshed.\n\nThoughts, comments?\n--\nMichael",
"msg_date": "Wed, 28 Feb 2024 17:08:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, Feb 28, 2024 at 05:08:49PM +0900, Michael Paquier wrote:\n> On Wed, Feb 21, 2024 at 08:46:48AM +0100, Peter Eisentraut wrote:\n> > Yes, I think most people agreed that that would be the preferred behavior.\n> \n> Challenge accepted. As of the patch attached.\n\nThanks for picking it up. I find it pretty hard to switch back to\nput the needed effort into a patch after a long period.\n\n> I have implemented that so as we keep the default, historical\n> behavior: if pg_class.relam is 0 for a partitioned table, use the AM\n> defined by default_table_access_method. The patch only adds a path to\n> switch to a different AM than the GUC when creating a new partition if\n> and only if a partitioned table has been manipulated with ALTER TABLE\n> SET ACCESS METHOD to update its AM to something else than the GUC.\n> Similarly to tablespaces, CREATE TABLE USING is *not* supported for\n> partitioned tables, same behavior as previously.\n\nThis patch allows resetting relam=0 by running ALTER TABLE SET AM to the\nsame value as the GUC. Maybe it'd be better to have an explicit SET\nDEFAULT (as in b9424d01 and 4f622503).\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 29 Feb 2024 08:51:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Feb 29, 2024 at 08:51:31AM -0600, Justin Pryzby wrote:\n> On Wed, Feb 28, 2024 at 05:08:49PM +0900, Michael Paquier wrote:\n>> I have implemented that so as we keep the default, historical\n>> behavior: if pg_class.relam is 0 for a partitioned table, use the AM\n>> defined by default_table_access_method. The patch only adds a path to\n>> switch to a different AM than the GUC when creating a new partition if\n>> and only if a partitioned table has been manipulated with ALTER TABLE\n>> SET ACCESS METHOD to update its AM to something else than the GUC.\n>> Similarly to tablespaces, CREATE TABLE USING is *not* supported for\n>> partitioned tables, same behavior as previously.\n> \n> This patch allows resetting relam=0 by running ALTER TABLE SET AM to the\n> same value as the GUC. Maybe it'd be better to have an explicit SET\n> DEFAULT (as in b9424d01 and 4f622503).\n\nOutside the scope of this patch's thread, this looks like a good idea\neven for tables/matviews. And the semantics are pretty easy: if DEFAULT\nis specified, just set the access method to NULL in the parser and let\ntablecmds.c go the AM OID lookup in the prep phase if set to NULL.\nSee 0001 attached. This one looks pretty good taken as an independent\npiece.\n\nWhen it comes to partitioned tables, there is a still a tricky case:\nwhat should we do when a user specifies a non-default value in the SET\nACCESS METHOD clause and it matches default_table_access_method?\nShould the relam be 0 or should we force relam to be the OID of the\ngiven value given by the query? Implementation-wise, forcing the\nvalue to 0 is simpler, but I can get why it could be confusing as\nwell, because the state of the catalogs does not reflect what was\nprovided in the query. At the same time, the user has explicitly set\nthe access method to be the same as the default, so perhaps 0 makes\nsense anyway in this case.\n\n0002 does that, as that's simpler. 
I'm not sure if there is a case\nfor forcing a value in relam if the query has the same value as the\ndefault. Thoughts?\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 10:56:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 02:57, Michael Paquier <michael@paquier.xyz> wrote:\n> When it comes to partitioned tables, there is a still a tricky case:\n> what should we do when a user specifies a non-default value in the SET\n> ACCESS METHOD clause and it matches default_table_access_method?\n> Should the relam be 0 or should we force relam to be the OID of the\n> given value given by the query? Implementation-wise, forcing the\n> value to 0 is simpler, but I can get why it could be confusing as\n> well, because the state of the catalogs does not reflect what was\n> provided in the query. At the same time, the user has explicitly set\n> the access method to be the same as the default, so perhaps 0 makes\n> sense anyway in this case.\n\nI think we should set the AM OID explicitly. Because an important\nthing to consider is: What behaviour makes sense when later\ndefault_table_access_method is changed?\nI think if someone sets it explicitly on the partitioned table, they\nwould want the AM of the partitioned table to stay the same when\ndefault_table_access_method is changed. Which requires storing the AM\nOID afaict.\n\n\n",
"msg_date": "Fri, 1 Mar 2024 05:43:25 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 05:43:25AM +0100, Jelte Fennema-Nio wrote:\n> I think we should set the AM OID explicitly. Because an important\n> thing to consider is: What behaviour makes sense when later\n> default_table_access_method is changed?\n\nPer the latest discussion of the thread, we've kind of reached a\nconsensus that we should keep the current historical bevahior on\ndefault, where relam remains at 0, causing new partitions to grab the\nGUC as AM. If we create a partitioned table attached to a partitioned\ntable, it should be 0 as well. If the partitioned table has a non-0\nrelam, a new partitioned table created on it will inherit the same\nnon-0 value. \n\n> I think if someone sets it explicitly on the partitioned table, they\n> would want the AM of the partitioned table to stay the same when\n> default_table_access_method is changed. Which requires storing the AM\n> OID afaict.\n\nIf we allow relam to be non-0 for a partitioned table, it is equally\nimportant to give users a way to reset it at will. My point was a bit\nmore subtle than that. For example, this sequence is clear to me:\nSET default_table_access_method = 'foo';\nALTER TABLE part SET ACCESS METHOD DEFAULT;\n\nThe user wants to rely on the GUC, so relam should be 0, new\npartitions created on it will use the GUC.\n\nNow, what should this sequence mean? See:\nSET default_table_access_method = 'foo';\nALTER TABLE part SET ACCESS METHOD foo;\n\nShould the relam be 0 because the user requested a match with the GUC,\nor use the OID of the AM? There has to be some difference with\ntablespaces, because relations with physical storage (tables,\nmatviews) can use a reltablespace of 0, but AMs have to be set for\ntables and matviews.\n\nFun topic, especially once coupled with the internals of tablecmds.c\nthat uses InvalidOid for the new access AM as a special value to work\nas a no-op.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 14:03:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 05:43:25AM +0100, Jelte Fennema-Nio wrote:\n> I think we should set the AM OID explicitly. Because an important\n> thing to consider is: What behaviour makes sense when later\n> default_table_access_method is changed?\n> I think if someone sets it explicitly on the partitioned table, they\n> would want the AM of the partitioned table to stay the same when\n> default_table_access_method is changed. Which requires storing the AM\n> OID afaict.\n\nOops, I think I misread that. You just mean to always set relam when\nusing an AM in the SET ACCESS METHOD clause. Apologies for the noise.\n--\nMichael",
"msg_date": "Fri, 1 Mar 2024 14:14:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, 1 Mar 2024 at 06:15, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 01, 2024 at 05:43:25AM +0100, Jelte Fennema-Nio wrote:\n> > I think we should set the AM OID explicitly. Because an important\n> > thing to consider is: What behaviour makes sense when later\n> > default_table_access_method is changed?\n> > I think if someone sets it explicitly on the partitioned table, they\n> > would want the AM of the partitioned table to stay the same when\n> > default_table_access_method is changed. Which requires storing the AM\n> > OID afaict.\n>\n> Oops, I think I misread that. You just mean to always set relam when\n> using an AM in the SET ACCESS METHOD clause. Apologies for the noise.\n\nCorrect, I intended to say that \"SET ACCESS METHOD heap\" on a\npartitioned table should store heap its OID. Because while storing 0\nmight be simpler, it will result in (imho) wrong behaviour when later\nthe default_table_access_method is changed. behavior won't result in\nthe (imho) intended. i.e. it's not simply a small detail in what the\ncatolog looks like, but there's an actual behavioural change.\n\n\n",
"msg_date": "Fri, 1 Mar 2024 06:25:27 +0100",
"msg_from": "Jelte Fennema-Nio <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 10:56:50AM +0900, Michael Paquier wrote:\n> On Thu, Feb 29, 2024 at 08:51:31AM -0600, Justin Pryzby wrote:\n> > On Wed, Feb 28, 2024 at 05:08:49PM +0900, Michael Paquier wrote:\n> >> I have implemented that so as we keep the default, historical\n> >> behavior: if pg_class.relam is 0 for a partitioned table, use the AM\n> >> defined by default_table_access_method. The patch only adds a path to\n> >> switch to a different AM than the GUC when creating a new partition if\n> >> and only if a partitioned table has been manipulated with ALTER TABLE\n> >> SET ACCESS METHOD to update its AM to something else than the GUC.\n> >> Similarly to tablespaces, CREATE TABLE USING is *not* supported for\n> >> partitioned tables, same behavior as previously.\n> > \n> > This patch allows resetting relam=0 by running ALTER TABLE SET AM to the\n> > same value as the GUC. Maybe it'd be better to have an explicit SET\n> > DEFAULT (as in b9424d01 and 4f622503).\n> \n> Outside the scope of this patch's thread, this looks like a good idea\n> even for tables/matviews. And the semantics are pretty easy: if DEFAULT\n> is specified, just set the access method to NULL in the parser and let\n> tablecmds.c go the AM OID lookup in the prep phase if set to NULL.\n> See 0001 attached. 
This one looks pretty good taken as an independent\n> piece.\n> \n> When it comes to partitioned tables, there is still a tricky case:\n> what should we do when a user specifies a non-default value in the SET\n> ACCESS METHOD clause and it matches default_table_access_method?\n\nI don't think it's tricky - it seems more like a weird hack in the\nprevious patch version to make AMs behave like tablespaces, despite not\nbeing completely parallel, due to the absence of a pg_default AM.\n\nWith the new 001, the hack can go away, and so it should.\n\n> Should the relam be 0 or should we force relam to be the OID of the\n> given value given by the query?\n\nYou said \"force\" it to be the user-specified value, but I think that's\nnot \"forcing\", it's respecting (but to take the user's desired value,\nand conditionally store 0 instead, that could be described as\n\"forcing.\")\n\n> Implementation-wise, forcing the value to 0 is simpler, but I can get\n> why it could be confusing as well, because the state of the catalogs\n> does not reflect what was provided in the query.\n\n> At the same time, the user has explicitly set the access method to be\n> the same as the default, so perhaps 0 makes sense anyway in this case.\n\nI think if the user sets something \"explicitly\", the catalog should\nreflect what they set. Tablespaces have dattablespace, but AMs don't --\nit's a simpler case.\n\nFor 001: we don't *need* to support \"ALTER SET AM default\" for leaf\ntables. It doesn't do anything that's not already possible. But, if\nAMs for partitioned tables are optional rather than required, then it\nseems to be needed to allow (re)setting relam=0.\n\nBut for partitioned tables, I think it should set relam=0 directly.\nCurrently it 1) falls through to default_table_am; and 2) detects that\nit's the default, so then sets relam to 0. 
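To illustrate the distinction (sketch; leaf_tbl and parted are assumed example tables):

```sql
-- Sketch only; leaf_tbl and parted are assumed example tables.
SET default_table_access_method = 'heap';

-- On a leaf table, DEFAULT is just sugar for naming the GUC's AM:
ALTER TABLE leaf_tbl SET ACCESS METHOD DEFAULT;

-- On a partitioned table, DEFAULT should store relam = 0 directly,
-- meaning: follow the GUC for partitions created later.
ALTER TABLE parted SET ACCESS METHOD DEFAULT;
```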
On Fri, Mar 01, 2024 at 02:03:48PM +0900, Michael Paquier wrote:\n> Fun topic, especially once coupled with the internals of tablecmds.c\n> that uses InvalidOid for the new access AM as a special value to work\n> as a no-op.\n\nSince InvalidOid is already taken, I guess you might need to introduce a\nboolean flag, like set_relam, indicating that the statement has an\nACCESS METHOD clause.\n\n> + * method defined so as their children can inherit it; however, this is handled\n\nso that\n\n> +\t\t * Do nothing: access methods is a setting that partitions can\n\nmethod (singular), or s/is/are/\n\nOn Wed, Feb 28, 2024 at 05:08:49PM +0900, Michael Paquier wrote:\n> Similarly to tablespaces, CREATE TABLE USING is *not* supported for\n> partitioned tables, same behavior as previously.\n\nMaybe I misunderstood what you're trying to say, but CREATE..TABLESPACE\n*is* supported for partitioned tables. I'm not sure why it wouldn't be\nsupported to set the AM, too.\n\nIn any case, it'd be a bit confusing for the error message to still say:\n\npostgres=# CREATE TABLE a(i int) PARTITION BY RANGE(a) USING heap2;\nERROR: specifying a table access method is not supported on a partitioned table\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 1 Mar 2024 15:03:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, Mar 01, 2024 at 03:03:14PM -0600, Justin Pryzby wrote:\n> I think if the user sets something \"explicitly\", the catalog should\n> reflect what they set. Tablespaces have dattablespace, but AMs don't --\n> it's a simpler case.\n\nOkay.\n\n> For 001: we don't *need* to support \"ALTER SET AM default\" for leaf\n> tables. It doesn't do anything that's not already possible. But, if\n> AMs for partitioned tables are optional rather than required, then seems\n> to be needed to allow (re)settinng relam=0.\n\nIndeed, for non-partitioned tables DEFAULT is a sugar flavor. Not\nmandatory, still it's nice to have to not have to type an AM.\n\n> But for partitioned tables, I think it should set relam=0 directly.\n> Currently it 1) falls through to default_table_am; and 2) detects that\n> it's the default, so then sets relam to 0.\n>\n> Since InvalidOid is already taken, I guess you might need to introduce a\n> boolean flag, like set_relam, indicating that the statement has an\n> ACCESS METHOD clause.\n\nYes, I don't see an alternative. The default needs a different field\nto be tracked down to the execution.\n\n>> + * method defined so as their children can inherit it; however, this is handled\n> \n> so that\n> \n>> +\t\t * Do nothing: access methods is a setting that partitions can\n> \n> method (singular), or s/is/are/\n\nIndeed. Fixed both.\n\n> In any case, it'd be a bit confusing for the error message to still say:\n> \n> postgres=# CREATE TABLE a(i int) PARTITION BY RANGE(a) USING heap2;\n> ERROR: specifying a table access method is not supported on a partitioned table\n\nI was looking at this one as well and I don't see why we could not\nremove it, so you are right (missed the tablespace part last week). A\npartitioned table created as a partition of a partitioned table would\ninherit the relam of its parent (0 if default is set, or non-0 is\nsomething is set). I have added some regression tests for that.\n\nAnd I'm finishing with the attached. 
To summarize SET ACCESS METHOD\non a partitioned table, the semantics are:\n- DEFAULT sets the relam to 0, any partitions with storage would use\nthe GUC at creation time. Partitioned tables use a relam of 0.\n- If a value is set for the am, relam becomes non-0. Any partitions\ncreated on it inherit it (partitioned as well as non-partitioned\ntables).\n- No USING clause means to set its relam to 0.\n\n0001 seems OK here, 0002 needs more eyes. The bulk of the changes is\nin the regression tests to cover all the cases I could think of.\n--\nMichael",
"msg_date": "Mon, 4 Mar 2024 17:46:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Mar 04, 2024 at 05:46:56PM +0900, Michael Paquier wrote:\n> > Since InvalidOid is already taken, I guess you might need to introduce a\n> > boolean flag, like set_relam, indicating that the statement has an\n> > ACCESS METHOD clause.\n> \n> Yes, I don't see an alternative. The default needs a different field\n> to be tracked down to the execution.\n\nThe data structure you used (defaultAccessMethod) allows this, which\nis intended to be prohibited:\n\npostgres=# ALTER TABLE a SET access method default, SET access method default;\nALTER TABLE\n\nAs you wrote it, you pass the \"defaultAccessMethod\" bool to\nATExecSetAccessMethodNoStorage(), which seems odd. Why not just pass\nthe target amoid as newAccessMethod ?\n\nWhen I fooled with this on my side, I called it \"chgAccessMethod\" to\nfollow \"chgPersistence\". I think \"is default\" isn't the right data\nstructure.\n\nAttached a relative patch with my version.\n\nAlso: I just realized that instead of adding a bool, we could test\n(tab->rewrite & AT_REWRITE_ACCESS_METHOD) != 0\n\n+-- Default and AM set in in clause are the same, relam should be set.\n\nin in?\n\n-- \nJustin",
"msg_date": "Thu, 7 Mar 2024 20:02:00 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Mar 07, 2024 at 08:02:00PM -0600, Justin Pryzby wrote:\n> As you wrote it, you pass the \"defaultAccessMethod\" bool to\n> ATExecSetAccessMethodNoStorage(), which seems odd. Why not just pass\n> the target amoid as newAccessMethod ?\n\n+\t/*\n+\t * Check that the table access method exists.\n+\t * Use the access method, if specified, otherwise (when not specified) 0\n+\t * for partitioned tables or the configured default AM.\n+\t */\n+\tif (amname != NULL)\n+\t\tamoid = get_table_am_oid(amname, false);\n+\telse if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n+\t\tamoid = 0;\n+\telse\n+\t\tamoid = get_table_am_oid(default_table_access_method, false);\n\n.. While using this flow to choose the AM oid, that's neater than the\nversions I could come up with, pretty cool.\n\n> When I fooled with this on my side, I called it \"chgAccessMethod\" to\n> follow \"chgPersistence\". I think \"is default\" isn't the right data\n> structure.\n> \n> Attached a relative patch with my version.\n\nThanks. I've applied the patch to add the DEFAULT clause for now, to\nease the work on this patch.\n\n> Also: I just realized that instead of adding a bool, we could test\n> (tab->rewrite & AT_REWRITE_ACCESS_METHOD) != 0\n\nHmm. I've considered that, but the boolean style is able to do the\nwork, while being more consistent, so I'm OK with what you are doing\nin your 0002.\n\n> +-- Default and AM set in in clause are the same, relam should be set.\n> \n> in in?\n\nOops, fixed.\n\nI have spent more time reviewing the whole and the tests (I didn't see\nmuch value in testing the DEFAULT clause twice for the partitioned\ntable case and there is a test in d61a6cad6418), tweaked a few\ncomments and the documentation, did an indentation and a commit\nmessage draft.\n\nHow does that look to you? The test coverage and the semantics do\nwhat we want them to do, so that looks rather reasonable here. A\nsecond or even third pair of eyes would not hurt.\n--\nMichael",
"msg_date": "Fri, 8 Mar 2024 13:32:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 2024-Mar-08, Michael Paquier wrote:\n\n> I have spent more time reviewing the whole and the tests (I didn't see\n> much value in testing the DEFAULT clause twice for the partitioned\n> table case and there is a test in d61a6cad6418), tweaked a few\n> comments and the documentation, did an indentation and a commit\n> message draft.\n> \n> How does that look to you? The test coverage and the semantics do\n> what we want them to do, so that looks rather reasonable here. A\n> second or even third pair of eyes would not hurt.\n\nI gave this a look. I found some of the comments a bit confusing or\noverly long, so I propose to reword them. I also propose a small doc\nchange (during writing which I noticed that the docs for tablespace had\nbeen neglected and one comment too many; patch to be committed\nseparately soon). I ended up also moving code in tablecmds.c so that\nall the AT*SetAccessMethod* routines appear together rather than mixed\nwith the ones for tablespaces, and removing one CCI that seems\nunnecessary, at the bottom of ATExecSetAccessMethodNoStorage.\n\n0001 is Michaël's patch, 0002 are my proposed changes.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)",
"msg_date": "Tue, 19 Mar 2024 11:13:21 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 2024-Mar-19, Alvaro Herrera wrote:\n\n> 0001 is Michaël's patch, 0002 are my proposed changes.\n\nDoh, I sent the wrong set of attachments. But I see no reason to post\nagain: what I attached as 0001 is what I wrote was going to be 0002,\nMichaël's patch is already in archives, and the CI tests with both\napplied on current master are running here:\nhttps://cirrus-ci.com/build/6404370015715328\n\nMichaël, I'll leave this for you to push ...\n\nThanks!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 19 Mar 2024 11:20:28 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Given that Michaël is temporarily gone, I propose to push the attached\ntomorrow.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"",
"msg_date": "Thu, 21 Mar 2024 13:07:01 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mar 21, 2024, at 13:07, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Given that Michaël is temporarily gone, I propose to push the attached\n> tomorrow.\n\nThanks for doing so. I’m wondering whether I should be an author of this patch at this stage, tbh. I wrote all the tests and rewrote most of the internals to adjust with the consensus reached ;)\n--\nMichael\n\n\n",
"msg_date": "Thu, 21 Mar 2024 17:12:40 +0100",
"msg_from": "Michael P <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Hello Alvaro,\n\n21.03.2024 15:07, Alvaro Herrera wrote:\n> Given that Michaël is temporarily gone, I propose to push the attached\n> tomorrow.\n\nPlease look at a new anomaly introduced with 374c7a229.\nStarting from that commit, the following erroneous query:\nCREATE FOREIGN TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\n\ntriggers an assertion failure:\nTRAP: failed Assert(\"relation->rd_rel->relam == InvalidOid\"), File: \"relcache.c\", Line: 1219, PID: 3706301\n\nwith the stack trace:\n...\n#4 0x00007fe53ced67f3 in __GI_abort () at ./stdlib/abort.c:79\n#5 0x000055f28555951e in ExceptionalCondition (conditionName=conditionName@entry=0x55f285744788 \n\"relation->rd_rel->relam == InvalidOid\", fileName=fileName@entry=0x55f285743f1c \"relcache.c\", \nlineNumber=lineNumber@entry=1219)\n at assert.c:66\n#6 0x000055f285550450 in RelationBuildDesc (targetRelId=targetRelId@entry=16385, insertIt=insertIt@entry=false) at \nrelcache.c:1219\n#7 0x000055f285550769 in RelationClearRelation (relation=relation@entry=0x7fe5310dd178, rebuild=rebuild@entry=true) at \nrelcache.c:2667\n#8 0x000055f285550c41 in RelationFlushRelation (relation=0x7fe5310dd178) at relcache.c:2850\n#9 0x000055f285550ca0 in RelationCacheInvalidateEntry (relationId=<optimized out>) at relcache.c:2921\n#10 0x000055f285542551 in LocalExecuteInvalidationMessage (msg=0x55f2861b3160) at inval.c:738\n#11 0x000055f28554159b in ProcessInvalidationMessages (group=0x55f2861b2e6c, func=func@entry=0x55f2855424a8 \n<LocalExecuteInvalidationMessage>) at inval.c:518\n#12 0x000055f285542740 in CommandEndInvalidationMessages () at inval.c:1180\n#13 0x000055f28509cbbd in AtCCI_LocalCache () at xact.c:1550\n#14 0x000055f28509e88e in CommandCounterIncrement () at xact.c:1116\n#15 0x000055f2851d0c8b in DefineRelation (stmt=stmt@entry=0x55f2861803b0, relkind=relkind@entry=102 'f', ownerId=10, \nownerId@entry=0, typaddress=typaddress@entry=0x0,\n queryString=queryString@entry=0x55f28617f870 \"CREATE FOREIGN 
TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\") at \ntablecmds.c:1008\n#16 0x000055f28540945d in ProcessUtilitySlow (pstate=pstate@entry=0x55f2861a9dc0, pstmt=pstmt@entry=0x55f286180510, \nqueryString=queryString@entry=0x55f28617f870 \"CREATE FOREIGN TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\",\n context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, \ndest=0x55f2861807d0, qc=0x7fff15b5d7c0) at utility.c:1203\n#17 0x000055f28540911f in standard_ProcessUtility (pstmt=0x55f286180510, queryString=0x55f28617f870 \"CREATE FOREIGN \nTABLE fp PARTITION OF pg_am DEFAULT SERVER x;\", readOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL,\n params=0x0, queryEnv=0x0, dest=0x55f2861807d0, qc=0x7fff15b5d7c0) at utility.c:1067\n...\n\nOn 374c7a229~1 it fails with\nERROR: \"pg_am\" is not partitioned\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 26 Mar 2024 11:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 2024-Mar-26, Alexander Lakhin wrote:\n\n> Hello Alvaro,\n> \n> 21.03.2024 15:07, Alvaro Herrera wrote:\n> > Given that Michaël is temporarily gone, I propose to push the attached\n> > tomorrow.\n> \n> Please look at a new anomaly introduced with 374c7a229.\n> Starting from that commit, the following erroneous query:\n> CREATE FOREIGN TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\n> \n> triggers an assertion failure:\n> TRAP: failed Assert(\"relation->rd_rel->relam == InvalidOid\"), File: \"relcache.c\", Line: 1219, PID: 3706301\n\nHmm, yeah, we're setting relam for relations that shouldn't have it.\nI propose the attached.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 26 Mar 2024 12:05:47 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 01:07:01PM +0100, Alvaro Herrera wrote:\n> Given that Michaël is temporarily gone, I propose to push the attached\n> tomorrow.\n\nThanks.\n\nOn Tue, Mar 26, 2024 at 12:05:47PM +0100, Alvaro Herrera wrote:\n> On 2024-Mar-26, Alexander Lakhin wrote:\n> \n> > Hello Alvaro,\n> > \n> > 21.03.2024 15:07, Alvaro Herrera wrote:\n> > > Given that Michaël is temporarily gone, I propose to push the attached\n> > > tomorrow.\n> > \n> > Please look at a new anomaly introduced with 374c7a229.\n> > Starting from that commit, the following erroneous query:\n> > CREATE FOREIGN TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\n> > \n> > triggers an assertion failure:\n> > TRAP: failed Assert(\"relation->rd_rel->relam == InvalidOid\"), File: \"relcache.c\", Line: 1219, PID: 3706301\n> \n> Hmm, yeah, we're setting relam for relations that shouldn't have it.\n> I propose the attached.\n\nLooks right. That's how I originally wrote it, except for the\n\"stmt->accessMethod != NULL\" case.\n\nI preferred my way - the grammar should refuse to set stmt->accessMethod\nfor inappropriate relkinds. 
And you could assert that.\n\nI also preferred to set \"accessMethodId = InvalidOid\" once, rather than\ntwice.\n\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 8a02c5b05b6..050be89728f 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -962,18 +962,21 @@ DefineRelation(CreateStmt *stmt, char relkind, Oid ownerId,\n \t * case of a partitioned table) the parent's, if it has one.\n \t */\n \tif (stmt->accessMethod != NULL)\n-\t\taccessMethodId = get_table_am_oid(stmt->accessMethod, false);\n-\telse if (stmt->partbound)\n \t{\n-\t\tAssert(list_length(inheritOids) == 1);\n-\t\taccessMethodId = get_rel_relam(linitial_oid(inheritOids));\n+\t\tAssert(RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE);\n+\t\taccessMethodId = get_table_am_oid(stmt->accessMethod, false);\n \t}\n-\telse\n-\t\taccessMethodId = InvalidOid;\n+\telse if (RELKIND_HAS_TABLE_AM(relkind) || relkind == RELKIND_PARTITIONED_TABLE)\n+\t{\n+\t\tif (stmt->partbound)\n+\t\t{\n+\t\t\tAssert(list_length(inheritOids) == 1);\n+\t\t\taccessMethodId = get_rel_relam(linitial_oid(inheritOids));\n+\t\t}\n \n-\t/* still nothing? use the default */\n-\tif (RELKIND_HAS_TABLE_AM(relkind) && !OidIsValid(accessMethodId))\n-\t\taccessMethodId = get_table_am_oid(default_table_access_method, false);\n+\t\tif (RELKIND_HAS_TABLE_AM(relkind) && !OidIsValid(accessMethodId))\n+\t\t\taccessMethodId = get_table_am_oid(default_table_access_method, false);\n+\t}\n \n \t/*\n \t * Create the relation. Inherited defaults and constraints are passed in\n\n\n",
"msg_date": "Tue, 26 Mar 2024 17:54:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
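The decision order that the hunk above encodes can be sketched outside of C. This is an illustrative model only, not PostgreSQL source (the relkind letters, stand-in OIDs, and the helper name are assumptions): an explicit USING clause wins, a partition otherwise inherits its parent's relam, storage-bearing relkinds then fall back to default_table_access_method, and anything else, such as a foreign table, keeps relam = 0.

```python
# Illustrative sketch only -- not PostgreSQL source.  It mirrors the
# decision order of the DefineRelation() hunk quoted above.
INVALID_OID = 0

def resolve_relam(relkind, using_am=None, parent_relam=None, default_am=2):
    """Pick the relam for a new relation.

    relkind uses pg_class letters: 'r' table, 'p' partitioned table,
    'f' foreign table.  OIDs here are just stand-in integers.
    """
    has_table_am = relkind == 'r'      # stands in for RELKIND_HAS_TABLE_AM
    partitioned = relkind == 'p'

    if using_am is not None:
        # The grammar is expected to reject USING for other relkinds,
        # which is what the Assert in the patch checks.
        assert has_table_am or partitioned
        return using_am

    if not (has_table_am or partitioned):
        return INVALID_OID             # e.g. foreign tables: never a relam

    am = parent_relam if parent_relam is not None else INVALID_OID
    if has_table_am and am == INVALID_OID:
        am = default_am                # default_table_access_method fallback
    return am
```

Under this model a partitioned table created without USING keeps relam = 0, while a plain-table partition of such a parent still ends up on the default AM.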
{
"msg_contents": "On 2024-Mar-26, Justin Pryzby wrote:\n\n> Looks right. That's how I originally wrote it, except for the\n> \"stmt->accessMethod != NULL\" case.\n> \n> I preferred my way - the grammar should refuse to set stmt->accessMethod\n> for inappropriate relkinds. And you could assert that.\n\nHmm, I didn't like this at first sight, because it looked convoluted and\nbaroque, but I compared both versions for a while and I ended up liking\nyours more than mine, so I adopted it.\n\n> I also preferred to set \"accessMethodId = InvalidOid\" once, rather than\n> twice.\n\nGrumble. I don't like initialization at declare time, so far from the\ncode that depends on the value. But the alternative would have been to\nassign right where this block starts, an additional line. I pushed it\nlike you had it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow! That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n",
"msg_date": "Thu, 28 Mar 2024 16:58:31 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Hello Alvaro,\n\n28.03.2024 18:58, Alvaro Herrera wrote:\n> Grumble. I don't like initialization at declare time, so far from the\n> code that depends on the value. But the alternative would have been to\n> assign right where this blocks starts, an additional line. I pushed it\n> like you had it.\n\nI've stumbled upon a test failure caused by the test query added in that\ncommit:\n--- .../src/test/regress/expected/create_am.out 2024-03-28 12:14:11.700764888 -0400\n+++ .../src/test/recovery/tmp_check/results/create_am.out 2024-03-31 03:10:28.172244122 -0400\n@@ -549,7 +549,10 @@\n ERROR: access method \"btree\" is not of type TABLE\n -- Other weird invalid cases that cause problems\n CREATE FOREIGN TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\n-ERROR: \"pg_am\" is not partitioned\n+ERROR: deadlock detected\n+DETAIL: Process 3076180 waits for AccessShareLock on relation 1259 of database 16386; blocked by process 3076181.\n+Process 3076181 waits for AccessShareLock on relation 2601 of database 16386; blocked by process 3076180.\n+HINT: See server log for query details.\n -- Drop table access method, which fails as objects depends on it\n DROP ACCESS METHOD heap2;\n ERROR: cannot drop access method heap2 because other objects depend on it\n\n027_stream_regress_primary.log contains:\n2024-03-31 03:10:26.728 EDT [3076181] pg_regress/vacuum LOG: statement: VACUUM FULL pg_class;\n...\n2024-03-31 03:10:26.797 EDT [3076180] pg_regress/create_am LOG: statement: CREATE FOREIGN TABLE fp PARTITION OF pg_am \nDEFAULT SERVER x;\n...\n2024-03-31 03:10:28.183 EDT [3076181] pg_regress/vacuum LOG: statement: VACUUM FULL pg_database;\n\nThis simple demo confirms the issue:\nfor ((i=1;i<=20;i++)); do\necho \"iteration $i\"\necho \"VACUUM FULL pg_class;\" | psql >psql-1.log &\necho \"CREATE FOREIGN TABLE fp PARTITION OF pg_am DEFAULT SERVER x;\" | psql >psql-2.log &\nwait\ndone\n\n...\niteration 15\nERROR: \"pg_am\" is not partitioned\niteration 16\nERROR: deadlock 
detected\nDETAIL: Process 2556377 waits for AccessShareLock on relation 1259 of database 16384; blocked by process 2556378.\nProcess 2556378 waits for AccessShareLock on relation 2601 of database 16384; blocked by process 2556377.\nHINT: See server log for query details.\n...\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Sun, 31 Mar 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
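The DETAIL lines above describe a two-edge cycle in the lock manager's waits-for graph: each backend is blocked by a lock the other holds. A minimal, illustrative cycle check over such a graph (this is not the server's actual deadlock detector, just a sketch of the condition it reports):

```python
# Illustrative only: a tiny cycle check over a waits-for graph like the
# one reported above (each key waits for the process it maps to).

def has_deadlock(waits_for):
    """Return True if following the blocked-by edges ever loops back."""
    for start in waits_for:
        seen = set()
        cur = start
        while cur in waits_for:
            if cur in seen:
                return True        # revisited a node: a cycle exists
            seen.add(cur)
            cur = waits_for[cur]
    return False
```

The report above corresponds to `has_deadlock({3076180: 3076181, 3076181: 3076180})`, a genuine cycle; a simple chain of waiters is not one.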
{
"msg_contents": "On Sun, Mar 31, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:\n> Hello Alvaro,\n> \n> 28.03.2024 18:58, Alvaro Herrera wrote:\n> > Grumble. I don't like initialization at declare time, so far from the\n> > code that depends on the value. But the alternative would have been to\n> > assign right where this block starts, an additional line. I pushed it\n> > like you had it.\n> \n> I've stumbled upon a test failure caused by the test query added in that\n> commit:\n> +ERROR:  deadlock detected\n> +DETAIL:  Process 3076180 waits for AccessShareLock on relation 1259 of database 16386; blocked by process 3076181.\n> +Process 3076181 waits for AccessShareLock on relation 2601 of database 16386; blocked by process 3076180.\n\nI think this means that, although it was cute to use pg_am in the reproducer\ngiven in the problem report, it's not a good choice to use here in the\nsql regression tests.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 31 Mar 2024 06:11:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Mar 31, 2024 at 12:00:00PM +0300, Alexander Lakhin wrote:\n>> I've stumbled upon a test failure caused by the test query added in that\n>> commit:\n>> +ERROR: deadlock detected\n>> +DETAIL: Process 3076180 waits for AccessShareLock on relation 1259 of database 16386; blocked by process 3076181.\n>> +Process 3076181 waits for AccessShareLock on relation 2601 of database 16386; blocked by process 3076180.\n\n> I think means that, although it was cute to use pg_am in the reproducer\n> given in the problem report, it's not a good choice to use here in the\n> sql regression tests.\n\nAnother case here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sevengill&dt=2024-04-02%2001%3A32%3A17\n\nAFAICS, e2395cdbe posits that taking exclusive lock on pg_am in the\nmiddle of a bunch of concurrent regression scripts couldn't possibly\ncause any problems. Really?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Apr 2024 01:06:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Tue, Apr 02, 2024 at 01:06:06AM -0400, Tom Lane wrote:\n> AFAICS, e2395cdbe posits that taking exclusive lock on pg_am in the\n> middle of a bunch of concurrent regression scripts couldn't possibly\n> cause any problems. Really?\n\nThere is no need for a catalog here to trigger the failure, and it\nwould have happened as long as a foreign table is used. The problem\nintroduced in 374c7a229042 fixed by e2395cdbe83a comes from a thinko\non my side, my apologies for that and the delay in replying. Thanks\nfor the extra fix done in 13b3b62746ec, Alvaro.\n--\nMichael",
"msg_date": "Mon, 15 Apr 2024 10:46:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Apr 15, 2024 at 10:46:00AM +0900, Michael Paquier wrote:\n> There is no need for a catalog here to trigger the failure, and it\n> would have happened as long as a foreign table is used. The problem\n> introduced in 374c7a229042 fixed by e2395cdbe83a comes from a thinko\n> on my side, my apologies for that and the delay in replying. Thanks\n> for the extra fix done in 13b3b62746ec, Alvaro.\n\nWhile doing more tests with this feature, among other things, I've\nspotted an incorrect behavior with dump/restore with the handling of\nthe GUC default_table_access_method when it comes to partitions.\nImagine the following in database \"a\":\nCREATE TABLE parent_tab (id int) PARTITION BY RANGE (id);\nCREATE TABLE parent_tab_2 (id int) PARTITION BY RANGE (id) USING heap;\nCREATE TABLE parent_tab_3 (id int) PARTITION BY RANGE (id);\n\nThis leads to the following in pg_class:\n=# SELECT relname, relam FROM pg_class WHERE oid > 16000;\n relname | relam \n--------------+-------\n parent_tab | 0\n parent_tab_2 | 2\n parent_tab_3 | 0\n(3 rows)\n\nNow, let's do the following:\n$ createdb b\n$ pg_dump | psql b\n$ psql b\n=# SELECT relname, relam FROM pg_class WHERE oid > 16000;\n relname | relam \n--------------+-------\n parent_tab | 0\n parent_tab_2 | 0\n parent_tab_3 | 0\n(3 rows)\n\nAnd parent_tab_2 would now rely on the default GUC when creating new\npartitions rather than enforce heap.\n\nIt seems to me that we are going to extend the GUC\ndefault_table_access_method with a \"default\" mode to be able to force\nrelam to 0 and make a difference with the non-0 case, in the same way\nas ALTER TABLE SET ACCESS METHOD DEFAULT. The thing is that, like\ntablespaces, we have to rely on a GUC and not a USING clause to be\nable to handle --no-table-access-method.\n\nAn interesting point comes to what we should do for\ndefault_table_access_method set to \"default\" when dealing with\nsomething else than a partitioned table, where an error may be\nadapted. 
Still, I'm wondering if there are more flavors I lack\nimagination for. This requires more careful design work.\n\nPerhaps somebody has a good idea?\n--\nMichael",
"msg_date": "Tue, 16 Apr 2024 14:14:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
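The round trip described above can be modeled in a few lines. This is a sketch under stated assumptions, not pg_dump itself (the dict-based "catalog" and function names are invented for illustration): relam on a partitioned table is effectively tri-state, and a dump that recreates the table without USING and without any follow-up command collapses a pinned AM back to 0.

```python
# Illustrative model of the dump/restore problem above -- not pg_dump.

def create_partitioned(using_am=None):
    # relam = 0 means "resolve default_table_access_method whenever a
    # new partition is created"; a non-zero relam pins the AM.
    return {"relam": using_am if using_am is not None else 0}

def dump_restore_without_using(table):
    # A dump that emits plain CREATE TABLE ... PARTITION BY RANGE, with
    # no USING clause and no ALTER TABLE SET ACCESS METHOD afterwards.
    return create_partitioned(using_am=None)

parent_tab_2 = create_partitioned(using_am=2)        # ... USING heap
restored = dump_restore_without_using(parent_tab_2)
```

Here `restored["relam"]` is 0 even though `parent_tab_2` pinned an AM, which is the silent behavior change: new partitions of the restored table follow the GUC instead of heap.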
{
"msg_contents": "On Tue, Apr 16, 2024 at 02:14:21PM +0900, Michael Paquier wrote:\n> It seems to me that we are going to extend the GUC\n> default_table_access_method with a \"default\" mode to be able to force\n> relam to 0 and make a difference with the non-0 case, in the same way\n> as ALTER TABLE SET ACCESS METHOD DEFAULT. The thing is that, like\n> tablespaces, we have to rely on a GUC and not a USING clause to be\n> able to handle --no-table-access-method.\n> \n> An interesting point comes to what we should do for\n> default_table_access_method set to \"default\" when dealing with\n> something else than a partitioned table, where an error may be\n> adapted. Still, I'm wondering if there are more flavors I lack\n> imagination for. This requires more careful design work.\n> \n> Perhaps somebody has a good idea?\n\nActually, I've come up with an idea just after hitting the send\nbutton: let's use an extra ALTER TABLE SET ACCESS METHOD rather than\nrely on the GUC to set the AM of the partitioned table correctly.\nThis extra command should be optional, depending on\n--no-table-access-method. If a partitioned table has 0 as relam,\nlet's not add this extra ALTER TABLE at all.\n--\nMichael",
"msg_date": "Tue, 16 Apr 2024 14:19:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 02:19:56PM +0900, Michael Paquier wrote:\n> Actually, I've come up with an idea just after hitting the send\n> button: let's use an extra ALTER TABLE SET ACCESS METHOD rather than\n> rely on the GUC to set the AM of the partitioned table correctly.\n> This extra command should be optional, depending on\n> --no-table-access-method. If a partitioned table has 0 as relam,\n> let's not add this extra ALTER TABLE at all.\n\nI have explored this idea, and while this is tempting this faces a\ncouple of challenges:\n1) Binary upgrades would fail because the table rewrite created by\nALTER TABLE SET ACCESS METHOD for relkinds with physical storage\nexpects heap_create_with_catalog to have a fixed OID, but the rewrite\nwould require extra steps to be able to handle that, and I am not\nconvinced that more binary_upgrade_set_next_heap_relfilenode() is a\ngood idea.\n2) We could limit these extra ALTER TABLE commands to be generated for\npartitioned tables. This is kind of confusing as resulting dumps\nwould mix SET commands for default_table_access_method that would\naffect tables with physical storage, while partitioned tables would\nhave their own extra ALTER TABLE commands. Another issue that needs\nmore consideration is that TocEntrys don't hold any relkind\ninformation so pg_backup_archiver.c cannot make a difference with\ntables and partitioned tables to select if SET or ALTER TABLE should\nbe generated.\n\nSeveral designs are possible, like:\n- Mix SET and ALTER TABLE commands in the dumps to set the AM, SET for\ntables and matviews, ALTER TABLE for relations without storage. This\nwould bypass the binary upgrade problem with the fixed relid.\n- Use only SET, requiring a new \"default\" value for\ndefault_table_access_method that would force a partitioned table's\nrelam to be 0. 
Be stricter with the \"current\" table AM tracked in\npg_dump's backup archiver.\n- Use only ALTER TABLE commands, with extra binary upgrade tweaks to\nforce relation OIDs for the second heap_create_with_catalog() done\nwith the rewrite to update a relation's AM.\n\nWith all that in mind, it may be better to revert 374c7a229042 and\ne2395cdbe83a from HEAD and reconsider how to tackle the dump issues in\nv18 or newer versions as all of the approaches I can think of lead to\nmore complications of their own.\n\nPlease see attached a non-polished POC that switches dumps to use\nALTER TABLE, that I've used to detect the upgrade problems.\n\nThoughts or comments are welcome.\n--\nMichael",
"msg_date": "Wed, 17 Apr 2024 14:31:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 2024-Apr-17, Michael Paquier wrote:\n\n> 2) We could limit these extra ALTER TABLE commands to be generated for\n> partitioned tables. This is kind of confusing as resulting dumps\n> would mix SET commands for default_table_access_method that would\n> affect tables with physical storage, while partitioned tables would\n> have their own extra ALTER TABLE commands.\n\nHmm, cannot we simply add a USING clause to the CREATE TABLE command for\npartitioned tables? That would override the\ndefault_table_access_method, so it should give the correct result, no?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 17 Apr 2024 09:40:02 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 2024-Apr-17, Alvaro Herrera wrote:\n\n> Hmm, cannot we simply add a USING clause to the CREATE TABLE command for\n> partitioned tables? That would override the\n> default_table_access_method, so it should give the correct result, no?\n\nAh, upthread you noted that pg_restore-time --no-table-access-method\nneeds to be able to elide it, so a dump-time USING clause doesn't work.\n\nI think it's easy enough to add a \"bool ispartitioned\" to TableInfo and\nuse an ALTER TABLE or rely on the GUC depending on that -- seems easy\nenough.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 17 Apr 2024 09:50:02 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, Apr 17, 2024 at 09:50:02AM +0200, Alvaro Herrera wrote:\n> I think it's easy enough to add a \"bool ispartitioned\" to TableInfo and\n> use an ALTER TABLE or rely on the GUC depending on that -- seems easy\n> enough.\n\nYeah, that would be easy enough to track but I was wondering about\nadding the relkind instead. Still, one thing that I found confusing\nis the dump generated in this case, as it would mix the SET and the\nALTER TABLE commands so one reading the dumps may wonder why the SET\nhas no effect for a CREATE TABLE PARTITION OF without USING. Perhaps\nthat's fine and I just worry too much ;)\n\nThe extra ALTER commands need to be generated after the object\ndefinitions, so we'd need a new subroutine similar to\n_selectTableAccessMethod() like a _selectTableAccessMethodNoStorage().\nOr grouping both together is just simpler?\n--\nMichael",
"msg_date": "Wed, 17 Apr 2024 17:13:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On 2024-Apr-17, Michael Paquier wrote:\n\n> Yeah, that would be easy enough to track but I was wondering about\n> adding the relkind instead. Still, one thing that I found confusing\n> is the dump generated in this case, as it would mix the SET and the\n> ALTER TABLE commands so one reading the dumps may wonder why the SET\n> has no effect for a CREATE TABLE PARTITION OF without USING. Perhaps\n> that's fine and I just worry too much ;)\n\nHmm, maybe we should do a RESET of default_table_access_method before\nprinting the CREATE TABLE to avoid the confusion.\n\n> The extra ALTER commands need to be generated after the object\n> definitions, so we'd need a new subroutine similar to\n> _selectTableAccessMethod() like a _selectTableAccessMethodNoStorage().\n> Or grouping both together is just simpler?\n\nI think there should be two routines, since _select* routines just print\na SET command; maybe the new one would be _printAlterTableAM() or\nsomething like that. Having _select() print an ALTER TABLE command\ndepending on relkind (or the boolean flag) would be confusing, I think.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 17 Apr 2024 10:31:52 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "BTW if nothing else, this thread led me to discover a 18-month-old typo\nin the Spanish translation of pg_dump:\n\n-msgstr \" --no-tablespaces no volcar métodos de acceso de tablas\\n\"\n+msgstr \" --no-table-access-method no volcar métodos de acceso de tablas\\n\"\n\nOops.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Wed, 17 Apr 2024 11:41:00 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Wed, Apr 17, 2024 at 10:31:52AM +0200, Alvaro Herrera wrote:\n> On 2024-Apr-17, Michael Paquier wrote:\n>> Yeah, that would be easy enough to track but I was wondering about\n>> adding the relkind instead. Still, one thing that I found confusing\n>> is the dump generated in this case, as it would mix the SET and the\n>> ALTER TABLE commands so one reading the dumps may wonder why the SET\n>> has no effect for a CREATE TABLE PARTITION OF without USING. Perhaps\n>> that's fine and I just worry too much ;)\n> \n> Hmm, maybe we should do a RESET of default_table_access_method before\n> printing the CREATE TABLE to avoid the confusion.\n\nA hard reset would make the business around currTableAM that decides\nwhen to generate the SET default_table_access_method queries slightly\nmore complicated, while increasing the number of queries run on the\nserver.\n\n>> The extra ALTER commands need to be generated after the object\n>> definitions, so we'd need a new subroutine similar to\n>> _selectTableAccessMethod() like a _selectTableAccessMethodNoStorage().\n>> Or grouping both together is just simpler?\n> \n> I think there should be two routines, since _select* routines just print\n> a SET command; maybe the new one would be _printAlterTableAM() or\n> something like that. Having _select() print an ALTER TABLE command\n> depending on relkind (or the boolean flag) would be confusing, I think.\n\nFine by me to use two routines to generate the two different commands.\nI am finishing with the attached for now, making dumps, restores and\nupgrades work happily as far as I've tested.\n\nI was also worrying about a need to dump the protocol version to be\nable to track the relkind in the toc entries, but a45c78e3284b has\nalready done one. The difference in AM handling between relations\nwithout storage and relations with storage pushes the relkind logic\nmore within the internals of pg_backup_archiver.c.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Thu, 18 Apr 2024 09:42:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
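The restore-side rule being converged on above can be summarized as a small dispatch, sketched here with invented names (the real logic lives in pg_dump's pg_backup_archiver.c): relations with storage keep using SET default_table_access_method, partitioned tables with a pinned AM get an ALTER TABLE ... SET ACCESS METHOD after their definition, relam = 0 emits nothing, and --no-table-access-method suppresses both.

```python
# Rough sketch only; names are illustrative, not pg_dump internals.

def am_commands(relkind, relname, am_name, no_table_access_method=False):
    """Commands a restore would emit so relname ends up on am_name."""
    if no_table_access_method or am_name is None:
        return []                      # relam = 0, or AMs suppressed
    if relkind == 'p':                 # partitioned: no storage to rewrite
        return [f"ALTER TABLE {relname} SET ACCESS METHOD {am_name};"]
    return [f"SET default_table_access_method = {am_name};"]
```

The relkind tracked in the TOC entries is what lets the archiver pick between the two command forms per object.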
{
"msg_contents": "On 2024-Apr-18, Michael Paquier wrote:\n\n> On Wed, Apr 17, 2024 at 10:31:52AM +0200, Alvaro Herrera wrote:\n\n> > Hmm, maybe we should do a RESET of default_table_access_method before\n> > printing the CREATE TABLE to avoid the confusion.\n> \n> A hard reset would make the business around currTableAM that decides\n> when to generate the SET default_table_access_method queries slightly\n> more complicated, while increasing the number of queries run on the\n> server.\n\nHmm, okay. (I don't think we really care too much about the number of\nqueries, do we?)\n\n> Fine by me to use two routines to generate the two different commands.\n> I am finishing with the attached for now, making dumps, restores and\n> upgrades work happily as far as I've tested.\n\nGreat.\n\n> I was also worrying about a need to dump the protocol version to be\n> able to track the relkind in the toc entries, but a45c78e3284b has\n> already done one. The difference in AM handling between relations\n> without storage and relations with storage pushes the relkind logic\n> more within the internals of pg_backup_archiver.c.\n\nHmm, does this mean that every dump taken since a45c78e3284b (April\n1st) and before this commit will be unrestorable? This doesn't worry me\ntoo much, because we aren't even in beta yet ... 
and I think we don't\nhave a strict policy about it.\n\n> --- a/src/bin/pg_dump/t/002_pg_dump.pl\n> +++ b/src/bin/pg_dump/t/002_pg_dump.pl\n> @@ -4591,11 +4591,9 @@ my %tests = (\n> \t\t\tCREATE TABLE dump_test.regress_pg_dump_table_am_child_2\n> \t\t\t PARTITION OF dump_test.regress_pg_dump_table_am_parent FOR VALUES IN (2);',\n> \t\tregexp => qr/^\n> -\t\t\t\\QSET default_table_access_method = regress_table_am;\\E\n> -\t\t\t(\\n(?!SET[^;]+;)[^\\n]*)*\n> -\t\t\t\\n\\QCREATE TABLE dump_test.regress_pg_dump_table_am_parent (\\E\n> -\t\t\t(.*\\n)*\n> \t\t\t\\QSET default_table_access_method = heap;\\E\n> +\t\t\t(.*\\n)*\n> +\t\t\t\\QALTER TABLE dump_test.regress_pg_dump_table_am_parent SET ACCESS METHOD regress_table_am;\\E\n> \t\t\t(\\n(?!SET[^;]+;)[^\\n]*)*\n> \t\t\t\\n\\QCREATE TABLE dump_test.regress_pg_dump_table_am_child_1 (\\E\n> \t\t\t(.*\\n)*\n\nThis looks strange -- why did you remove matching for the CREATE TABLE\nof the parent table? That line should appear shortly before the ALTER\nTABLE SET ACCESS METHOD for the same table, shouldn't it? Maybe your\nintention was to remove only the SET default_table_access_method\n= regress_table_am line ... but it's not clear to me why we have the\n\"SET default_table_access_method = heap\" line before the ALTER TABLE SET\nACCESS METHOD.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If it is not right, do not do it.\nIf it is not true, do not say it.\" (Marcus Aurelius, Meditations)\n\n\n",
"msg_date": "Thu, 18 Apr 2024 18:17:56 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Apr 18, 2024 at 06:17:56PM +0200, Alvaro Herrera wrote:\n> On 2024-Apr-18, Michael Paquier wrote:\n>> I was also worrying about a need to dump the protocol version to be\n>> able to track the relkind in the toc entries, but a45c78e3284b has\n>> already done one. The difference in AM handling between relations\n>> without storage and relations with storage pushes the relkind logic\n>> more within the internals of pg_backup_archiver.c.\n> \n> Hmm, does this mean that every dump taken since a45c78e3284b (April\n> 1st) and before this commit will be unrestorable? This doesn't worry me\n> too much, because we aren't even in beta yet ... and I think we don't\n> have a strict policy about it.\n\nI've been scanning the history of K_VERS_1_* in the recent years, and\nit does not seem that we have a case where we would have needed to\nbump the version twice in the same release cycle. Anyway, yes, any\ndump taken since 1_16 was bumped would fail to restore with this\npatch in place. For an unreleased not-yet-in-beta branch, why should\nwe care? Things are not set in stone, like extensions. 
If others\nhave comments about this point, feel free of course.\n\n>> --- a/src/bin/pg_dump/t/002_pg_dump.pl\n>> +++ b/src/bin/pg_dump/t/002_pg_dump.pl\n>> @@ -4591,11 +4591,9 @@ my %tests = (\n>> \t\t\tCREATE TABLE dump_test.regress_pg_dump_table_am_child_2\n>> \t\t\t PARTITION OF dump_test.regress_pg_dump_table_am_parent FOR VALUES IN (2);',\n>> \t\tregexp => qr/^\n>> -\t\t\t\\QSET default_table_access_method = regress_table_am;\\E\n>> -\t\t\t(\\n(?!SET[^;]+;)[^\\n]*)*\n>> -\t\t\t\\n\\QCREATE TABLE dump_test.regress_pg_dump_table_am_parent (\\E\n>> -\t\t\t(.*\\n)*\n>> \t\t\t\\QSET default_table_access_method = heap;\\E\n>> +\t\t\t(.*\\n)*\n>> +\t\t\t\\QALTER TABLE dump_test.regress_pg_dump_table_am_parent SET ACCESS METHOD regress_table_am;\\E\n>> \t\t\t(\\n(?!SET[^;]+;)[^\\n]*)*\n>> \t\t\t\\n\\QCREATE TABLE dump_test.regress_pg_dump_table_am_child_1 (\\E\n>> \t\t\t(.*\\n)*\n> \n> This looks strange -- why did you remove matching for the CREATE TABLE\n> of the parent table? That line should appear shortly before the ALTER\n> TABLE SET ACCESS METHOD for the same table, shouldn't it?\n\nYeah, with the ALTER in place that did not seem that mandatory but I\ndon't mind keeping it, as well.\n\n> Maybe your\n> intention was to remove only the SET default_table_access_method\n> = regress_table_am line ... but it's not clear to me why we have the\n> \"SET default_table_access_method = heap\" line before the ALTER TABLE SET\n> ACCESS METHOD.\n\nThis comes from the contents of the dump for\nregress_pg_dump_table_am_2, that uses heap as table AM. A SET is\nissued for it before dumping regress_pg_dump_table_am_parent and its\npartitions. One trick that I can think of to make the output parsing\nof the test more palatable is to switch the AMs used by the two\npartitions, so as we finish with two SET queries before each partition\nrather than one before the partitioned table. See the attached for\nthe idea.\n--\nMichael",
"msg_date": "Fri, 19 Apr 2024 10:41:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Fri, Apr 19, 2024 at 10:41:30AM +0900, Michael Paquier wrote:\n> This comes from the contents of the dump for\n> regress_pg_dump_table_am_2, that uses heap as table AM. A SET is\n> issued for it before dumping regress_pg_dump_table_am_parent and its\n> partitions. One trick that I can think of to make the output parsing\n> of the test more palatable is to switch the AMs used by the two\n> partitions, so as we finish with two SET queries before each partition\n> rather than one before the partitioned table. See the attached for\n> the idea.\n\nFYI, as this is an open item, I am planning to wrap that at the\nbeginning of next week after a second lookup. If there are any\ncomments, feel free.\n--\nMichael",
"msg_date": "Sat, 20 Apr 2024 11:55:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Sat, Apr 20, 2024 at 11:55:46AM +0900, Michael Paquier wrote:\n> FYI, as this is an open item, I am planning to wrap that at the\n> beginning of next week after a second lookup. If there are any\n> comments, feel free.\n\nThis one is now fixed with f46bee346c3b, and the open item is marked\nas fixed.\n--\nMichael",
"msg_date": "Mon, 22 Apr 2024 15:20:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "It occurred to me that psql \\dP+ should show the AM of partitioned\ntables (and other partitioned rels).\nArguably, this could've been done when \\dP was introduced in v12, but\nat that point would've shown the AM only for partitioned indexes.\nBut it makes a lot of sense to do it now that partitioned tables support\nAMs. I suggest to consider this for v17.\n\nregression=# \\dP+\n List of partitioned relations\n Schema | Name | Owner | Type | Table | Access method | Total size | Description\n--------+----------------------+---------+-------------------+----------------+---------------+------------+-------------\n public | mlparted | pryzbyj | partitioned table | | heap2 | 104 kB |\n public | tableam_parted_heap2 | pryzbyj | partitioned table | | | 32 kB |\n public | trigger_parted | pryzbyj | partitioned table | | | 0 bytes |\n public | upsert_test | pryzbyj | partitioned table | | | 8192 bytes |\n public | trigger_parted_pkey | pryzbyj | partitioned index | trigger_parted | btree | 16 kB |\n public | upsert_test_pkey | pryzbyj | partitioned index | upsert_test | btree | 8192 bytes |\n---\n src/bin/psql/describe.c | 13 ++++++++++++-\n 1 file changed, 12 insertions(+), 1 deletion(-)\n\ndiff --git a/src/bin/psql/describe.c b/src/bin/psql/describe.c\nindex f67bf0b8925..22a668409e7 100644\n--- a/src/bin/psql/describe.c\n+++ b/src/bin/psql/describe.c\n@@ -4113,7 +4113,7 @@ listPartitionedTables(const char *reltypes, const char *pattern, bool verbose)\n \tPQExpBufferData title;\n \tPGresult *res;\n \tprintQueryOpt myopt = pset.popt;\n-\tbool\t\ttranslate_columns[] = {false, false, false, false, false, false, false, false, false};\n+\tbool\t\ttranslate_columns[] = {false, false, false, false, false, false, false, false, false, false};\n \tconst char *tabletitle;\n \tbool\t\tmixed_output = false;\n \n@@ -4181,6 +4181,14 @@ listPartitionedTables(const char *reltypes, const char *pattern, bool verbose)\n \n \tif (verbose)\n \t{\n+\t\t/*\n+\t\t * Table 
access methods were introduced in v12, and can be set on\n+\t\t * partitioned tables since v17.\n+\t\t */\n+\t\tappendPQExpBuffer(&buf,\n+\t\t\t\t\t\t \",\\n am.amname as \\\"%s\\\"\",\n+\t\t\t\t\t\t gettext_noop(\"Access method\"));\n+\n \t\tif (showNested)\n \t\t{\n \t\t\tappendPQExpBuffer(&buf,\n@@ -4216,6 +4224,9 @@ listPartitionedTables(const char *reltypes, const char *pattern, bool verbose)\n \n \tif (verbose)\n \t{\n+\t\tappendPQExpBufferStr(&buf,\n+\t\t\t\t\t\t\t \"\\n LEFT JOIN pg_catalog.pg_am am ON c.relam = am.oid\");\n+\n \t\tif (pset.sversion < 120000)\n \t\t{\n \t\t\tappendPQExpBufferStr(&buf,\n-- \n2.42.0\n\n\n\n",
"msg_date": "Tue, 21 May 2024 08:33:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Tue, May 21, 2024 at 08:33:51AM -0500, Justin Pryzby wrote:\n> It occurred to me that psql \\dP+ should show the AM of partitioned\n> tables (and other partitioned rels).\n\nping\n\n\n",
"msg_date": "Tue, 4 Jun 2024 07:49:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Tue, May 21, 2024 at 08:33:51AM -0500, Justin Pryzby wrote:\n> It occurred to me that psql \\dP+ should show the AM of partitioned\n> tables (and other partitioned rels).\n> Arguably, this could've been done when \\dP was introduced in v12, but\n> at that point would've shown the AM only for partitioned indexes.\n> But it makes a lot of sense to do it now that partitioned tables support\n> AMs. I suggest to consider this for v17.\n\nNot sure that this is a must-have. It is nice to have, but extra\ninformation is a new feature IMO. Any extra opinions?\n\nI would suggest to attach a patch, that makes review easier. And so\nhere is one.\n--\nMichael",
"msg_date": "Thu, 6 Jun 2024 09:43:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Thu, Jun 06, 2024 at 09:43:45AM +0900, Michael Paquier wrote:\n> Not sure that this is a must-have. It is nice to have, but extra\n> information is a new feature IMO. Any extra opinions?\n\nHearing nothing, I've applied that on HEAD now that v18 is open.\n--\nMichael",
"msg_date": "Tue, 2 Jul 2024 09:23:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Jun 06, 2024 at 09:43:45AM +0900, Michael Paquier wrote:\n>> Not sure that this is a must-have. It is nice to have, but extra\n>> information is a new feature IMO. Any extra opinions?\n\n> Hearing nothing, I've applied that on HEAD now that v18 is open.\n\nWhile this won't actually fail against a v10 or v11 server, it won't\nshow anything useful either (because relam is zero for heaps in\npre-v12 versions). Perhaps there should be a check to only add the\nextra column if server >= v12?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Jul 2024 20:40:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "On Mon, Jul 01, 2024 at 08:40:17PM -0400, Tom Lane wrote:\n> While this won't actually fail against a v10 or v11 server, it won't\n> show anything useful either (because relam is zero for heaps in\n> pre-v12 versions). Perhaps there should be a check to only add the\n> extra column if server >= v12?\n\nI've thought about that and would be OK to restrict things with this\nsuggestion if you'd prefer it. However, I could not decide in favor\nof it as using a psql \\dP+ >= v18 gives the possibility to show the\nAMs of partitioned indexes, as these are also part listed in \\dP. So\nI've found that useful in itself.\n--\nMichael",
"msg_date": "Tue, 2 Jul 2024 10:18:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Jul 01, 2024 at 08:40:17PM -0400, Tom Lane wrote:\n>> While this won't actually fail against a v10 or v11 server, it won't\n>> show anything useful either (because relam is zero for heaps in\n>> pre-v12 versions). Perhaps there should be a check to only add the\n>> extra column if server >= v12?\n\n> I've thought about that and would be OK to restrict things with this\n> suggestion if you'd prefer it. However, I could not decide in favor\n> of it as using a psql \\dP+ >= v18 gives the possibility to show the\n> AMs of partitioned indexes, as these are also part listed in \\dP. So\n> I've found that useful in itself.\n\nAh, I'd forgotten that partitioned indexes are shown too.\nNever mind then.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Jul 2024 21:29:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
}
] |
[
{
"msg_contents": "funcs.sgml has\n\n 42 <@ '{[1,7)}'::int4multirange\n\nand calls it true. The attached fixes that.\n\nIncluded are two more changes where actual output differs a bit from \nwhat the doc examples show.\n\nErik",
"msg_date": "Wed, 18 May 2022 03:08:32 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "funcs.sgml - wrong example"
},
{
"msg_contents": "At Wed, 18 May 2022 03:08:32 +0200, Erik Rijkers <er@xs4all.nl> wrote in \n> funcs.sgml has\n> \n> 42 <@ '{[1,7)}'::int4multirange\n> \n> and calls it true. The attached fixes that.\n> \n> Included are two more changes where actual output differs a bit from\n> what the doc examples show.\n\nA bit off-topic and just out of curiocity, is there a reason other\nthan speed (and history?) for that we won't truncate trailing zeros in\nthe output of log(b,n)?\n\nSince we have get_min_scale since 13, for example, with the following\ntweak, we get 6.0 for log(2.0, 64.0), which looks nicer.\n\n\n@@ -10300,6 +10300,8 @@ log_var(const NumericVar *base, const NumericVar *num, NumericVar *result)\n \t/* Divide and round to the required scale */\n \tdiv_var_fast(&ln_num, &ln_base, result, rscale, true);\n \n+\tresult->dscale = Max(get_min_scale(result), base->dscale);\n+\tresult->dscale = Max(result->dscale, num->dscale);\n \tfree_var(&ln_num);\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 18 May 2022 11:11:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: funcs.sgml - wrong example"
},
{
"msg_contents": "At Wed, 18 May 2022 11:11:02 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 18 May 2022 03:08:32 +0200, Erik Rijkers <er@xs4all.nl> wrote in \n> > funcs.sgml has\n> > \n> > 42 <@ '{[1,7)}'::int4multirange\n> > \n> > and calls it true. The attached fixes that.\n> > \n> > Included are two more changes where actual output differs a bit from\n> > what the doc examples show.\n\nForgot to mention, the all changes look good. The log(b,n) has 16\ntrailing digits at least since 9.6.\n\n> A bit off-topic and just out of curiocity, is there a reason other\n> than speed (and history?) for that we won't truncate trailing zeros in\n> the output of log(b,n)?\n\nHmm. A bit wrong. I meant that, if we can allow some additional cycles\nand we don't stick to the past behavior of the function, we could have\na nicer result.\n\n> Since we have get_min_scale since 13, for example, with the following\n> tweak, we get 6.0 for log(2.0, 64.0), which looks nicer.\n> \n> \n> @@ -10300,6 +10300,8 @@ log_var(const NumericVar *base, const NumericVar *num, NumericVar *result)\n> \t/* Divide and round to the required scale */\n> \tdiv_var_fast(&ln_num, &ln_base, result, rscale, true);\n> \n> +\tresult->dscale = Max(get_min_scale(result), base->dscale);\n> +\tresult->dscale = Max(result->dscale, num->dscale);\n> \tfree_var(&ln_num);\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 18 May 2022 11:19:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: funcs.sgml - wrong example"
},
{
"msg_contents": "On Wed, May 18, 2022 at 03:08:32AM +0200, Erik Rijkers wrote:\n> funcs.sgml has\n> \n> 42 <@ '{[1,7)}'::int4multirange\n> \n> and calls it true. The attached fixes that.\n> \n> Included are two more changes where actual output differs a bit from what\n> the doc examples show.\n\nThis patch is RFC but seems to have been forgotten.\nFeel free to add it to the next CF if nobody applies it.\n\nNote that I needed to apply it with use git am -p0 - I think it was created\nwith vanilla diff.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Jun 2022 07:10:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: funcs.sgml - wrong example"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, May 18, 2022 at 03:08:32AM +0200, Erik Rijkers wrote:\n>> funcs.sgml has\n>> 42 <@ '{[1,7)}'::int4multirange\n>> and calls it true. The attached fixes that.\n>> \n>> Included are two more changes where actual output differs a bit from what\n>> the doc examples show.\n\n> This patch is RFC but seems to have been forgotten.\n> Feel free to add it to the next CF if nobody applies it.\n\nPushed now. I modified that example to 4 <@ '{[1,7)}'::int4multirange\nso that it would indeed return true, which is our usual style.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 10:41:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: funcs.sgml - wrong example"
}
] |
[
{
"msg_contents": "Hi pg-hackers,\n\nI have written a postgres extension, and know that memory leak check can be\ndone with valgrind. With the help of postgres_valgrind_wiki\n<https://wiki.postgresql.org/wiki/Valgrind> started postgres server with\nthe valgrind(as given in the wiki)\n\nvalgrind --leak-check=no --gen-suppressions=all \\\n --suppressions=src/tools/valgrind.supp --time-stamp=yes \\\n --error-markers=VALGRINDERROR-BEGIN,VALGRIND ERROR-END \\\n --log-file=$HOME/pg-valgrind/%p.log --trace-children=yes \\\n postgres --log_line_prefix=\"%m %p \" \\\n --log_statement=all --shared_buffers=64MB 2>&1 | tee\n$HOME/pg-valgrind/postmaster.log\n\nI have few doubts in here,\n\n1. When I run with *--leak-check=full*, I get memory leaks for postgres\nfunctions under possibly or definitely lost categories.. Is this expected?\nIf yes, how shall i ignore it?(by creating .supp?).. kindly suggest\n2. Is there any other way to test my extension memory leaks alone, because\ncombining with postgres leaks is making instrumentation complex?..\n3. I have seen some macros for valgrind support within postgres source code\nunder utils/memdebug.h, but couldn't get complete idea of using it from the\ncomments in pg_config_manual.h under *USE_VALGRIND *macro, pls provide some\nguidance here..\n\nThank you,\nNatarajan R",
"msg_date": "Wed, 18 May 2022 10:13:04 +0530",
"msg_from": "Natarajan R <nataraj3098@gmail.com>",
"msg_from_op": true,
"msg_subject": "Valgrind mem-check for postgres extension"
},
{
"msg_contents": "Natarajan R <nataraj3098@gmail.com> writes:\n> I have few doubts in here,\n\n> 1. When I run with *--leak-check=full*, I get memory leaks for postgres\n> functions under possibly or definitely lost categories.. Is this expected?\n\nMaybe ... you did not show your test case, so it's hard to say. But it\ncould well be that this is an artifact of failing to define USE_VALGRIND.\n\n> 2. Is there any other way to test my extension memory leaks alone, because\n> combining with postgres leaks is making instrumentation complex?..\n\nNo, not really.\n\n> 3. I have seen some macros for valgrind support within postgres source code\n> under utils/memdebug.h, but couldn't get complete idea of using it from the\n> comments in pg_config_manual.h under *USE_VALGRIND *macro, pls provide some\n> guidance here..\n\nIf you didn't build the core code with USE_VALGRIND defined, then none of\nthis stuff is going to work ideally.\n\nThe way I like to do it is to run configure, and then manually add\n\"#define USE_VALGRIND\" to the generated src/include/pg_config.h\nfile before invoking \"make\". Probably other people have different\nhabits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 May 2022 01:12:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Valgrind mem-check for postgres extension"
},
{
"msg_contents": "\nOn 2022-05-18 We 01:12, Tom Lane wrote:\n> Natarajan R <nataraj3098@gmail.com> writes:\n>> I have few doubts in here,\n>> 1. When I run with *--leak-check=full*, I get memory leaks for postgres\n>> functions under possibly or definitely lost categories.. Is this expected?\n> Maybe ... you did not show your test case, so it's hard to say. But it\n> could well be that this is an artifact of failing to define USE_VALGRIND.\n>\n>> 2. Is there any other way to test my extension memory leaks alone, because\n>> combining with postgres leaks is making instrumentation complex?..\n> No, not really.\n>\n>> 3. I have seen some macros for valgrind support within postgres source code\n>> under utils/memdebug.h, but couldn't get complete idea of using it from the\n>> comments in pg_config_manual.h under *USE_VALGRIND *macro, pls provide some\n>> guidance here..\n> If you didn't build the core code with USE_VALGRIND defined, then none of\n> this stuff is going to work ideally.\n>\n> The way I like to do it is to run configure, and then manually add\n> \"#define USE_VALGRIND\" to the generated src/include/pg_config.h\n> file before invoking \"make\". Probably other people have different\n> habits.\n\n\nThe standard buildfarm config uses these for valgrind builds:\n\n CFLAGS => \"-fno-omit-frame-pointer -O0 -fPIC\",\n CPPFLAGS => \"-DUSE_VALGRIND -DRELCACHE_FORCE_RELEASE\",\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 18 May 2022 12:08:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Valgrind mem-check for postgres extension"
},
{
"msg_contents": ">> I have few doubts in here,\n>> 1. When I run with *--leak-check=full*, I get memory leaks for postgres\n>> functions under possibly or definitely lost categories.. Is this\nexpected?\n\n> Maybe ... you did not show your test case, so it's hard to say. But it\n> could well be that this is an artifact of failing to define USE_VALGRIND\n\nI did not run any test, just simply started the plain postgres(my extension\nnot installed) with valgrind(command given below)..\n\n> *valgrind --leak-check=full --suppressions=src/tools/valgrind.supp\n> --time-stamp=yes --error-markers=VALGRINDERROR-BEGIN,VALGRINDERROR-END\n> --trace-children=yes --log-file=pg_valgrind/pg-valgrind_%p.log postgres -D\n> data*\n\n\n>> 3. I have seen some macros for valgrind support within postgres source\ncode\n>> under utils/memdebug.h, but couldn't get complete idea of using it from\nthe\n>> comments in pg_config_manual.h under *USE_VALGRIND *macro, pls provide\nsome\n>> guidance here..\n\n> If you didn't build the core code with USE_VALGRIND defined, then none of\n> this stuff is going to work ideally.\n\n> The way I like to do it is to run configure, and then manually add\n> \"#define USE_VALGRIND\" to the generated src/include/pg_config.h\n> file before invoking \"make\". Probably other people have different\n> habits.\n\nI tried the things like you said\n1. Pg configured already, so #define USE_VALGRIND in src/include/pg_config.h\n2. make, make install\n3. started pg with valgrind(same command mentioned above)..\nNow, memory leaks are being reported for palloc, malloc, MemoryContextAlloc\nfunctions..(sample image from a valgrind report for a process attached).\nI couldn't specify from which process, this was generated.. 
as the valgrind\nprovide pid alone..\n\n[image: image.png]\n\nIn an overview, I am trying to test a memory leak for my extension, correct\nme if i am doing it the wrong way..\n\nRegards,\nNatarajan R\n\n\nOn Wed, 18 May 2022 at 21:38, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2022-05-18 We 01:12, Tom Lane wrote:\n> > Natarajan R <nataraj3098@gmail.com> writes:\n> >> I have few doubts in here,\n> >> 1. When I run with *--leak-check=full*, I get memory leaks for postgres\n> >> functions under possibly or definitely lost categories.. Is this\n> expected?\n> > Maybe ... you did not show your test case, so it's hard to say. But it\n> > could well be that this is an artifact of failing to define USE_VALGRIND.\n> >\n> >> 2. Is there any other way to test my extension memory leaks alone,\n> because\n> >> combining with postgres leaks is making instrumentation complex?..\n> > No, not really.\n> >\n> >> 3. I have seen some macros for valgrind support within postgres source\n> code\n> >> under utils/memdebug.h, but couldn't get complete idea of using it from\n> the\n> >> comments in pg_config_manual.h under *USE_VALGRIND *macro, pls provide\n> some\n> >> guidance here..\n> > If you didn't build the core code with USE_VALGRIND defined, then none of\n> > this stuff is going to work ideally.\n> >\n> > The way I like to do it is to run configure, and then manually add\n> > \"#define USE_VALGRIND\" to the generated src/include/pg_config.h\n> > file before invoking \"make\". Probably other people have different\n> > habits.\n>\n>\n> The standard buildfarm config uses these for valgrind builds:\n>\n> CFLAGS => \"-fno-omit-frame-pointer -O0 -fPIC\",\n> CPPFLAGS => \"-DUSE_VALGRIND -DRELCACHE_FORCE_RELEASE\",\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Fri, 20 May 2022 09:44:07 +0530",
"msg_from": "Natarajan R <nataraj3098@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Valgrind mem-check for postgres extension"
}
] |
[
{
"msg_contents": "Hi,\n\n+ tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));\n+ accessMethodId = ((Form_pg_class) GETSTRUCT(tup))->relam;\n\n- /* look up the access method, verify it is for a table */\n- if (accessMethod != NULL)\n- accessMethodId = get_table_am_oid(accessMethod, false);\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for relation %u\", relid);\n\nShouldn't the validity of tup be checked before relam field is accessed ?\n\nCheers",
"msg_date": "Wed, 18 May 2022 16:02:51 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER TABLE SET ACCESS METHOD on partitioned tables"
}
] |
[
{
"msg_contents": "Hi all,\n(Added Andrew in CC.)\n\nWhile working more on expanding the tests of pg_upgrade for\ncross-version checks, I have noticed that we don't expose a routine\nable to get back _pg_version from a node, which should remain a\nprivate field of Cluster.pm. We already do that for install_path, as\nof case added by 87076c4.\n\nAny objections or comments about the addition of a routine to get the\nPostgreSQL::Version, as of the attached?\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 19 May 2022 10:38:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Addition of PostgreSQL::Test::Cluster::pg_version()"
},
{
"msg_contents": "> On 19 May 2022, at 03:38, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Any objections or comments about the addition of a routine to get the\n> PostgreSQL::Version, as of the attached?\n\nI haven't tested the patch, but +1 on the idea.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 19 May 2022 09:03:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Addition of PostgreSQL::Test::Cluster::pg_version()"
},
{
"msg_contents": "\nOn 2022-05-18 We 21:38, Michael Paquier wrote:\n> Hi all,\n> (Added Andrew in CC.)\n>\n> While working more on expanding the tests of pg_upgrade for\n> cross-version checks, I have noticed that we don't expose a routine\n> able to get back _pg_version from a node, which should remain a\n> private field of Cluster.pm. We already do that for install_path, as\n> of case added by 87076c4.\n>\n> Any objections or comments about the addition of a routine to get the\n> PostgreSQL::Version, as of the attached?\n>\n\n\nLooks ok. PostgreSQL::Version is designed so that the object behaves\nsanely in comparisons and when interpolated into a string.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 19 May 2022 07:28:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Addition of PostgreSQL::Test::Cluster::pg_version()"
},
{
"msg_contents": "On Thu, May 19, 2022 at 07:28:53AM -0400, Andrew Dunstan wrote:\n> Looks ok. PostgreSQL::Version is designed so that the object behaves\n> sanely in comparisons and when interpolated into a string.\n\nI saw that, and that's kind of nice when it comes to write\nversion-specific code paths in the tests ;) \n--\nMichael",
"msg_date": "Fri, 20 May 2022 09:54:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Addition of PostgreSQL::Test::Cluster::pg_version()"
},
{
"msg_contents": "On Thu, May 19, 2022 at 07:28:53AM -0400, Andrew Dunstan wrote:\n> Looks ok. PostgreSQL::Version is designed so that the object behaves\n> sanely in comparisons and when interpolated into a string.\n\nOkay, I have applied this thing. I'll move back to my business with\nthe tests of pg_upgrade...\n--\nMichael",
"msg_date": "Fri, 20 May 2022 19:51:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Addition of PostgreSQL::Test::Cluster::pg_version()"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nFYI, I saw that there was a recent Build-farm error on the \"grison\" machine [1]\n[1] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grison&br=HEAD\n\nThe error happened during \"subscriptionCheck\" phase in the TAP test\nt/031_column_list.pl\nThis test file was added by this [2] commit.\n[2] https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5\n\n~~\n\nI checked the history of fails for that TAP test t/031_column_list.pl\nand found that this same error seems to have been happening\nintermittently for at least the last 50 days.\n\nDetails of similar previous errors from the BF are listed below.\n\n~~~\n\n1. Details for system \"grison\" failure at stage subscriptionCheck,\nsnapshot taken 2022-05-18 18:11:45\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2022-05-18%2018%3A11%3A45\n\n[22:02:08] t/029_on_error.pl .................. ok 25475 ms ( 0.01\nusr 0.00 sys + 15.39 cusr 5.59 csys = 20.99 CPU)\n# poll_query_until timed out executing this query:\n# SELECT '0/1530588' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 22.\n[22:09:25] t/031_column_list.pl ...............\n...\n[22:02:47.887](1.829s) ok 22 - partitions with different replica\nidentities not replicated correctly Waiting for replication conn\nsub1's replay_lsn to pass 0/1530588 on publisher\n[22:09:25.395](397.508s) # poll_query_until timed out executing this query:\n# SELECT '0/1530588' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\ntimed out waiting for catchup at 
t/031_column_list.pl line 728.\n### Stopping node \"publisher\" using mode immediate\n\n~~~\n\n2. Details for system \"xenodermus\" failure at stage subscriptionCheck,\nsnapshot taken 2022-04-16 21:00:04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2022-04-16%2021%3A00%3A04\n\n[00:15:32] t/029_on_error.pl .................. ok 8278 ms ( 0.00\nusr 0.00 sys + 1.33 cusr 0.55 csys = 1.88 CPU)\n# poll_query_until timed out executing this query:\n# SELECT '0/1543648' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 25.\n[00:22:30] t/031_column_list.pl ...............\n...\n[00:16:04.100](0.901s) ok 25 - partitions with different replica\nidentities not replicated correctly Waiting for replication conn\nsub1's replay_lsn to pass 0/1543648 on publisher\n[00:22:29.923](385.823s) # poll_query_until timed out executing this query:\n# SELECT '0/1543648' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\ntimed out waiting for catchup at t/031_column_list.pl line 818.\n\n~~~\n\n3. 
Details for system \"phycodurus\" failure at stage subscriptionCheck,\nsnapshot taken 2022-04-05 17:30:04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-04-05%2017%3A30%3A04\n\n# poll_query_until timed out executing this query:\n# SELECT '0/1528640' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 22.\n[20:50:25] t/031_column_list.pl ...............\n...\nok 22 - partitions with different replica identities not replicated\ncorrectly Waiting for replication conn sub1's replay_lsn to pass\n0/1528640 on publisher # poll_query_until timed out executing this\nquery:\n# SELECT '0/1528640' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\ntimed out waiting for catchup at t/031_column_list.pl line 667.\n\n~~~\n\n4. Details for system \"phycodurus\" failure at stage subscriptionCheck,\nsnapshot taken 2022-04-05 17:30:04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-04-05%2017%3A30%3A04\n\n[20:43:04] t/030_sequences.pl ................. ok 11108 ms ( 0.00\nusr 0.00 sys + 1.49 cusr 0.40 csys = 1.89 CPU)\n# poll_query_until timed out executing this query:\n# SELECT '0/1528640' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 22.\n[20:50:25] t/031_column_list.pl ...............\n...\nok 22 - partitions with different replica identities not replicated\ncorrectly Waiting for replication conn sub1's replay_lsn to pass\n0/1528640 on publisher # poll_query_until timed out executing this\nquery:\n# SELECT '0/1528640' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\ntimed out waiting for catchup at t/031_column_list.pl line 667.\n\n~~~\n\n5. Details for system \"grison\" failure at stage subscriptionCheck,\nsnapshot taken 2022-04-03 18:11:39\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2022-04-03%2018%3A11%3A39\n\n[22:28:00] t/030_sequences.pl ................. ok 22970 ms ( 0.01\nusr 0.00 sys + 14.93 cusr 5.14 csys = 20.08 CPU)\n# poll_query_until timed out executing this query:\n# SELECT '0/1528CF0' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 22.\n[22:35:11] t/031_column_list.pl ...............\n....\nok 22 - partitions with different replica identities not replicated\ncorrectly Waiting for replication conn sub1's replay_lsn to pass\n0/1528CF0 on publisher # poll_query_until timed out executing this\nquery:\n# SELECT '0/1528CF0' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('sub1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\ntimed out waiting for catchup at t/031_column_list.pl line 667.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 May 2022 14:26:56 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Build-farm - intermittent error in 031_column_list.pl"
},
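Every failure above comes from the TAP harness's `poll_query_until` helper giving up after a fixed timeout. For readers unfamiliar with that helper, here is a minimal Python sketch of the same poll-until-timeout pattern; the `run_query` callable is a hypothetical stand-in for issuing the SQL, not the actual Perl harness code:

```python
import time

def poll_until(run_query, expected, timeout=180.0, interval=0.1):
    """Poll run_query() until it returns `expected` or `timeout` elapses.

    Returns (succeeded, last_output); on timeout the caller can report
    the last actual output, as the build-farm logs above do.
    """
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = run_query()
        if last == expected:
            return True, last
        time.sleep(interval)
    return False, last

# A query whose answer never becomes 't' ends with an empty last output,
# matching the blank "last actual query output" line in the failures above.
ok, last = poll_until(lambda: "", "t", timeout=0.3, interval=0.05)
```

In the real tests the `replay_lsn` query flips to `t` once the walsender catches up; in the failing runs it never does, so the helper reports the timeout seen in the logs.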
{
"msg_contents": "At Thu, 19 May 2022 14:26:56 +1000, Peter Smith <smithpb2250@gmail.com> wrote in \n> Hi hackers.\n> \n> FYI, I saw that there was a recent Build-farm error on the \"grison\" machine [1]\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grison&br=HEAD\n> \n> The error happened during \"subscriptionCheck\" phase in the TAP test\n> t/031_column_list.pl\n> This test file was added by this [2] commit.\n> [2] https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5\n\nWhat is happening for all of them looks like that the name of a\npublication created by CREATE PUBLICATION without a failure report is\nmissing for a walsender came later. It seems like CREATE PUBLICATION\ncan silently fail to create a publication, or walsender somehow failed\nto find existing one.\n\n\n> ~~\n> \n> I checked the history of fails for that TAP test t/031_column_list.pl\n> and found that this same error seems to have been happening\n> intermittently for at least the last 50 days.\n> \n> Details of similar previous errors from the BF are listed below.\n> \n> ~~~\n> \n> 1. Details for system \"grison\" failure at stage subscriptionCheck,\n> snapshot taken 2022-05-18 18:11:45\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2022-05-18%2018%3A11%3A45\n> \n> [22:02:08] t/029_on_error.pl .................. 
ok 25475 ms ( 0.01\n> usr 0.00 sys + 15.39 cusr 5.59 csys = 20.99 CPU)\n> # poll_query_until timed out executing this query:\n> # SELECT '0/1530588' <= replay_lsn AND state = 'streaming'\n> # FROM pg_catalog.pg_stat_replication\n> # WHERE application_name IN ('sub1', 'walreceiver')\n> # expecting this output:\n> # t\n> # last actual query output:\n> #\n> # with stderr:\n> # Tests were run but no plan was declared and done_testing() was not seen.\n> # Looks like your test exited with 29 just after 22.\n> [22:09:25] t/031_column_list.pl ...............\n> ...\n> [22:02:47.887](1.829s) ok 22 - partitions with different replica\n> identities not replicated correctly Waiting for replication conn\n> sub1's replay_lsn to pass 0/1530588 on publisher\n> [22:09:25.395](397.508s) # poll_query_until timed out executing this query:\n> # SELECT '0/1530588' <= replay_lsn AND state = 'streaming'\n> # FROM pg_catalog.pg_stat_replication\n> # WHERE application_name IN ('sub1', 'walreceiver')\n> # expecting this output:\n> # t\n> # last actual query output:\n> #\n> # with stderr:\n> timed out waiting for catchup at t/031_column_list.pl line 728.\n> ### Stopping node \"publisher\" using mode immediate\n\n2022-04-17 00:16:04.278 CEST [293659][client backend][4/270:0][031_column_list.pl] LOG: statement: CREATE PUBLICATION pub9 FOR TABLE test_part_d (a) WITH (publish_via_partition_root = true);\n2022-04-17 00:16:04.279 CEST [293659][client backend][:0][031_column_list.pl] LOG: disconnection: session time: 0:00:00.002 user=bf database=postgres host=[local]\n\n\"CREATE PUBLICATION pub9\" is executed at 00:16:04.278 on 293659 then\nthe session has been disconnected. 
But the following request for the\nsame publication fails due to the absense of the publication.\n\n2022-04-17 00:16:08.147 CEST [293856][walsender][3/0:0][sub1] STATEMENT: START_REPLICATION SLOT \"sub1\" LOGICAL 0/153DB88 (proto_version '3', publication_names '\"pub9\"')\n2022-04-17 00:16:08.148 CEST [293856][walsender][3/0:0][sub1] ERROR: publication \"pub9\" does not exist\n\n\n> ~~~\n> \n> 2. Details for system \"xenodermus\" failure at stage subscriptionCheck,\n> snapshot taken 2022-04-16 21:00:04\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2022-04-16%2021%3A00%3A04\n\nThe same. pub9 is missing after creation.\n\n> ~~~\n> \n> 3. Details for system \"phycodurus\" failure at stage subscriptionCheck,\n> snapshot taken 2022-04-05 17:30:04\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-04-05%2017%3A30%3A04\n\nThe same happens for pub7..\n\n> 4. Details for system \"phycodurus\" failure at stage subscriptionCheck,\n> snapshot taken 2022-04-05 17:30:04\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-04-05%2017%3A30%3A04\n\nSame. pub7 is missing.\n\n> 5. Details for system \"grison\" failure at stage subscriptionCheck,\n> snapshot taken 2022-04-03 18:11:39\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grison&dt=2022-04-03%2018%3A11%3A39\n\nSame. pub7 is missing.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 19 May 2022 15:58:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Thu, May 19, 2022 at 12:28 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 19 May 2022 14:26:56 +1000, Peter Smith <smithpb2250@gmail.com> wrote in\n> > Hi hackers.\n> >\n> > FYI, I saw that there was a recent Build-farm error on the \"grison\" machine [1]\n> > [1] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grison&br=HEAD\n> >\n> > The error happened during \"subscriptionCheck\" phase in the TAP test\n> > t/031_column_list.pl\n> > This test file was added by this [2] commit.\n> > [2] https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5\n>\n> What is happening for all of them looks like that the name of a\n> publication created by CREATE PUBLICATION without a failure report is\n> missing for a walsender came later. It seems like CREATE PUBLICATION\n> can silently fail to create a publication, or walsender somehow failed\n> to find existing one.\n>\n\nDo you see anything in LOGS which indicates CREATE SUBSCRIPTION has failed?\n\n>\n> > ~~\n> >\n>\n> 2022-04-17 00:16:04.278 CEST [293659][client backend][4/270:0][031_column_list.pl] LOG: statement: CREATE PUBLICATION pub9 FOR TABLE test_part_d (a) WITH (publish_via_partition_root = true);\n> 2022-04-17 00:16:04.279 CEST [293659][client backend][:0][031_column_list.pl] LOG: disconnection: session time: 0:00:00.002 user=bf database=postgres host=[local]\n>\n> \"CREATE PUBLICATION pub9\" is executed at 00:16:04.278 on 293659 then\n> the session has been disconnected. But the following request for the\n> same publication fails due to the absense of the publication.\n>\n> 2022-04-17 00:16:08.147 CEST [293856][walsender][3/0:0][sub1] STATEMENT: START_REPLICATION SLOT \"sub1\" LOGICAL 0/153DB88 (proto_version '3', publication_names '\"pub9\"')\n> 2022-04-17 00:16:08.148 CEST [293856][walsender][3/0:0][sub1] ERROR: publication \"pub9\" does not exist\n>\n\nThis happens after \"ALTER SUBSCRIPTION sub1 SET PUBLICATION pub9\". 
The\nprobable theory is that ALTER SUBSCRIPTION will lead to restarting of\napply worker (which we can see in LOGS as well) and after the restart,\nthe apply worker will use the existing slot and replication origin\ncorresponding to the subscription. Now, it is possible that before\nrestart the origin has not been updated and the WAL start location\npoints to a location prior to where PUBLICATION pub9 exists which can\nlead to such an error. Once this error occurs, apply worker will never\nbe able to proceed and will always return the same error. Does this\nmake sense?\n\nUnless you or others see a different theory, this seems to be the\nexisting problem in logical replication which is manifested by this\ntest. If we just want to fix these test failures, we can create a new\nsubscription instead of altering the existing publication to point to\nthe new publication.\n\nNote: Added Tomas to know his views as he has committed this test.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 May 2022 15:16:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
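Amit's theory hinges on the walsender resolving publication names through a historic catalog snapshot taken at the current decode position. As a purely illustrative sketch (a toy Python model with integer LSNs, not PostgreSQL's actual snapshot machinery), a publication created at LSN 200 is simply invisible to a worker that restarts decoding from a stale origin LSN of 100:

```python
class Catalog:
    """Toy model: each publication records the LSN at which it was created.

    A lookup at a given decode LSN only 'sees' publications created at or
    before that LSN -- the essence of a historic (MVCC) catalog snapshot.
    """
    def __init__(self):
        self.created_at = {}  # publication name -> creation LSN

    def create_publication(self, name, lsn):
        self.created_at[name] = lsn

    def lookup(self, name, decode_lsn):
        lsn = self.created_at.get(name)
        if lsn is None or lsn > decode_lsn:
            raise LookupError(f'publication "{name}" does not exist')
        return name

cat = Catalog()
cat.create_publication("pub9", lsn=200)

# Apply worker restarts from a stale origin LSN (100): pub9 is invisible,
# mirroring the walsender's 'publication "pub9" does not exist' error.
try:
    cat.lookup("pub9", decode_lsn=100)
    seen_at_100 = True
except LookupError:
    seen_at_100 = False

# Once decoding passes the creation LSN, the same lookup succeeds.
seen_at_300 = cat.lookup("pub9", decode_lsn=300) == "pub9"
```

Because the restarted worker keeps failing at the same decode position, it never advances past the CREATE PUBLICATION, which is why the error repeats forever in the logs.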
{
"msg_contents": "On Thu, May 19, 2022 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 19, 2022 at 12:28 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 19 May 2022 14:26:56 +1000, Peter Smith <smithpb2250@gmail.com> wrote in\n> > > Hi hackers.\n> > >\n> > > FYI, I saw that there was a recent Build-farm error on the \"grison\" machine [1]\n> > > [1] https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grison&br=HEAD\n> > >\n> > > The error happened during \"subscriptionCheck\" phase in the TAP test\n> > > t/031_column_list.pl\n> > > This test file was added by this [2] commit.\n> > > [2] https://github.com/postgres/postgres/commit/923def9a533a7d986acfb524139d8b9e5466d0a5\n> >\n> > What is happening for all of them looks like that the name of a\n> > publication created by CREATE PUBLICATION without a failure report is\n> > missing for a walsender came later. It seems like CREATE PUBLICATION\n> > can silently fail to create a publication, or walsender somehow failed\n> > to find existing one.\n> >\n>\n> Do you see anything in LOGS which indicates CREATE SUBSCRIPTION has failed?\n>\n> >\n> > > ~~\n> > >\n> >\n> > 2022-04-17 00:16:04.278 CEST [293659][client backend][4/270:0][031_column_list.pl] LOG: statement: CREATE PUBLICATION pub9 FOR TABLE test_part_d (a) WITH (publish_via_partition_root = true);\n> > 2022-04-17 00:16:04.279 CEST [293659][client backend][:0][031_column_list.pl] LOG: disconnection: session time: 0:00:00.002 user=bf database=postgres host=[local]\n> >\n> > \"CREATE PUBLICATION pub9\" is executed at 00:16:04.278 on 293659 then\n> > the session has been disconnected. 
But the following request for the\n> > same publication fails due to the absense of the publication.\n> >\n> > 2022-04-17 00:16:08.147 CEST [293856][walsender][3/0:0][sub1] STATEMENT: START_REPLICATION SLOT \"sub1\" LOGICAL 0/153DB88 (proto_version '3', publication_names '\"pub9\"')\n> > 2022-04-17 00:16:08.148 CEST [293856][walsender][3/0:0][sub1] ERROR: publication \"pub9\" does not exist\n> >\n>\n> This happens after \"ALTER SUBSCRIPTION sub1 SET PUBLICATION pub9\". The\n> probable theory is that ALTER SUBSCRIPTION will lead to restarting of\n> apply worker (which we can see in LOGS as well) and after the restart,\n> the apply worker will use the existing slot and replication origin\n> corresponding to the subscription. Now, it is possible that before\n> restart the origin has not been updated and the WAL start location\n> points to a location prior to where PUBLICATION pub9 exists which can\n> lead to such an error. Once this error occurs, apply worker will never\n> be able to proceed and will always return the same error. Does this\n> make sense?\n>\n> Unless you or others see a different theory, this seems to be the\n> existing problem in logical replication which is manifested by this\n> test. If we just want to fix these test failures, we can create a new\n> subscription instead of altering the existing publication to point to\n> the new publication.\n>\n\nIf the above theory is correct then I think allowing the publisher to\ncatch up with \"$node_publisher->wait_for_catchup('sub1');\" before\nALTER SUBSCRIPTION should fix this problem. Because if before ALTER\nboth publisher and subscriber are in sync then the new publication\nshould be visible to WALSender.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 May 2022 16:42:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
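The proposed test-side fix can be modeled the same way: waiting for catch-up before the ALTER guarantees that the restart point is at or past the CREATE PUBLICATION. A toy sketch with integer LSNs and a hypothetical helper name (the real fix is the one-line `wait_for_catchup` call in the Perl test):

```python
def safe_set_publication(origin_lsn, publisher_lsn, creation_lsn):
    """Model of the proposed test fix: wait for the subscriber's origin to
    catch up to the publisher before ALTER SUBSCRIPTION ... SET PUBLICATION.

    After catch-up, the restarted apply worker begins decoding at or past
    the CREATE PUBLICATION, so the historic catalog lookup succeeds.
    """
    # wait_for_catchup: the origin advances to the publisher's current LSN
    origin_lsn = max(origin_lsn, publisher_lsn)
    # the restarted worker decodes from origin_lsn; the publication is
    # visible only if its creation LSN is not in the worker's "future"
    return origin_lsn >= creation_lsn

# Without catch-up the worker restarts behind the CREATE PUBLICATION ...
visible_without_wait = 100 >= 200  # stale origin at 100, created at 200
# ... with catch-up it restarts past it and the lookup succeeds.
visible_with_wait = safe_set_publication(origin_lsn=100,
                                         publisher_lsn=250,
                                         creation_lsn=200)
```

This also explains why only this test tripped over the bug: the other subscription tests happen to wait for catch-up, or do no DML, between CREATE PUBLICATION and the ALTER.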
{
"msg_contents": "At Thu, 19 May 2022 16:42:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, May 19, 2022 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > This happens after \"ALTER SUBSCRIPTION sub1 SET PUBLICATION pub9\". The\n> > probable theory is that ALTER SUBSCRIPTION will lead to restarting of\n> > apply worker (which we can see in LOGS as well) and after the restart,\n\nYes.\n\n> > the apply worker will use the existing slot and replication origin\n> > corresponding to the subscription. Now, it is possible that before\n> > restart the origin has not been updated and the WAL start location\n> > points to a location prior to where PUBLICATION pub9 exists which can\n> > lead to such an error. Once this error occurs, apply worker will never\n> > be able to proceed and will always return the same error. Does this\n> > make sense?\n\nWow. I didin't thought that line. That theory explains the silence and\nmakes sense even though I don't see LSN transistions that clearly\nsupport it. I dimly remember a similar kind of problem..\n\n> > Unless you or others see a different theory, this seems to be the\n> > existing problem in logical replication which is manifested by this\n> > test. If we just want to fix these test failures, we can create a new\n> > subscription instead of altering the existing publication to point to\n> > the new publication.\n> >\n> \n> If the above theory is correct then I think allowing the publisher to\n> catch up with \"$node_publisher->wait_for_catchup('sub1');\" before\n> ALTER SUBSCRIPTION should fix this problem. Because if before ALTER\n> both publisher and subscriber are in sync then the new publication\n> should be visible to WALSender.\n\nIt looks right to me. That timetravel seems inintuitive but it's the\n(current) way it works.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 20 May 2022 10:28:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Fri, May 20, 2022 at 6:58 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > > the apply worker will use the existing slot and replication origin\n> > > corresponding to the subscription. Now, it is possible that before\n> > > restart the origin has not been updated and the WAL start location\n> > > points to a location prior to where PUBLICATION pub9 exists which can\n> > > lead to such an error. Once this error occurs, apply worker will never\n> > > be able to proceed and will always return the same error. Does this\n> > > make sense?\n>\n> Wow. I didin't thought that line. That theory explains the silence and\n> makes sense even though I don't see LSN transistions that clearly\n> support it. I dimly remember a similar kind of problem..\n>\n> > > Unless you or others see a different theory, this seems to be the\n> > > existing problem in logical replication which is manifested by this\n> > > test. If we just want to fix these test failures, we can create a new\n> > > subscription instead of altering the existing publication to point to\n> > > the new publication.\n> > >\n> >\n> > If the above theory is correct then I think allowing the publisher to\n> > catch up with \"$node_publisher->wait_for_catchup('sub1');\" before\n> > ALTER SUBSCRIPTION should fix this problem. Because if before ALTER\n> > both publisher and subscriber are in sync then the new publication\n> > should be visible to WALSender.\n>\n> It looks right to me.\n>\n\nLet's wait for Tomas or others working in this area to share their thoughts.\n\n> That timetravel seems inintuitive but it's the\n> (current) way it works.\n>\n\nI have thought about it but couldn't come up with a good way to change\nthe way currently it works. Moreover, I think it is easy to hit this\nin other ways as well. 
Say, you first create a subscription with a\nnon-existent publication and then do operation on any unrelated table\non the publisher before creating the required publication, we will hit\nexactly this problem of \"publication does not exist\", so I think we\nmay need to live with this behavior and write tests carefully.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 May 2022 09:28:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "\n\nOn 5/20/22 05:58, Amit Kapila wrote:\n> On Fri, May 20, 2022 at 6:58 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>\n>>>> the apply worker will use the existing slot and replication origin\n>>>> corresponding to the subscription. Now, it is possible that before\n>>>> restart the origin has not been updated and the WAL start location\n>>>> points to a location prior to where PUBLICATION pub9 exists which can\n>>>> lead to such an error. Once this error occurs, apply worker will never\n>>>> be able to proceed and will always return the same error. Does this\n>>>> make sense?\n>>\n>> Wow. I didin't thought that line. That theory explains the silence and\n>> makes sense even though I don't see LSN transistions that clearly\n>> support it. I dimly remember a similar kind of problem..\n>>\n>>>> Unless you or others see a different theory, this seems to be the\n>>>> existing problem in logical replication which is manifested by this\n>>>> test. If we just want to fix these test failures, we can create a new\n>>>> subscription instead of altering the existing publication to point to\n>>>> the new publication.\n>>>>\n>>>\n>>> If the above theory is correct then I think allowing the publisher to\n>>> catch up with \"$node_publisher->wait_for_catchup('sub1');\" before\n>>> ALTER SUBSCRIPTION should fix this problem. Because if before ALTER\n>>> both publisher and subscriber are in sync then the new publication\n>>> should be visible to WALSender.\n>>\n>> It looks right to me.\n>>\n> \n> Let's wait for Tomas or others working in this area to share their thoughts.\n> \n\nAre we really querying the publications (in get_rel_sync_entry) using\nthe historical snapshot? I haven't really realized this, but yeah, that\nmight explain the issue.\n\nThe new TAP test does ALTER SUBSCRIPTION ... SET PUBLICATION much more\noften than any other test (there are ~15 calls, 12 of which are in this\nnew test). That might be why we haven't seen failures before. 
Or maybe\nthe existing tests simply are not vulnerable to this, because they\neither do wait_for_catchup late enough or don't do any DML right before\nexecuting SET PUBLICATION.\n\n>> That timetravel seems inintuitive but it's the\n>> (current) way it works.\n>>\n> \n> I have thought about it but couldn't come up with a good way to change\n> the way currently it works. Moreover, I think it is easy to hit this\n> in other ways as well. Say, you first create a subscription with a\n> non-existent publication and then do operation on any unrelated table\n> on the publisher before creating the required publication, we will hit\n> exactly this problem of \"publication does not exist\", so I think we\n> may need to live with this behavior and write tests carefully.\n> \n\nYeah, I think it pretty much requires ensuring the subscriber is fully\ncaught up with the publisher, otherwise ALTER SUBSCRIPTION may break the\nreplication in an unrecoverable way (actually, you can alter the\nsubscription and remove the publication again, right?).\n\nBut this is not just about tests, of course - the same issue applies to\nregular replication. That's a bit unfortunate, so maybe we should think\nabout making this less fragile.\n\nWe might make sure the subscriber is not lagging (essentially the\nwait_for_catchup) - which the users will have to do anyway (although\nmaybe they know the publisher is beyond the LSN where it was created).\n\nThe other option would be to detect such case, somehow - if you don't\nsee the publication yet, see if it exists in current snapshot, and then\nmaybe ignore this error. But that has other issues (the publication\nmight have been created and dropped, in which case you won't see it).\nAlso, we'd probably have to ignore RelationSyncEntry for a while, which\nseems quite expensive.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 May 2022 12:31:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Thursday, May 19, 2022 8:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, May 19, 2022 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, May 19, 2022 at 12:28 PM Kyotaro Horiguchi\r\n> > <horikyota.ntt@gmail.com> wrote:\r\n> > >\r\n> > > At Thu, 19 May 2022 14:26:56 +1000, Peter Smith\r\n> > > <smithpb2250@gmail.com> wrote in\r\n> > > > Hi hackers.\r\n> > > >\r\n> > > > FYI, I saw that there was a recent Build-farm error on the\r\n> > > > \"grison\" machine [1] [1]\r\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=grison\r\n> > > > &br=HEAD\r\n> > > >\r\n> > > > The error happened during \"subscriptionCheck\" phase in the TAP\r\n> > > > test t/031_column_list.pl This test file was added by this [2]\r\n> > > > commit.\r\n> > > > [2]\r\n> > > >\r\n> https://github.com/postgres/postgres/commit/923def9a533a7d986acfb5\r\n> > > > 24139d8b9e5466d0a5\r\n> > >\r\n> > > 2022-04-17 00:16:04.278 CEST [293659][client\r\n> > > backend][4/270:0][031_column_list.pl] LOG: statement: CREATE\r\n> > > PUBLICATION pub9 FOR TABLE test_part_d (a) WITH\r\n> > > (publish_via_partition_root = true);\r\n> > > 2022-04-17 00:16:04.279 CEST [293659][client\r\n> > > backend][:0][031_column_list.pl] LOG: disconnection: session time:\r\n> > > 0:00:00.002 user=bf database=postgres host=[local]\r\n> > >\r\n> > > \"CREATE PUBLICATION pub9\" is executed at 00:16:04.278 on 293659 then\r\n> > > the session has been disconnected. 
But the following request for the\r\n> > > same publication fails due to the absense of the publication.\r\n> > >\r\n> > > 2022-04-17 00:16:08.147 CEST [293856][walsender][3/0:0][sub1]\r\n> > > STATEMENT: START_REPLICATION SLOT \"sub1\" LOGICAL 0/153DB88\r\n> > > (proto_version '3', publication_names '\"pub9\"')\r\n> > > 2022-04-17 00:16:08.148 CEST [293856][walsender][3/0:0][sub1] ERROR:\r\n> > > publication \"pub9\" does not exist\r\n> > >\r\n> >\r\n> > This happens after \"ALTER SUBSCRIPTION sub1 SET PUBLICATION pub9\".\r\n> The\r\n> > probable theory is that ALTER SUBSCRIPTION will lead to restarting of\r\n> > apply worker (which we can see in LOGS as well) and after the restart,\r\n> > the apply worker will use the existing slot and replication origin\r\n> > corresponding to the subscription. Now, it is possible that before\r\n> > restart the origin has not been updated and the WAL start location\r\n> > points to a location prior to where PUBLICATION pub9 exists which can\r\n> > lead to such an error. Once this error occurs, apply worker will never\r\n> > be able to proceed and will always return the same error. Does this\r\n> > make sense?\r\n> >\r\n> > Unless you or others see a different theory, this seems to be the\r\n> > existing problem in logical replication which is manifested by this\r\n> > test. If we just want to fix these test failures, we can create a new\r\n> > subscription instead of altering the existing publication to point to\r\n> > the new publication.\r\n> >\r\n> \r\n> If the above theory is correct then I think allowing the publisher to catch up with\r\n> \"$node_publisher->wait_for_catchup('sub1');\" before ALTER SUBSCRIPTION\r\n> should fix this problem. 
Because if before ALTER both publisher and\r\n> subscriber are in sync then the new publication should be visible to\r\n> WALSender.\r\nHi,\r\n\r\n\r\nI've attached a patch for the fix proposed here.\r\nFirst of all, thank you so much for helping me offlist, Amit-san.\r\n\r\nI reproduced the failure like [1] by commenting out\r\nWalSndWaitForWal's call of WalSndKeepalive and running the test.\r\nThis comment out intends to suppress the advance of confirmed_flush location\r\nafter creating a publication.\r\n\r\nIn short, my understanding how the bug happened is, \r\n1. we execute 'create publication pubX' and create one publication.\r\n2. 'alter subscription subY set publication pubX' makes the apply worker exit\r\n3. relaunched apply worker searches for pubX. But, the slot position(confirmed_flush)\r\n doesn't get updated and points to some location before create publication at the publisher node.\r\n\r\nApplying the attached patch have made the test pass.\r\n\r\n\r\n[1] the subscriber's log\r\n\r\n2022-05-20 08:56:50.773 UTC [5153] 031_column_list.pl LOG: statement: ALTER SUBSCRIPTION sub1 SET PUBLICATION pub6\r\n2022-05-20 08:56:50.801 UTC [5156] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\r\n2022-05-20 08:56:50.846 UTC [5112] LOG: logical replication apply worker for subscription \"sub1\" will restart because of a parameter change\r\n2022-05-20 08:56:50.915 UTC [5158] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\r\n...\r\n2022-05-20 08:56:51.257 UTC [5164] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\r\n2022-05-20 08:56:51.353 UTC [5166] LOG: logical replication apply worker for subscription \"sub1\" has started\r\n2022-05-20 08:56:51.366 UTC [5168] LOG: logical replication table synchronization worker for subscription \"sub1\", table \"test_part_a\" 
has started\r\n2022-05-20 08:56:51.370 UTC [5171] 031_column_list.pl LOG: statement: SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');\r\n2022-05-20 08:56:51.373 UTC [5166] ERROR: could not receive data from WAL stream: ERROR: publication \"pub6\" does not exist\r\nCONTEXT: slot \"sub1\", output plugin \"pgoutput\", in the change callback, associated LSN 0/15C61B8\r\n2022-05-20 08:56:51.374 UTC [4338] LOG: background worker \"logical replication worker\" (PID 5166) exited with exit code 1\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 20 May 2022 14:53:22 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Fri, May 20, 2022 at 4:01 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/20/22 05:58, Amit Kapila wrote:\n>\n> Are we really querying the publications (in get_rel_sync_entry) using\n> the historical snapshot?\n>\n\nYes.\n\n> I haven't really realized this, but yeah, that\n> might explain the issue.\n>\n> The new TAP test does ALTER SUBSCRIPTION ... SET PUBLICATION much more\n> often than any other test (there are ~15 calls, 12 of which are in this\n> new test). That might be why we haven't seen failures before. Or maybe\n> the existing tests simply are not vulnerable to this,\n>\n\nRight, I have checked the other cases are not vulnerable to this,\notherwise, I think we would have seen intermittent failures till now.\nThey don't seem to be doing DMLs before the creation of a publication\nor they create a subscription pointing to the same publication before.\n\n> because they\n> either do wait_for_catchup late enough or don't do any DML right before\n> executing SET PUBLICATION.\n>\n> >> That timetravel seems inintuitive but it's the\n> >> (current) way it works.\n> >>\n> >\n> > I have thought about it but couldn't come up with a good way to change\n> > the way currently it works. Moreover, I think it is easy to hit this\n> > in other ways as well. 
Say, you first create a subscription with a\n> > non-existent publication and then do operation on any unrelated table\n> > on the publisher before creating the required publication, we will hit\n> > exactly this problem of \"publication does not exist\", so I think we\n> > may need to live with this behavior and write tests carefully.\n> >\n>\n> Yeah, I think it pretty much requires ensuring the subscriber is fully\n> caught up with the publisher, otherwise ALTER SUBSCRIPTION may break the\n> replication in an unrecoverable way (actually, you can alter the\n> subscription and remove the publication again, right?).\n>\n\nRight.\n\n> But this is not just about tests, of course - the same issue applies to\n> regular replication. That's a bit unfortunate, so maybe we should think\n> about making this less fragile.\n>\n\nAgreed, provided we find some reasonable solution.\n\n> We might make sure the subscriber is not lagging (essentially the\n> wait_for_catchup) - which the users will have to do anyway (although\n> maybe they know the publisher is beyond the LSN where it was created).\n>\n\nThis won't work for the case mentioned above where we create a\nsubscription with non-existent publications, then perform DML and then\n'CREATE PUBLICATION'.\n\n> The other option would be to detect such case, somehow - if you don't\n> see the publication yet, see if it exists in current snapshot, and then\n> maybe ignore this error. 
But that has other issues (the publication\n> might have been created and dropped, in which case you won't see it).\n>\n\nTrue, the dropped case would again be tricky to deal with and I think\nwe will end up publishing some operations which are performed before\nthe publication is even created.\n\n> Also, we'd probably have to ignore RelationSyncEntry for a while, which\n> seems quite expensive.\n>\n\nYet another option could be that we continue using a historic snapshot\nbut ignore publications that are not found for the purpose of\ncomputing RelSyncEntry attributes. We won't mark such an entry as\nvalid till all the publications are loaded without anything missing. I\nthink such cases in practice won't be enough to matter. This means we\nwon't publish operations on tables corresponding to that publication\ntill we found such a publication and that seems okay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 21 May 2022 09:03:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
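Amit's alternative — keep the historic snapshot but tolerate not-yet-visible publications — can be sketched as follows. This is illustrative Python with a set standing in for the historic catalog lookup and a dict standing in for the cache entry; the real patch operates on pgoutput's RelationSyncEntry in C:

```python
def build_entry(subscribed, catalog_at_lsn):
    """Sketch of the proposed pgoutput behaviour: build the relation's sync
    entry from the publications visible in the historic snapshot, silently
    skipping ones not (yet) visible, and mark the entry valid only once
    every subscribed publication was found.

    `catalog_at_lsn` is the set of publication names visible at the current
    decode position (a stand-in for the real historic catalog lookup).
    """
    found = [p for p in subscribed if p in catalog_at_lsn]
    return {
        "publications": found,
        # an invalid entry is rebuilt on the next change instead of erroring
        "valid": len(found) == len(subscribed),
    }

# pub9 not yet visible: the entry stays invalid, nothing is published,
# and crucially no "publication does not exist" error is raised.
early = build_entry(["pub9"], catalog_at_lsn=set())

# Once decoding passes CREATE PUBLICATION pub9, the entry becomes valid.
later = build_entry(["pub9"], catalog_at_lsn={"pub9"})
```

The trade-off matches Amit's description: changes on the relation are simply not published until the publication becomes visible at the decode position, instead of the apply worker erroring out forever.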
{
"msg_contents": "On Sat, May 21, 2022 at 9:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 20, 2022 at 4:01 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>\n> > Also, we'd probably have to ignore RelationSyncEntry for a while, which\n> > seems quite expensive.\n> >\n>\n> Yet another option could be that we continue using a historic snapshot\n> but ignore publications that are not found for the purpose of\n> computing RelSyncEntry attributes. We won't mark such an entry as\n> valid till all the publications are loaded without anything missing. I\n> think such cases in practice won't be enough to matter. This means we\n> won't publish operations on tables corresponding to that publication\n> till we found such a publication and that seems okay.\n>\n\nAttached, find the patch to show what I have in mind for this. Today,\nwe have received a bug report with a similar symptom [1] and that\nshould also be fixed with this. The reported bug should also be fixed\nwith this.\n\nThoughts?\n\n\n[1] - https://www.postgresql.org/message-id/CANWRaJyyD%3D9c1E2HdF-Tqfe7%2BvuCQnAkXd6%2BEFwxC0wM%3D313AA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 24 May 2022 18:19:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Tuesday, May 24, 2022 9:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sat, May 21, 2022 at 9:03 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, May 20, 2022 at 4:01 PM Tomas Vondra\r\n> > <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> > > Also, we'd probably have to ignore RelationSyncEntry for a while,\r\n> > > which seems quite expensive.\r\n> > >\r\n> >\r\n> > Yet another option could be that we continue using a historic snapshot\r\n> > but ignore publications that are not found for the purpose of\r\n> > computing RelSyncEntry attributes. We won't mark such an entry as\r\n> > valid till all the publications are loaded without anything missing. I\r\n> > think such cases in practice won't be enough to matter. This means we\r\n> > won't publish operations on tables corresponding to that publication\r\n> > till we found such a publication and that seems okay.\r\n> >\r\n> \r\n> Attached, find the patch to show what I have in mind for this. Today, we have\r\n> received a bug report with a similar symptom [1] and that should also be fixed\r\n> with this. The reported bug should also be fixed with this.\r\n> \r\n> Thoughts?\r\nHi,\r\n\r\n\r\nI agree with this direction.\r\nI think this approach solves the issue fundamentally\r\nand is better than the first approach to add several calls\r\nof wait_for_catchup in the test, since taking the first one\r\nmeans we need to care about avoiding the same issue,\r\nwhenever we write a new (similar) test, even after the modification.\r\n\r\n\r\nI've used the patch to check below things.\r\n1. The patch can be applied and make check-world has passed without failure.\r\n2. HEAD applied with the patch passed all tests in src/test/subscription\r\n   (including 031_column_list.pl), after commenting out of WalSndWaitForWal's WalSndKeepalive.\r\n3. The new bug fix report in 'How is this possible \"publication does not exist\"' thread\r\n   has been fixed. FYI, after I execute the script's function, I also conduct\r\n   additional insert to the publisher, and this was correctly replicated on the subscriber.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 25 May 2022 02:28:01 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "At Tue, 24 May 2022 18:19:45 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Sat, May 21, 2022 at 9:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, May 20, 2022 at 4:01 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > > Also, we'd probably have to ignore RelationSyncEntry for a while, which\n> > > seems quite expensive.\n> > >\n> >\n> > Yet another option could be that we continue using a historic snapshot\n> > but ignore publications that are not found for the purpose of\n> > computing RelSyncEntry attributes. We won't mark such an entry as\n> > valid till all the publications are loaded without anything missing. I\n> > think such cases in practice won't be enough to matter. This means we\n> > won't publish operations on tables corresponding to that publication\n> > till we found such a publication and that seems okay.\n> >\n> \n> Attached, find the patch to show what I have in mind for this. Today,\n> we have received a bug report with a similar symptom [1] and that\n> should also be fixed with this. The reported bug should also be fixed\n> with this.\n> \n> Thoughts?\n> \n> \n> [1] - https://www.postgresql.org/message-id/CANWRaJyyD%3D9c1E2HdF-Tqfe7%2BvuCQnAkXd6%2BEFwxC0wM%3D313AA%40mail.gmail.com\n\nIt does \"fix\" the case of [1]. But AFAIS\nRelationSyncEntry.replicate_valid is only used to inhibit repeated\nloading in get_rel_sync_entry and the function doesn't seem to be\nassumed to return a invalid entry. (Since the flag is not checked\nnowhere else.)\n\nFor example pgoutput_change does not check for the flag of the entry\nreturned from the function before uses it, which is not seemingly\nsafe. (I didn't check further, though)\n\nDon't we need to explicitly avoid using invalid entries outside the\nfunction?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 May 2022 11:46:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Wed, May 25, 2022 at 8:16 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> It does \"fix\" the case of [1]. But AFAIS\n> RelationSyncEntry.replicate_valid is only used to inhibit repeated\n> loading in get_rel_sync_entry and the function doesn't seem to be\n> assumed to return a invalid entry. (Since the flag is not checked\n> nowhere else.)\n>\n> For example pgoutput_change does not check for the flag of the entry\n> returned from the function before uses it, which is not seemingly\n> safe. (I didn't check further, though)\n>\n> Don't we need to explicitly avoid using invalid entries outside the\n> function?\n>\n\nWe decide that based on pubactions in the callers, so even if entry is\nvalid, it won't do anything. Actually, we don't need to avoid setting\nreplication_valid flag as some of the publications for the table may\nbe already present. We can check if the publications_valid flag is set\nwhile trying to validate the entry. Now, even if we don't find any\npublications the replicate_valid flag will be set but none of the\nactions will be set, so it won't do anything in the caller. Is this\nbetter than the previous approach?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 25 May 2022 16:56:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On 5/25/22 13:26, Amit Kapila wrote:\n> On Wed, May 25, 2022 at 8:16 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>\n>> It does \"fix\" the case of [1]. But AFAIS\n>> RelationSyncEntry.replicate_valid is only used to inhibit repeated\n>> loading in get_rel_sync_entry and the function doesn't seem to be\n>> assumed to return a invalid entry. (Since the flag is not checked\n>> nowhere else.)\n>>\n>> For example pgoutput_change does not check for the flag of the entry\n>> returned from the function before uses it, which is not seemingly\n>> safe. (I didn't check further, though)\n>>\n>> Don't we need to explicitly avoid using invalid entries outside the\n>> function?\n>>\n> \n> We decide that based on pubactions in the callers, so even if entry is\n> valid, it won't do anything. Actually, we don't need to avoid setting\n> replication_valid flag as some of the publications for the table may\n> be already present. We can check if the publications_valid flag is set\n> while trying to validate the entry. Now, even if we don't find any\n> publications the replicate_valid flag will be set but none of the\n> actions will be set, so it won't do anything in the caller. Is this\n> better than the previous approach?\n> \n\nFor the record, I'm not convinced this is the right way to fix the\nissue, as it may easily mask the real problem.\n\nWe do silently ignore missing objects in various places, but only when\neither requested or when it's obvious it's expected and safe to ignore.\nBut I'm not sure that applies here, in a clean way.\n\nImagine you have a subscriber using two publications p1 and p2, and\nsomeone comes around and drops p1 by mistake. With the proposed patch,\nthe subscription will notice this, but it'll continue sending data\nignoring the missing publication. Yes, it will continue working, but\nit's quite possible this breaks the subscriber and it's be better to\nfail and stop replicating.\n\nThe other aspect I dislike is that we just stop caching publication\ninfo, forcing us to reload it for every replicated change/row. So even\nif dropping the publication happens not to \"break\" the subscriber (i.e.\nthe data makes sense), this may easily cause performance issues, lag in\nthe replication, and so on. And the users will have no idea why and/or\nhow to fix it, because we just do this silently.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 25 May 2022 15:24:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> The other aspect I dislike is that we just stop caching publication\n> info, forcing us to reload it for every replicated change/row.\n\nOuch --- that seems likely to be completely horrid. At the very least\nI'd want to see some benchmark numbers before concluding we could\nlive with that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 10:28:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Wed, May 25, 2022 at 6:54 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/25/22 13:26, Amit Kapila wrote:\n> > On Wed, May 25, 2022 at 8:16 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >>\n> >> It does \"fix\" the case of [1]. But AFAIS\n> >> RelationSyncEntry.replicate_valid is only used to inhibit repeated\n> >> loading in get_rel_sync_entry and the function doesn't seem to be\n> >> assumed to return a invalid entry. (Since the flag is not checked\n> >> nowhere else.)\n> >>\n> >> For example pgoutput_change does not check for the flag of the entry\n> >> returned from the function before uses it, which is not seemingly\n> >> safe. (I didn't check further, though)\n> >>\n> >> Don't we need to explicitly avoid using invalid entries outside the\n> >> function?\n> >>\n> >\n> > We decide that based on pubactions in the callers, so even if entry is\n> > valid, it won't do anything. Actually, we don't need to avoid setting\n> > replication_valid flag as some of the publications for the table may\n> > be already present. We can check if the publications_valid flag is set\n> > while trying to validate the entry. Now, even if we don't find any\n> > publications the replicate_valid flag will be set but none of the\n> > actions will be set, so it won't do anything in the caller. Is this\n> > better than the previous approach?\n> >\n>\n> For the record, I'm not convinced this is the right way to fix the\n> issue, as it may easily mask the real problem.\n>\n> We do silently ignore missing objects in various places, but only when\n> either requested or when it's obvious it's expected and safe to ignore.\n> But I'm not sure that applies here, in a clean way.\n>\n> Imagine you have a subscriber using two publications p1 and p2, and\n> someone comes around and drops p1 by mistake. With the proposed patch,\n> the subscription will notice this, but it'll continue sending data\n> ignoring the missing publication. Yes, it will continue working, but\n> it's quite possible this breaks the subscriber and it's be better to\n> fail and stop replicating.\n>\n\nIdeally, shouldn't we disallow drop of publication in such cases where\nit is part of some subscription? I know it will be tricky because some\nsubscriptions could be disabled.\n\n> The other aspect I dislike is that we just stop caching publication\n> info, forcing us to reload it for every replicated change/row. So even\n> if dropping the publication happens not to \"break\" the subscriber (i.e.\n> the data makes sense), this may easily cause performance issues, lag in\n> the replication, and so on. And the users will have no idea why and/or\n> how to fix it, because we just do this silently.\n>\n\nYeah, this is true that if there are missing publications, it needs to\nreload all the publications info again unless we build a mechanism to\nchange the existing cached entry by loading only required info. The\nother thing we could do here is to LOG the info for missing\npublications to make users aware of the fact. I think we can also\nintroduce a new option while defining/altering subscription to\nindicate whether to continue on missing publication or not, that way\nby default we will stop replication as we are doing now but users will\nhave a way to move replication.\n\nBTW, what are the other options we have to fix the cases where\nreplication is broken (or the user has no clue on how to proceed) as\nwe are discussing the case here or the OP reported yet another case on\npgsql-bugs [1]?\n\n[1] - https://www.postgresql.org/message-id/CANWRaJyyD%3D9c1E2HdF-Tqfe7%2BvuCQnAkXd6%2BEFwxC0wM%3D313AA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 26 May 2022 08:07:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "On Thursday, May 26, 2022 11:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, May 25, 2022 at 6:54 PM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> > On 5/25/22 13:26, Amit Kapila wrote:\r\n> > > On Wed, May 25, 2022 at 8:16 AM Kyotaro Horiguchi\r\n> > > <horikyota.ntt@gmail.com> wrote:\r\n> > >>\r\n> > >> It does \"fix\" the case of [1]. But AFAIS\r\n> > >> RelationSyncEntry.replicate_valid is only used to inhibit repeated\r\n> > >> loading in get_rel_sync_entry and the function doesn't seem to be\r\n> > >> assumed to return a invalid entry. (Since the flag is not checked\r\n> > >> nowhere else.)\r\n> > >>\r\n> > >> For example pgoutput_change does not check for the flag of the\r\n> > >> entry returned from the function before uses it, which is not\r\n> > >> seemingly safe. (I didn't check further, though)\r\n> > >>\r\n> > >> Don't we need to explicitly avoid using invalid entries outside the\r\n> > >> function?\r\n> > >>\r\n> > >\r\n> > > We decide that based on pubactions in the callers, so even if entry\r\n> > > is valid, it won't do anything. Actually, we don't need to avoid\r\n> > > setting replication_valid flag as some of the publications for the\r\n> > > table may be already present. We can check if the publications_valid\r\n> > > flag is set while trying to validate the entry. Now, even if we\r\n> > > don't find any publications the replicate_valid flag will be set but\r\n> > > none of the actions will be set, so it won't do anything in the\r\n> > > caller. Is this better than the previous approach?\r\n> > >\r\n> >\r\n> > For the record, I'm not convinced this is the right way to fix the\r\n> > issue, as it may easily mask the real problem.\r\n> >\r\n> > We do silently ignore missing objects in various places, but only when\r\n> > either requested or when it's obvious it's expected and safe to ignore.\r\n> > But I'm not sure that applies here, in a clean way.\r\n> >\r\n> > Imagine you have a subscriber using two publications p1 and p2, and\r\n> > someone comes around and drops p1 by mistake. With the proposed patch,\r\n> > the subscription will notice this, but it'll continue sending data\r\n> > ignoring the missing publication. Yes, it will continue working, but\r\n> > it's quite possible this breaks the subscriber and it's be better to\r\n> > fail and stop replicating.\r\n> >\r\n> \r\n> Ideally, shouldn't we disallow drop of publication in such cases where it is part\r\n> of some subscription? I know it will be tricky because some subscriptions\r\n> could be disabled.\r\n> \r\n> > The other aspect I dislike is that we just stop caching publication\r\n> > info, forcing us to reload it for every replicated change/row. So even\r\n> > if dropping the publication happens not to \"break\" the subscriber (i.e.\r\n> > the data makes sense), this may easily cause performance issues, lag\r\n> > in the replication, and so on. And the users will have no idea why\r\n> > and/or how to fix it, because we just do this silently.\r\n> >\r\n> \r\n> Yeah, this is true that if there are missing publications, it needs to reload all the\r\n> publications info again unless we build a mechanism to change the existing\r\n> cached entry by loading only required info. The other thing we could do here is\r\n> to LOG the info for missing publications to make users aware of the fact. I think\r\n> we can also introduce a new option while defining/altering subscription to\r\n> indicate whether to continue on missing publication or not, that way by default\r\n> we will stop replication as we are doing now but users will have a way to move\r\n> replication.\r\n> \r\n> BTW, what are the other options we have to fix the cases where replication is\r\n> broken (or the user has no clue on how to proceed) as we are discussing the\r\n> case here or the OP reported yet another case on pgsql-bugs [1]?\r\nHi, \r\n\r\n\r\nFYI, I've noticed that after the last report by Peter-san\r\nwe've gotten the same errors on Build Farm.\r\nWe need to keep discussing to conclude this.\r\n\r\n\r\n1. Details for system \"xenodermus\" failure at stage subscriptionCheck, snapshot taken 2022-05-31 13:00:04\r\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2022-05-31%2013%3A00%3A04\r\n\r\n\r\n2. Details for system \"phycodurus\" failure at stage subscriptionCheck, snapshot taken 2022-05-26 17:30:04\r\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-05-26%2017%3A30%3A04\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 1 Jun 2022 02:06:11 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Build-farm - intermittent error in 031_column_list.pl"
},
{
"msg_contents": "Hello hackers,\n\n01.06.2022 05:06, osumi.takamichi@fujitsu.com wrote:\n> FYI, I've noticed that after the last report by Peter-san\n> we've gotten the same errors on Build Farm.\n> We need to keep discussing to conclude this.\n>\n>\n> 1. Details for system \"xenodermus\" failure at stage subscriptionCheck, snapshot taken 2022-05-31 13:00:04\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2022-05-31%2013%3A00%3A04\n>\n>\n> 2. Details for system \"phycodurus\" failure at stage subscriptionCheck, snapshot taken 2022-05-26 17:30:04\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-05-26%2017%3A30%3A04\n\nI think, I've discovered what causes that test failure.\nWhen playing with bgwriter during researching [1], I made it more\naggressive with a dirty hack:\n-#define LOG_SNAPSHOT_INTERVAL_MS 15000\n+#define LOG_SNAPSHOT_INTERVAL_MS 1\n\n rc = WaitLatch(MyLatch,\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n- BgWriterDelay /* ms */ , WAIT_EVENT_BGWRITER_MAIN);\n+ 1 /* ms */ , WAIT_EVENT_BGWRITER_MAIN);\n\nWith this modification, I ran `make check -C src/test/subscription` in a\nloop and observed the same failure of 031_column_list as discussed here.\nWith log_min_messages = DEBUG2, I see that in a failed case there is no\n'\"sub1\" has now caught up' message (as Amit and Tomas guessed upthread)\n(See full logs from one successful and two failed runs attached (I added\nsome extra logging and enabled wal_debug to better understand what's going\non here).)\n\nIf we look at the failures occurred in the buildfarm:\nThe first two from the past:\n1)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=xenodermus&dt=2022-05-31%2013%3A00%3A04\n\n2022-05-26 20:39:23.828 CEST [276284][postmaster][:0][] LOG: starting PostgreSQL 15beta1 on x86_64-pc-linux-gnu, \ncompiled by gcc-10 (Debian 10.3.0-15) 10.3.0, 64-bit\n...\n2022-05-26 20:39:39.768 CEST [277545][walsender][3/0:0][sub1] ERROR: publication \"pub6\" does not exist\n\n2)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=phycodurus&dt=2022-05-26%2017%3A30%3A04\n\n2022-05-31 16:33:25.506 CEST [3223685][postmaster][:0][] LOG: starting PostgreSQL 15beta1 on x86_64-pc-linux-gnu, \ncompiled by clang version 6.0.1 , 64-bit\n...\n2022-05-31 16:33:41.114 CEST [3224511][walsender][3/0:0][sub1] ERROR: publication \"pub6\" does not exist\n\nThe other two from the present:\n3)\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=kestrel&dt=2023-12-14%2009%3A14%3A52&stg=subscription-check\n\n2023-12-14 09:26:12.523 UTC [1144979][postmaster][:0] LOG: starting PostgreSQL 16.1 on x86_64-pc-linux-gnu, compiled by \nDebian clang version 13.0.1-11+b2, 64-bit\n...\n2023-12-14 09:26:28.663 UTC [1157936][walsender][3/0:0] ERROR: publication \"pub6\" does not exist\n\n4)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mylodon&dt=2023-11-17%2018%3A28%3A24\n\n2023-11-17 18:31:13.594 UTC [200939] LOG: starting PostgreSQL 17devel on x86_64-linux, compiled by clang-14.0.6, 64-bit\n...\n2023-11-17 18:31:29.292 UTC [222103] sub1 ERROR: publication \"pub6\" does not exist\n\nwe can see that all the failures occurred approximately since 16 seconds\nafter the server start. And it's very close to predefined\nLOG_SNAPSHOT_INTERVAL_MS.\n\n[1] https://www.postgresql.org/message-id/6f85667e-5754-5d35-dbf1-c83fe08c1e48%40gmail.com\n\nBest regards,\nAlexander",
"msg_date": "Tue, 16 Jan 2024 21:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Build-farm - intermittent error in 031_column_list.pl"
}
] |
[
{
"msg_contents": "Hi:\nI write a C function, the function is as follows:\ncreate function st_geosotgrid(geom geometry, level integer) returns geosotgrid[]\n    immutable\n    strict\n    parallel safe\n    language c\nas\n$$\nbegin\n-- missing source code\nend;\n$$;\n\n\n\n\nAt the same time, I set the relevant parameters:\nforce_parallel_mode: off\nmax_parallel_maintenance_workers: 4\nmax_parallel_workers: 8\nmax_parallel_workers_per_gather: 2\nmax_worker_processes: 8\nmin_parallel_index_scan_size: 64\nmin_parallel_table_scan_size: 1024\nparallel_leader_participation: on\nset parallel_setup_cost = 10;\nset parallel_tuple_cost = 0.001;\n\nsql:\nselect st_geosotgrid(geom,20) from t_polygon_gis;\n\nand the explain as follows:\n\nGather (cost=10.00..5098.67 rows=200000 width=32)\n  Workers Planned: 2\n  ->  Parallel Seq Scan on t_polygon_gis (cost=0.00..4888.67 rows=83333 width=32)\n\n\nwhen i explain analyze ,the parallel worker is suspend:\n\n\n\n\nI would like to know how can I get it to work properly?\n\nThank You!",
"msg_date": "Thu, 19 May 2022 08:59:17 +0000 (UTC)",
"msg_from": "\"huangning290@yahoo.com\" <huangning290@yahoo.com>",
"msg_from_op": true,
"msg_subject": "parallel not working"
},
{
"msg_contents": "On Thu, May 19, 2022 at 8:05 AM huangning290@yahoo.com <\nhuangning290@yahoo.com> wrote:\n\n> I would like to know how can I get it to work properly?\n>\n\nI suppose you have a bug in your C code. Try hooking up a debugger to one\nof the sessions that is hung and see what it's doing e.g.\n\ngdb -p 26130\nbt\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 19 May 2022 16:13:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel not working"
}
] |
[
{
"msg_contents": "Debian unstable mips64el:\n\n2022-05-18 22:57:34.436 UTC client backend[19222] pg_regress/triggers STATEMENT: drop trigger trg1 on trigpart3;\n...\n2022-05-18 22:57:39.110 UTC postmaster[7864] LOG: server process (PID 19222) was terminated by signal 11: Segmentation fault\n2022-05-18 22:57:39.110 UTC postmaster[7864] DETAIL: Failed process was running: SELECT a.attname,\n\t  pg_catalog.format_type(a.atttypid, a.atttypmod),\n\t  (SELECT pg_catalog.pg_get_expr(d.adbin, d.adrelid, true)\n\t   FROM pg_catalog.pg_attrdef d\n\t   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),\n\t  a.attnotnull,\n\t  (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t\n\t   WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND a.attcollation <> t.typcollation) AS attcollation,\n\t  a.attidentity,\n\t  a.attgenerated\n\tFROM pg_catalog.pg_attribute a\n\tWHERE a.attrelid = '21816' AND a.attnum > 0 AND NOT a.attisdropped\n\tORDER BY a.attnum;\n\n******** build/src/test/regress/tmp_check/data/core ********\n\nwarning: Can't open file /dev/shm/PostgreSQL.387042440 during file-backed mapping note processing\n\nwarning: Can't open file /dev/shm/PostgreSQL.4014890228 during file-backed mapping note processing\n\nwarning: Can't open file /dev/zero (deleted) during file-backed mapping note processing\n\nwarning: Can't open file /SYSV035e8a2e (deleted) during file-backed mapping note processing\n[New LWP 19222]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library \"/lib/mips64el-linux-gnuabi64/libthread_db.so.1\".\nCore was generated by `postgres: buildd regression [local] SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x000000ffd000565c in ?? ()\n#0 0x000000ffd000565c in ?? ()\nNo symbol table info available.\n#1 0x000000aaad76b730 in ExecEvalExprSwitchContext (isNull=0xfffb9e85e7, econtext=0xaab20e9f90, state=0xaab20ea108) at ./build/../src/include/executor/executor.h:343\n        retDatum = <optimized out>\n        oldContext = 0xaab1fabb10\n        retDatum = <optimized out>\n        oldContext = <optimized out>\n#2 ExecProject (projInfo=0xaab20ea100) at ./build/../src/include/executor/executor.h:377\n        econtext = 0xaab20e9f90\n        state = 0xaab20ea108\n        slot = 0xaab20ea5b0\n        isnull = false\n#3 ExecScan (node=0xaab20ea100, accessMtd=0xaaad78b6d0 <IndexNext>, recheckMtd=0xaaad78bf08 <IndexRecheck>) at ./build/../src/backend/executor/execScan.c:238\n        slot = <optimized out>\n        econtext = <optimized out>\n        qual = <optimized out>\n        projInfo = 0xaab20ea100\n#4 0x000000aaad76b730 in ExecEvalExprSwitchContext (isNull=0xaab20ea450, econtext=0xaab20e9f90, state=0x1208) at ./build/../src/include/executor/executor.h:343\n        retDatum = <optimized out>\n        oldContext = 0xaab20ea6c8\n        retDatum = <optimized out>\n        oldContext = <optimized out>\n#5 ExecProject (projInfo=0x1200) at ./build/../src/include/executor/executor.h:377\n        econtext = 0xaab20e9f90\n        state = 0x1208\n        slot = 0xaab20ea638\n        isnull = false\n#6 ExecScan (node=0xaab20ea6d0, accessMtd=0xaab20ea6cd, recheckMtd=0xaab20ea310) at ./build/../src/backend/executor/execScan.c:238\n        slot = <optimized out>\n        econtext = <optimized out>\n        qual = <optimized out>\n        projInfo = 0x1200\n#7 0xffffffffffffffff in ?? ()\nNo symbol table info available.\nBacktrace stopped: frame did not save the PC\n\n\nFull build log:\nhttps://buildd.debian.org/status/fetch.php?pkg=postgresql-15&arch=mips64el&ver=15%7Ebeta1-1&stamp=1652916002&raw=0\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 May 2022 17:09:35 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Debian unstable mips64el:\n\nHmm, so what's different between this and buildfarm member topminnow?\n\nIs the crash 100% reproducible for you?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 11:12:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "Re: Tom Lane\n> Christoph Berg <myon@debian.org> writes:\n> > Debian unstable mips64el:\n> \n> Hmm, so what's different between this and buildfarm member topminnow?\n> \n> Is the crash 100% reproducible for you?\n\nI have scheduled a rebuild now, we'll know in a few hours...\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 May 2022 17:15:53 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "Re: Tom Lane\n> Christoph Berg <myon@debian.org> writes:\n> > Debian unstable mips64el:\n> \n> Hmm, so what's different between this and buildfarm member topminnow?\n\nThat one is running Debian jessie (aka oldoldoldoldstable), uses\n-mabi=32 with gcc 4.9, and runs a kernel from 2015.\n\nThe Debian buildd is this: https://db.debian.org/machines.cgi?host=mipsel-aql-01\nThe host should be running Debian buster, with the build done in an\nunstable chroot. I don't know what \"LS3A-RS780-1w (Quad Core Loongson 3A)\"\nmeans, but it's probably much newer hardware than the other one.\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 May 2022 17:26:32 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane\n>> Hmm, so what's different between this and buildfarm member topminnow?\n\n> That one is running Debian jessie (aka oldoldoldoldstable), uses\n> -mabi=32 with gcc 4.9, and runs a kernel from 2015.\n> The Debian buildd is this: https://db.debian.org/machines.cgi?host=mipsel-aql-01\n> The host should be running Debian buster, with the build done in an\n> unstable chroot. I don't know what \"LS3A-RS780-1w (Quad Core Loongson 3A)\"\n> means, but it's probably much newer hardware than the other one.\n\nI see that the gcc farm[1] has another mips64 machine running Debian\nbuster, so I've started a build there to see what happens.\n\n\t\t\tregards, tom lane\n\n[1] https://cfarm.tetaneutral.net/machines/list/\n\n\n",
"msg_date": "Thu, 19 May 2022 11:49:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "Re: To Tom Lane\n> > Is the crash 100% reproducible for you?\n> \n> I have scheduled a rebuild now, we'll know in a few hours...\n\nThe build was much faster this time (different machine), and worked.\n\nhttps://buildd.debian.org/status/logs.php?pkg=postgresql-15&arch=mips64el\n\nI'll also start a test build on the mips64el porterbox I have access to.\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 May 2022 19:20:05 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "I wrote:\n> I see that the gcc farm[1] has another mips64 machine running Debian\n> buster, so I've started a build there to see what happens.\n\nMany kilowatt-hours later, I've entirely failed to reproduce this\non gcc230. Not sure how to investigate further. Given that your\noriginal build machine is so slow, could it be timing-related?\nHard to see how, given the location of the crash, but ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 19:17:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
},
{
"msg_contents": "Re: Tom Lane\n> Many kilowatt-hours later, I've entirely failed to reproduce this\n> on gcc230. Not sure how to investigate further. Given that your\n> original build machine is so slow, could it be timing-related?\n> Hard to see how, given the location of the crash, but ...\n\nMy other rebuild (on yet another machine) also passed fine, so we can\npossibly attribute that to some hardware glitch on the original\nmachine. But it's being used as a regular buildd for Debian, so I\nguess it would have already been noticed if there was any general\nproblem with it. I'll try reaching out to the buildd folks if they\nknow anything.\n\nhttps://buildd.debian.org/status/recent.php?bad_results_only=on&a=mips64el&suite=experimental\n\nChristoph\n\n\n",
"msg_date": "Fri, 20 May 2022 10:11:12 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: 15beta1 crash on mips64el in pg_regress/triggers"
}
] |
[
{
"msg_contents": "Debian unstable mips (the old 32-bit one):\n\ntest vacuum-conflict ... ok 2170 ms\ntest vacuum-skip-locked ... ok 2445 ms\ntest stats ... FAILED 38898 ms\ntest horizons ... ok 4543 ms\ntest predicate-hash ... ok 22419 ms\n\n******** build/src/test/isolation/output_iso/regression.diffs ********\ndiff -U3 /<<PKGBUILDDIR>>/src/test/isolation/expected/stats.out /<<PKGBUILDDIR>>/build/src/test/isolation/output_iso/results/stats.out\n--- /<<PKGBUILDDIR>>/src/test/isolation/expected/stats.out\t2022-05-16 21:10:42.000000000 +0000\n+++ /<<PKGBUILDDIR>>/build/src/test/isolation/output_iso/results/stats.out\t2022-05-18 23:26:56.573000536 +0000\n@@ -2854,7 +2854,7 @@\n\n seq_scan|seq_tup_read|n_tup_ins|n_tup_upd|n_tup_del|n_live_tup|n_dead_tup|vacuum_count\n --------+------------+---------+---------+---------+----------+----------+------------\n- 3| 9| 5| 1| 0| 1| 1| 0\n+ 3| 9| 5| 1| 0| 4| 1| 0\n (1 row)\n\nFull build log:\nhttps://buildd.debian.org/status/fetch.php?pkg=postgresql-15&arch=mipsel&ver=15%7Ebeta1-1&stamp=1652916588&raw=0\n\n(I'll try rescheduling this build as well, the last one took 4h before\nit failed.)\n\nChristoph\n\n\n",
"msg_date": "Thu, 19 May 2022 17:19:09 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Debian unstable mips (the old 32-bit one):\n\n> --- /<<PKGBUILDDIR>>/src/test/isolation/expected/stats.out\t2022-05-16 21:10:42.000000000 +0000\n> +++ /<<PKGBUILDDIR>>/build/src/test/isolation/output_iso/results/stats.out\t2022-05-18 23:26:56.573000536 +0000\n> @@ -2854,7 +2854,7 @@\n\n> seq_scan|seq_tup_read|n_tup_ins|n_tup_upd|n_tup_del|n_live_tup|n_dead_tup|vacuum_count\n> --------+------------+---------+---------+---------+----------+----------+------------\n> - 3| 9| 5| 1| 0| 1| 1| 0\n> + 3| 9| 5| 1| 0| 4| 1| 0\n> (1 row)\n\nI have just discovered that I can reproduce this identical symptom\nfairly repeatably on an experimental lashup that I've been running\nwith bleeding-edge NetBSD on my ancient HPPA box. (You didn't think\nI was just going to walk away from that hardware, did you?)\n\nEven more interesting, the repeatability varies with the settings\nof max_connections and max_prepared_transactions. At low values\n(resp. 20 and 0) I've not been able to make it happen at all, but\nat 100 and 2 it happens circa three times out of four.\n\nI have no idea where to start looking, but this is clearly an issue\nin the new stats code ... or else the hoped-for goal of removing\nflakiness from the stats tests is just as far away as ever.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 21:42:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "At Thu, 19 May 2022 21:42:31 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Christoph Berg <myon@debian.org> writes:\n> > Debian unstable mips (the old 32-bit one):\n> \n> > --- /<<PKGBUILDDIR>>/src/test/isolation/expected/stats.out\t2022-05-16 21:10:42.000000000 +0000\n> > +++ /<<PKGBUILDDIR>>/build/src/test/isolation/output_iso/results/stats.out\t2022-05-18 23:26:56.573000536 +0000\n> > @@ -2854,7 +2854,7 @@\n> \n> > seq_scan|seq_tup_read|n_tup_ins|n_tup_upd|n_tup_del|n_live_tup|n_dead_tup|vacuum_count\n> > --------+------------+---------+---------+---------+----------+----------+------------\n> > - 3| 9| 5| 1| 0| 1| 1| 0\n> > + 3| 9| 5| 1| 0| 4| 1| 0\n> > (1 row)\n> \n> I have just discovered that I can reproduce this identical symptom\n> fairly repeatably on an experimental lashup that I've been running\n> with bleeding-edge NetBSD on my ancient HPPA box. (You didn't think\n> I was just going to walk away from that hardware, did you?)\n> \n> Even more interesting, the repeatability varies with the settings\n> of max_connections and max_prepared_transactions. At low values\n> (resp. 20 and 0) I've not been able to make it happen at all, but\n> at 100 and 2 it happens circa three times out of four.\n> \n> I have no idea where to start looking, but this is clearly an issue\n> in the new stats code ... or else the hoped-for goal of removing\n> flakiness from the stats tests is just as far away as ever.\n\nDoesn't the step s1_table_stats needs a blocking condition (s2_ff)?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 20 May 2022 11:02:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "At Fri, 20 May 2022 11:02:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Doesn't the step s1_table_stats needs a blocking condition (s2_ff)?\n\ns/needs/need/;\n\nIf I removed the step s2_ff, I see the same difference. So I think it\nis that. Some other permutations seem to need the same.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 20 May 2022 11:14:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Fri, 20 May 2022 11:02:21 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> Doesn't the step s1_table_stats needs a blocking condition (s2_ff)?\n> s/needs/need/;\n> If I removed the step s2_ff, I see the same difference. So I think it\n> is that. Some other permutations seem to need the same.\n\nHmm ... it does seem like the answer might be somewhere around there,\nbut it's not there exactly. This won't fix it:\n\n- s1_table_stats\n+ s1_table_stats(s2_ff)\n\nThat sort of marker only stabilizes cases where two different steps might\nbe reported as completing in either order. In this case, that's clearly\nnot the problem. What we do need is to ensure that s1_table_stats doesn't\n*launch* before s2_ff is done. However, it doesn't look to me like that's\nwhat's happening. isolation/README explains that\n\n Notice that these markers can only delay reporting of the completion\n of a step, not the launch of a step. The isolationtester will launch\n the next step in a permutation as soon as (A) all prior steps of the\n same session are done, and (B) the immediately preceding step in the\n permutation is done or deemed blocked. For this purpose, \"deemed\n blocked\" means that it has been seen to be waiting on a database lock,\n or that it is complete but the report of its completion is delayed by\n one of these markers.\n\nThere's no \"waiting...\" reports in the test output, nor do we have any\ncondition markers in stats.spec right now, so I don't think any steps\nhave been \"deemed blocked\".\n\nWhat I am wondering about at this point is whether the effects of\npg_stat_force_next_flush() could somehow be delayed until after we\nhave told the client the command is complete. I've not poked into\nthe code in that area, but if that could happen it'd explain this\nbehavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 22:58:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-19 21:42:31 -0400, Tom Lane wrote:\n> > seq_scan|seq_tup_read|n_tup_ins|n_tup_upd|n_tup_del|n_live_tup|n_dead_tup|vacuum_count\n> > --------+------------+---------+---------+---------+----------+----------+------------\n> > - 3| 9| 5| 1| 0| 1| 1| 0\n> > + 3| 9| 5| 1| 0| 4| 1| 0\n> > (1 row)\n\n> Even more interesting, the repeatability varies with the settings\n> of max_connections and max_prepared_transactions.\n\nThat is indeed quite odd.\n\n\n> At low values (resp. 20 and 0) I've not been able to make it happen at all,\n> but at 100 and 2 it happens circa three times out of four.\n\nIs this the only permutation that fails?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 May 2022 20:47:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-19 21:42:31 -0400, Tom Lane wrote:\n>> Even more interesting, the repeatability varies with the settings\n>> of max_connections and max_prepared_transactions.\n\n> That is indeed quite odd.\n\n>> At low values (resp. 20 and 0) I've not been able to make it happen at all,\n>> but at 100 and 2 it happens circa three times out of four.\n\n> Is this the only permutation that fails?\n\nNo, but those values definitely seem to affect the probability of\nfailure. I just finished a more extensive series of runs, and got:\n\nsuccesses/tries with max_connections/max_prepared_transactions:\n5/5 OK with 20/2\n5/5 OK with 100/0\n3/5 OK with 100/2\n4/5 OK with 200/2\n5/6 OK with 100/10\n5/6 OK with 50/2\n6/10 OK with 100/2\n\nIt seems like the probability decreases again if I raise either\nnumber further. So I'm now guessing that this is purely a timing\nissue and that somehow 100/2 is near the sweet spot for hitting\nthe timing window. Why those settings would affect pgstats at\nall is unclear, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 23:58:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-19 22:58:14 -0400, Tom Lane wrote:\n> What I am wondering about at this point is whether the effects of\n> pg_stat_force_next_flush() could somehow be delayed until after we\n> have told the client the command is complete.\n\nIt shouldn't - it just forces pg_stat_report_stat() to flush (rather than\ndoing so based on the time of the last report). And pg_stat_report_stat()\nhappens before ReadyForQuery().\n\nHm. Does the instability vanish if you switch s2_commit_prepared_a and s1_ff?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 May 2022 20:59:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "At Thu, 19 May 2022 22:58:14 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> Notice that these markers can only delay reporting of the completion\n> of a step, not the launch of a step. The isolationtester will launch\n> the next step in a permutation as soon as (A) all prior steps of the\n> same session are done, and (B) the immediately preceding step in the\n> permutation is done or deemed blocked. For this purpose, \"deemed\n> blocked\" means that it has been seen to be waiting on a database lock,\n> or that it is complete but the report of its completion is delayed by\n> one of these markers.\n> \n> There's no \"waiting...\" reports in the test output, nor do we have any\n> condition markers in stats.spec right now, so I don't think any steps\n> have been \"deemed blocked\".\n\nMmm... Thanks. I miunderstood the effect of it..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 20 May 2022 13:12:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm. Does the instability vanish if you switch s2_commit_prepared_a and s1_ff?\n\nLike this?\n\ndiff --git a/src/test/isolation/specs/stats.spec b/src/test/isolation/specs/stats.spec\nindex be4ae1f4ff..70be29b207 100644\n--- a/src/test/isolation/specs/stats.spec\n+++ b/src/test/isolation/specs/stats.spec\n@@ -562,8 +562,9 @@ permutation\n s1_table_insert_k1 # should be counted\n s1_table_update_k1 # dito\n s1_prepare_a\n+ s1_ff\n s2_commit_prepared_a\n- s1_ff s2_ff\n+ s2_ff\n s1_table_stats\n \n # S1 prepares, S1 aborts prepared\n\nThere's some fallout in the expected-file, of course, but this\ndoes seem to fix it (20 consecutive successful runs now at\n100/2). Don't see why though ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 May 2022 00:22:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-20 00:22:14 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Hm. Does the instability vanish if you switch s2_commit_prepared_a and s1_ff?\n> \n> Like this?\n\nYea.\n\n\n> diff --git a/src/test/isolation/specs/stats.spec b/src/test/isolation/specs/stats.spec\n> index be4ae1f4ff..70be29b207 100644\n> --- a/src/test/isolation/specs/stats.spec\n> +++ b/src/test/isolation/specs/stats.spec\n> @@ -562,8 +562,9 @@ permutation\n> s1_table_insert_k1 # should be counted\n> s1_table_update_k1 # dito\n> s1_prepare_a\n> + s1_ff\n> s2_commit_prepared_a\n> - s1_ff s2_ff\n> + s2_ff\n> s1_table_stats\n> \n> # S1 prepares, S1 aborts prepared\n> \n> There's some fallout in the expected-file, of course, but this\n> does seem to fix it (20 consecutive successful runs now at\n> 100/2). Don't see why though ...\n\nI think what might be happening is that the transactional stats updates get\nreported by s2 *before* the non-transactional stats updates come in from\ns1. I.e. the pgstat_report_stat() at the end of s2_commit_prepared_a does a\nreport, because the machine is slow enough for it to be \"time to reports stats\nagain\". Then s1 reports its non-transactional stats.\n\nIt looks like our stats maintenance around truncation isn't quite \"concurrency\nsafe\". That code hasn't meaningfully changed, but it'd not be surprising if\nit's not 100% precise...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 May 2022 21:41:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-20 00:22:14 -0400, Tom Lane wrote:\n>> There's some fallout in the expected-file, of course, but this\n>> does seem to fix it (20 consecutive successful runs now at\n>> 100/2). Don't see why though ...\n\n> I think what might be happening is that the transactional stats updates get\n> reported by s2 *before* the non-transactional stats updates come in from\n> s1. I.e. the pgstat_report_stat() at the end of s2_commit_prepared_a does a\n> report, because the machine is slow enough for it to be \"time to reports stats\n> again\". Then s1 reports its non-transactional stats.\n\nSounds plausible. And I left the test loop running, and it's now past\n100 consecutive successes, so I think this change definitely \"fixes\" it.\n\n> It looks like our stats maintenance around truncation isn't quite \"concurrency\n> safe\". That code hasn't meaningfully changed, but it'd not be surprising if\n> it's not 100% precise...\n\nYeah. Probably not something to try to improve post-beta, especially\nsince it's not completely clear how transactional and non-transactional\ncases *should* interact. Maybe non-transactional updates should be\npushed immediately? But I'm not sure if that's fully correct, and\nit definitely sounds expensive.\n\nI'd be good with tweaking this test case as you suggest, and maybe\nrevisiting the topic later.\n\nKyotaro-san worried about whether any other places in stats.spec\nhave the same issue. I've not seen any evidence of that in my\ntests, but perhaps some other machine with different timing\ncould find it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 May 2022 01:25:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I think what might be happening is that the transactional stats updates get\n>> reported by s2 *before* the non-transactional stats updates come in from\n>> s1. I.e. the pgstat_report_stat() at the end of s2_commit_prepared_a does a\n>> report, because the machine is slow enough for it to be \"time to reports stats\n>> again\". Then s1 reports its non-transactional stats.\n\n> Sounds plausible. And I left the test loop running, and it's now past\n> 100 consecutive successes, so I think this change definitely \"fixes\" it.\n\nIn the light of morning, at least half of the parameter dependency is\nnow obvious: the problematic test case involves a prepared transaction,\nso it fails completely with max_prepared_transactions = 0. The isolation\ntest harness masks that by matching against stats_1.out, but it's not\nreally a \"success\".\n\nMy numbers do still suggest that there's a weak dependence on\nmax_connections, but it's possible that that's a mirage. I did not run\nenough test cycles to be able to say positively that that's a real effect\n(and the machine's slow enough that I'm not excited about doing so).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 May 2022 11:34:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-20 01:25:10 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-05-20 00:22:14 -0400, Tom Lane wrote:\n> >> There's some fallout in the expected-file, of course, but this\n> >> does seem to fix it (20 consecutive successful runs now at\n> >> 100/2). Don't see why though ...\n> \n> > I think what might be happening is that the transactional stats updates get\n> > reported by s2 *before* the non-transactional stats updates come in from\n> > s1. I.e. the pgstat_report_stat() at the end of s2_commit_prepared_a does a\n> > report, because the machine is slow enough for it to be \"time to reports stats\n> > again\". Then s1 reports its non-transactional stats.\n> \n> Sounds plausible. And I left the test loop running, and it's now past\n> 100 consecutive successes, so I think this change definitely \"fixes\" it.\n\nFWIW, the problem can be reliably reproduced by sticking a\npgstat_force_next_flush() into pgstat_twophase_postcommit(). This is the only\nfailure when doing so.\n\n\n> > It looks like our stats maintenance around truncation isn't quite \"concurrency\n> > safe\". That code hasn't meaningfully changed, but it'd not be surprising if\n> > it's not 100% precise...\n> \n> Yeah. Probably not something to try to improve post-beta, especially\n> since it's not completely clear how transactional and non-transactional\n> cases *should* interact.\n\nYea. It's also not normally particularly crucial to be accurate down to that\ndegree.\n\n\n> Maybe non-transactional updates should be\n> pushed immediately? But I'm not sure if that's fully correct, and\n> it definitely sounds expensive.\n\nI think that'd be far too expensive - the majority of stats are\nnon-transactional...\n\nI think what we could do is to model truncates as subtracting the number of\nlive/dead rows the truncating backend knows about, rather than setting them to\n0. 
But that of course could incur other inaccuracies.\n\n\n> I'd be good with tweaking this test case as you suggest, and maybe\n> revisiting the topic later.\n\nPushed the change of the test. Christoph, just to make sure, can you confirm\nthat this fixes the test instability for you?\n\n\n> Kyotaro-san worried about whether any other places in stats.spec\n> have the same issue. I've not seen any evidence of that in my\n> tests, but perhaps some other machine with different timing\n> could find it.\n\nI tried to find some by putting in forced flushes in a bunch of places before,\nand now some more, without finding further cases.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 22 May 2022 15:29:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
},
{
"msg_contents": "Re: Andres Freund\n> > I'd be good with tweaking this test case as you suggest, and maybe\n> > revisiting the topic later.\n> \n> Pushed the change of the test. Christoph, just to make sure, can you confirm\n> that this fixes the test instability for you?\n\nUnfortunately I could not reproduce the problem on the mipsel porter\nbox I have access to (which is the same box used as mips64el porter\nbox for the other thread). Running the \"stats\" test 30 times in a loop\nalways made it pass.\n\nOn the Debian buildds, the build has succeeded now in the 3rd try:\n\nhttps://buildd.debian.org/status/logs.php?pkg=postgresql-15&ver=15%7Ebeta1-1&arch=mipsel\n\nChristoph\n\n\n",
"msg_date": "Tue, 24 May 2022 21:00:33 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: 15beta1 test failure on mips in isolation/expected/stats"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPrompted by a question on IRC, here's a patch to add a result_types\ncolumn to the pg_prepared_statements view, so that one can see the types\nof the columns returned by a prepared statement, not just the parameter\ntypes.\n\nI'm not quite sure about the column name, suggestions welcome.\n\n- ilmari",
"msg_date": "Thu, 19 May 2022 16:34:05 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Hi hackers,\n>\n> Prompted by a question on IRC, here's a patch to add a result_types\n> column to the pg_prepared_statements view, so that one can see the types\n> of the columns returned by a prepared statement, not just the parameter\n> types.\n\nAdded to the 2022-07 commitfest: https://commitfest.postgresql.org/38/3644/\n\n- ilmari\n\n\n",
"msg_date": "Thu, 19 May 2022 16:39:13 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "On 19.05.22 17:34, Dagfinn Ilmari Mannsåker wrote:\n> Prompted by a question on IRC, here's a patch to add a result_types\n> column to the pg_prepared_statements view, so that one can see the types\n> of the columns returned by a prepared statement, not just the parameter\n> types.\n> \n> I'm not quite sure about the column name, suggestions welcome.\n\nI think this patch is sensible.\n\nI see one issue: When you describe a prepared statement via the \nprotocol, if a result field has a domain as its type, the RowDescription \nmessage sends the underlying base type, not the domain type directly \n(see SendRowDescriptionMessage()). But it doesn't do that for the \nparameters (see exec_describe_statement_message()). I don't know why \nthat is; the protocol documentation doesn't mention it. Might be worth \nlooking into, and checking whether the analogous information contained \nin this view should be made consistent.\n\n\n",
"msg_date": "Fri, 1 Jul 2022 12:28:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 19.05.22 17:34, Dagfinn Ilmari Mannsåker wrote:\n>> Prompted by a question on IRC, here's a patch to add a result_types\n>> column to the pg_prepared_statements view, so that one can see the types\n>> of the columns returned by a prepared statement, not just the parameter\n>> types.\n>> I'm not quite sure about the column name, suggestions welcome.\n>\n> I think this patch is sensible.\n>\n> I see one issue: When you describe a prepared statement via the\n> protocol, if a result field has a domain as its type, the RowDescription \n> message sends the underlying base type, not the domain type directly\n> (see SendRowDescriptionMessage()). But it doesn't do that for the \n> parameters (see exec_describe_statement_message()). I don't know why\n> that is; the protocol documentation doesn't mention it. Might be worth \n> looking into, and checking whether the analogous information contained\n> in this view should be made consistent.\n\nA bit of git archaeology shows that the change was made by Tom (Cc-ed)\nin 7.4:\n\ncommit d9b679c13a820eb7b464a1eeb1f177c3fea13ece\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2003-05-13 18:39:50 +0000\n\n In RowDescription messages, report columns of domain datatypes as having\n the type OID and typmod of the underlying base type. Per discussions\n a few weeks ago with Andreas Pflug and others. 
Note that this behavioral\n change affects both old- and new-protocol clients.\n\nI can't find that discussion in the archive, but someone did complain\nabout it shortly after:\n\nhttps://www.postgresql.org/message-id/flat/D71A1574-A772-11D7-913D-0030656EE7B2%40icx.net\n\nI think in this case returning the domain type is more useful, since\nit's easy to get from that to the base type, but not vice versa.\n\nThe arguments about client-side type-specific value handling for\nRowDescription don't apply here IMO, since this view is more\nuser-facing.\n\n- ilmari\n\n\n",
"msg_date": "Fri, 01 Jul 2022 13:27:20 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "\nOn 01.07.22 14:27, Dagfinn Ilmari Mannsåker wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> \n>> On 19.05.22 17:34, Dagfinn Ilmari Mannsåker wrote:\n>>> Prompted by a question on IRC, here's a patch to add a result_types\n>>> column to the pg_prepared_statements view, so that one can see the types\n>>> of the columns returned by a prepared statement, not just the parameter\n>>> types.\n>>> I'm not quite sure about the column name, suggestions welcome.\n>>\n>> I think this patch is sensible.\n\n> The arguments about client-side type-specific value handling for\n> RowDescription don't apply here IMO, since this view is more\n> user-facing.\n\nI agree. It's also easy to change if needed. Committed as is.\n\n\n",
"msg_date": "Tue, 5 Jul 2022 07:34:24 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "On Tue, 5 Jul 2022, at 06:34, Peter Eisentraut wrote:\n> On 01.07.22 14:27, Dagfinn Ilmari Mannsåker wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> \n>>> On 19.05.22 17:34, Dagfinn Ilmari Mannsåker wrote:\n>>>> Prompted by a question on IRC, here's a patch to add a result_types\n>>>> column to the pg_prepared_statements view, so that one can see the types\n>>>> of the columns returned by a prepared statement, not just the parameter\n>>>> types.\n>>>> I'm not quite sure about the column name, suggestions welcome.\n>>>\n>>> I think this patch is sensible.\n>\n>> The arguments about client-side type-specific value handling for\n>> RowDescription don't apply here IMO, since this view is more\n>> user-facing.\n>\n> I agree. It's also easy to change if needed. Committed as is.\n\nThanks!\n\n\n",
"msg_date": "Tue, 05 Jul 2022 08:31:27 +0100",
"msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "On 05.07.22 09:31, Dagfinn Ilmari Mannsåker wrote:\n> On Tue, 5 Jul 2022, at 06:34, Peter Eisentraut wrote:\n>> On 01.07.22 14:27, Dagfinn Ilmari Mannsåker wrote:\n>>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>>\n>>>> On 19.05.22 17:34, Dagfinn Ilmari Mannsåker wrote:\n>>>>> Prompted by a question on IRC, here's a patch to add a result_types\n>>>>> column to the pg_prepared_statements view, so that one can see the types\n>>>>> of the columns returned by a prepared statement, not just the parameter\n>>>>> types.\n>>>>> I'm not quite sure about the column name, suggestions welcome.\n>>>>\n>>>> I think this patch is sensible.\n>>\n>>> The arguments about client-side type-specific value handling for\n>>> RowDescription don't apply here IMO, since this view is more\n>>> user-facing.\n>>\n>> I agree. It's also easy to change if needed. Committed as is.\n> \n> Thanks!\n\nThere was a problem that we didn't cover: Not all prepared statements \nhave result descriptors (e.g., DML statements), so that would crash as \nwritten. I have changed it to return null for result_types in that \ncase, and added a test case.\n\n\n",
"msg_date": "Tue, 5 Jul 2022 11:20:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> There was a problem that we didn't cover: Not all prepared statements\n> have result descriptors (e.g., DML statements), so that would crash as \n> written. \n\nD'oh!\n\n> I have changed it to return null for result_types in that case, and\n> added a test case.\n\nThanks for spotting and fixing that.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 05 Jul 2022 10:27:54 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add result_types column to pg_prepared_statements view"
}
] |
[
{
"msg_contents": "Greetings!\n\nI was wondering if you could provide me with initial feedback on my GSoC proposal, as well as if you have any comments about it. And would it be possible to know whether I got accepted as a contributor?\n\nBest Regards,\nIsraa Odeh.\nSent from Mail<https://go.microsoft.com/fwlink/?LinkId=550986> for Windows\n\n\n\n\n\n\n\n\n\n\nGreetings!\n\n\nI was wondering if you could provide me with initial feedback on my GSoC proposal, as well as if you have any comments about it. And would it be possible to know whether I got accepted as a contributor?\n\nBest Regards,\nIsraa Odeh.\nSent from \nMail for Windows",
"msg_date": "Thu, 19 May 2022 18:12:39 +0000",
"msg_from": "Israa Odeh <israa2110@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Inquiring about my GSoC Proposal."
},
{
"msg_contents": "Greetings,\n\n* Israa Odeh (israa2110@hotmail.com) wrote:\n> I was wondering if you could provide me with initial feedback on my GSoC proposal, as well as if you have any comments about it. And would it be possible to know whether I got accepted as a contributor?\n\nGoogle published this information and all accepted contributors will be\nhearing from the mentors soon.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 20 May 2022 15:52:25 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Inquiring about my GSoC Proposal."
}
] |
[
{
"msg_contents": "./tmp_install/usr/local/pgsql/bin/postgres -D ./src/test/regress/tmp_check/data -c min_dynamic_shared_memory=1MB\n\nTRAP: FailedAssertion(\"val > base\", File: \"../../../../src/include/utils/relptr.h\", Line: 67, PID: 21912)\n./tmp_install/usr/local/pgsql/bin/postgres(ExceptionalCondition+0xa0)[0x55af5c9c463e]\n./tmp_install/usr/local/pgsql/bin/postgres(FreePageManagerInitialize+0x94)[0x55af5c9f4478]\n./tmp_install/usr/local/pgsql/bin/postgres(dsm_shmem_init+0x87)[0x55af5c841532]\n./tmp_install/usr/local/pgsql/bin/postgres(CreateSharedMemoryAndSemaphores+0x8d)[0x55af5c843f30]\n./tmp_install/usr/local/pgsql/bin/postgres(+0x41805c)[0x55af5c7c605c]\n./tmp_install/usr/local/pgsql/bin/postgres(PostmasterMain+0x959)[0x55af5c7ca8e7]\n./tmp_install/usr/local/pgsql/bin/postgres(main+0x229)[0x55af5c70af7f]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7f736d6e1b97]\n./tmp_install/usr/local/pgsql/bin/postgres(_start+0x2a)[0x55af5c4794fa]\n\nIt looks like this may be pre-existing problem exposed by\n\ncommit e07d4ddc55fdcf82082950b3eb0cd8f728284c9d\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Sat Mar 26 14:29:29 2022 -0400\n\n Suppress compiler warning in relptr_store().\n\n\n\n",
"msg_date": "Thu, 19 May 2022 14:38:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg15b1: FailedAssertion(\"val > base\", File:\n \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> ./tmp_install/usr/local/pgsql/bin/postgres -D ./src/test/regress/tmp_check/data -c min_dynamic_shared_memory=1MB\n> TRAP: FailedAssertion(\"val > base\", File: \"../../../../src/include/utils/relptr.h\", Line: 67, PID: 21912)\n\nYeah, I see it too.\n\n> It looks like this may be pre-existing problem exposed by\n> commit e07d4ddc55fdcf82082950b3eb0cd8f728284c9d\n\nAgreed. Here I see\n\n#5 FreePageManagerInitialize (fpm=fpm@entry=0x7f34b3ddd300, \n base=base@entry=0x7f34b3ddd300 \"\") at freepage.c:187\n#6 0x000000000082423e in dsm_shmem_init () at dsm.c:473\n\nso that where we do\n\n\trelptr_store(base, fpm->self, fpm);\n\nthe \"relative\" pointer value would have to be zero, making the case\nindistinguishable from a NULL pointer. We can either change the\ncaller so that these addresses aren't the same, or give up the\nability to store NULL in relptrs ... doesn't seem like a hard call.\n\nOne interesting question I didn't look into is why it takes a nondefault\nvalue of min_dynamic_shared_memory to expose this bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 May 2022 17:16:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "At Thu, 19 May 2022 17:16:03 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > ./tmp_install/usr/local/pgsql/bin/postgres -D ./src/test/regress/tmp_check/data -c min_dynamic_shared_memory=1MB\n> > TRAP: FailedAssertion(\"val > base\", File: \"../../../../src/include/utils/relptr.h\", Line: 67, PID: 21912)\n> \n> Yeah, I see it too.\n> \n> > It looks like this may be pre-existing problem exposed by\n> > commit e07d4ddc55fdcf82082950b3eb0cd8f728284c9d\n> \n> Agreed. Here I see\n> \n> #5 FreePageManagerInitialize (fpm=fpm@entry=0x7f34b3ddd300, \n> base=base@entry=0x7f34b3ddd300 \"\") at freepage.c:187\n> #6 0x000000000082423e in dsm_shmem_init () at dsm.c:473\n> \n> so that where we do\n> \n> \trelptr_store(base, fpm->self, fpm);\n> \n> the \"relative\" pointer value would have to be zero, making the case\n> indistinguishable from a NULL pointer. We can either change the\n> caller so that these addresses aren't the same, or give up the\n> ability to store NULL in relptrs ... doesn't seem like a hard call.\n> \n> One interesting question I didn't look into is why it takes a nondefault\n> value of min_dynamic_shared_memory to expose this bug.\n\nThe path is taken only when a valid value is given to the\nparameter. If we don't use preallocated dsm, it is initialized\nelsewhere. In those cases the first bytes of the base address (the\nsecond parameter of FreePageManagerInitialize) are used for\ndsa_segment_header so the relptr won't be zero (!= NULL).\n\nIt can be silenced by wasting the first MAXALIGN bytes of\ndsm_main_space_begin..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 20 May 2022 12:00:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\", File:\n \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "At Fri, 20 May 2022 12:00:14 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 19 May 2022 17:16:03 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> > > ./tmp_install/usr/local/pgsql/bin/postgres -D ./src/test/regress/tmp_check/data -c min_dynamic_shared_memory=1MB\n> > > TRAP: FailedAssertion(\"val > base\", File: \"../../../../src/include/utils/relptr.h\", Line: 67, PID: 21912)\n> > \n> > Yeah, I see it too.\n> > \n> > > It looks like this may be pre-existing problem exposed by\n> > > commit e07d4ddc55fdcf82082950b3eb0cd8f728284c9d\n> > \n> > Agreed. Here I see\n> > \n> > #5 FreePageManagerInitialize (fpm=fpm@entry=0x7f34b3ddd300, \n> > base=base@entry=0x7f34b3ddd300 \"\") at freepage.c:187\n> > #6 0x000000000082423e in dsm_shmem_init () at dsm.c:473\n> > \n> > so that where we do\n> > \n> > \trelptr_store(base, fpm->self, fpm);\n> > \n> > the \"relative\" pointer value would have to be zero, making the case\n> > indistinguishable from a NULL pointer. We can either change the\n> > caller so that these addresses aren't the same, or give up the\n> > ability to store NULL in relptrs ... doesn't seem like a hard call.\n> > \n> > One interesting question I didn't look into is why it takes a nondefault\n> > value of min_dynamic_shared_memory to expose this bug.\n> \n> The path is taken only when a valid value is given to the\n> parameter. If we don't use preallocated dsm, it is initialized\n> elsewhere. In those cases the first bytes of the base address (the\n> second parameter of FreePageManagerInitialize) are used for\n> dsa_segment_header so the relptr won't be zero (!= NULL).\n> \n> It can be silenced by wasting the first MAXALIGN bytes of\n> dsm_main_space_begin..\n\nActually, that change doesn't result in wasting of usable memory size\nsince the change doesn't move the first effective page.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 31 May 2022 14:05:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\", File:\n \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Thu, May 19, 2022 at 11:00 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> The path is taken only when a valid value is given to the\n> parameter. If we don't use preallocated dsm, it is initialized\n> elsewhere. In those cases the first bytes of the base address (the\n> second parameter of FreePageManagerInitialize) are used for\n> dsa_segment_header so the relptr won't be zero (!= NULL).\n>\n> It can be silenced by wasting the first MAXALIGN bytes of\n> dsm_main_space_begin..\n\nYeah, so when I created this stuff in the first place, I figured that\nit wasn't a problem if we reserved relptr == 0 to mean a NULL pointer,\nbecause you would never have a relative pointer pointing to the\nbeginning of a DSM, because it would probably always start with a\ndsm_toc. But when Thomas made it so that DSM allocations could happen\nin the main shared memory segment, that ceased to be true. This\nexample happened not to break because we never use relptr_access() on\nfpm->self. We do use fpm_segment_base(), but that accidentally fails\nto break, because instead of using relptr_access() it drills right\nthrough the abstraction and doesn't have any kind of special case for\n0. So we can fix this by:\n\n1. Using a relative pointer value other than 0 to represent a null\npointer. Andres suggested (Size) -1.\n2. Not storing the free page manager for the DSM in the main shared\nmemory segment at byte offset 0.\n3. Dropping the assertion while loudly singing \"la la la la la la\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 15:57:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Yeah, so when I created this stuff in the first place, I figured that\n> it wasn't a problem if we reserved relptr == 0 to mean a NULL pointer,\n> because you would never have a relative pointer pointing to the\n> beginning of a DSM, because it would probably always start with a\n> dsm_toc. But when Thomas made it so that DSM allocations could happen\n> in the main shared memory segment, that ceased to be true. This\n> example happened not to break because we never use relptr_access() on\n> fpm->self. We do use fpm_segment_base(), but that accidentally fails\n> to break, because instead of using relptr_access() it drills right\n> through the abstraction and doesn't have any kind of special case for\n> 0.\n\nSeems like that in itself is a a lousy idea. Either the code should\nrespect the abstraction, or it shouldn't be declaring the variable\nas a relptr in the first place.\n\n> So we can fix this by:\n> 1. Using a relative pointer value other than 0 to represent a null\n> pointer. Andres suggested (Size) -1.\n> 2. Not storing the free page manager for the DSM in the main shared\n> memory segment at byte offset 0.\n> 3. Dropping the assertion while loudly singing \"la la la la la la\".\n\nI'm definitely down on #3, because that just leaves the ambiguity\nin place to bite somewhere else in future. #1 would work as long\nas nobody expects memset-to-zero to produce null relptrs, but that\ndoesn't seem very nice either.\n\nOn the whole, wasting MAXALIGN worth of memory seems like the least bad\nalternative, but I wonder if we ought to do it right here as opposed\nto somewhere in the DSM code proper. Why is this DSM space not like\nother DSM spaces in starting with a TOC?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 16:10:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 8:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > So we can fix this by:\n> > 1. Using a relative pointer value other than 0 to represent a null\n> > pointer. Andres suggested (Size) -1.\n> > 2. Not storing the free page manager for the DSM in the main shared\n> > memory segment at byte offset 0.\n> > 3. Dropping the assertion while loudly singing \"la la la la la la\".\n>\n> I'm definitely down on #3, because that just leaves the ambiguity\n> in place to bite somewhere else in future. #1 would work as long\n> as nobody expects memset-to-zero to produce null relptrs, but that\n> doesn't seem very nice either.\n>\n> On the whole, wasting MAXALIGN worth of memory seems like the least bad\n> alternative, but I wonder if we ought to do it right here as opposed\n> to somewhere in the DSM code proper. Why is this DSM space not like\n> other DSM spaces in starting with a TOC?\n\nThis FPM isn't in a DSM. (It happens to have DSMs *inside it*,\nbecause I'm using it as a separate DSM allocator: instead of making\nthem with dsm_impl.c mechanisms, this one recycles space from the main\nshmem area). I view FPM as a reusable 4kb page-based memory allocator\nthat could have many potential uses, not as a thing that must live\ninside another thing with a TOC. The fact that it uses the relptr\nthing makes it possible to use FPM inside DSMs too, but that doesn't\nmean it has to be used inside a DSM.\n\nI vote for #1.\n\n\n",
"msg_date": "Wed, 1 Jun 2022 08:32:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "On Tue, May 31, 2022 at 4:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Seems like that in itself is a a lousy idea. Either the code should\n> respect the abstraction, or it shouldn't be declaring the variable\n> as a relptr in the first place.\n\nYep. I think it should be respecting the abstraction, but the 2016\nversion of me failed to realize the issue when committing 13e14a78ea1.\nHindsight is 20-20, perhaps.\n\n> > So we can fix this by:\n> > 1. Using a relative pointer value other than 0 to represent a null\n> > pointer. Andres suggested (Size) -1.\n> > 2. Not storing the free page manager for the DSM in the main shared\n> > memory segment at byte offset 0.\n> > 3. Dropping the assertion while loudly singing \"la la la la la la\".\n>\n> I'm definitely down on #3, because that just leaves the ambiguity\n> in place to bite somewhere else in future. #1 would work as long\n> as nobody expects memset-to-zero to produce null relptrs, but that\n> doesn't seem very nice either.\n\nWell, that's a good point that I hadn't considered, actually. I was\nthinking I'd only picked 0 as the value out of adherence to\nconvention, but I might have had this in mind too, at the time.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 16:58:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "On Tue, May 31, 2022 at 4:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> This FPM isn't in a DSM. (It happens to have DSMs *inside it*,\n> because I'm using it as a separate DSM allocator: instead of making\n> them with dsm_impl.c mechanisms, this one recycles space from the main\n> shmem area). I view FPM as a reusable 4kb page-based memory allocator\n> that could have many potential uses, not as a thing that must live\n> inside another thing with a TOC. The fact that it uses the relptr\n> thing makes it possible to use FPM inside DSMs too, but that doesn't\n> mean it has to be used inside a DSM.\n\nCould it use something other than its own address as the base address?\nOne way to do this would be to put it at the *end* of the\n\"Preallocated DSM\" space, rather than the beginning.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 17:01:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Could it use something other than its own address as the base address?\n\nHmm, maybe we could make something of that idea ...\n\n> One way to do this would be to put it at the *end* of the\n> \"Preallocated DSM\" space, rather than the beginning.\n\n... but that way doesn't sound good. Doesn't it just move the\nproblem to the first object allocated inside the FPM?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 17:09:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 9:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Could it use something other than its own address as the base address?\n>\n> Hmm, maybe we could make something of that idea ...\n>\n> > One way to do this would be to put it at the *end* of the\n> > \"Preallocated DSM\" space, rather than the beginning.\n>\n> ... but that way doesn't sound good. Doesn't it just move the\n> problem to the first object allocated inside the FPM?\n\nCount we make the relptrs 1-based, so that 0 is reserved as a sentinel\nthat has the nice memset(0) property?\n\n\n",
"msg_date": "Wed, 1 Jun 2022 09:26:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Count we make the relptrs 1-based, so that 0 is reserved as a sentinel\n> that has the nice memset(0) property?\n\nHm ... almost. A +1 offset would mean that zero is ambiguous with a\npointer to the byte just before the relptr. Maybe that case never\narises in practice, but now that we've seen this problem I'm not real\ncomfortable with such an assumption. But how about a -1 offset?\nThen zero would be ambiguous with a pointer to the second byte of the\nrelptr, and I think I *am* prepared to assume that that has no use-cases.\n\nThe other advantage of such a definition is that it'd help flush out\nanybody breaking the relptr abstraction ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 17:52:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Tue, May 31, 2022 at 5:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Count we make the relptrs 1-based, so that 0 is reserved as a sentinel\n> > that has the nice memset(0) property?\n>\n> Hm ... almost. A +1 offset would mean that zero is ambiguous with a\n> pointer to the byte just before the relptr. Maybe that case never\n> arises in practice, but now that we've seen this problem I'm not real\n> comfortable with such an assumption. But how about a -1 offset?\n> Then zero would be ambiguous with a pointer to the second byte of the\n> relptr, and I think I *am* prepared to assume that that has no use-cases.\n>\n> The other advantage of such a definition is that it'd help flush out\n> anybody breaking the relptr abstraction ;-)\n\nSeems backwards to me. A relative pointer is supposed to point to\nsomething inside some range of memory, like a DSM gment -- it can\nnever be legally used to point to anything outside that segment. So it\nseems to me that you could perfectly legally point to the second byte\nof the segment, but never to the -1'th byte.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 18:04:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Seems backwards to me. A relative pointer is supposed to point to\n> something inside some range of memory, like a DSM gment -- it can\n> never be legally used to point to anything outside that segment. So it\n> seems to me that you could perfectly legally point to the second byte\n> of the segment, but never to the -1'th byte.\n\nOkay, I was thinking about it slightly wrong: relptr is defined as an\noffset relative to some base address, not to its own address. As long\nas you're prepared to assume that the base address really is the start\nof the addressable area, then yeah the above argument works.\n\nHowever, now that I've corrected that mistaken image ... I wonder if\nit could make sense to redefine relptr as self-relative? That ought\nto provide some notational savings since you'd only need to carry\naround the relptr's own address not that plus a base address.\nProbably not something to consider for v15 though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 18:14:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Tue, May 31, 2022 at 6:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> However, now that I've corrected that mistaken image ... I wonder if\n> it could make sense to redefine relptr as self-relative? That ought\n> to provide some notational savings since you'd only need to carry\n> around the relptr's own address not that plus a base address.\n> Probably not something to consider for v15 though.\n\nI think that would be pretty hard to make work, since copying around a\nrelative pointer would change its meaning. Code like \"relptr_foo x =\n*y\" would be broken, for example, but the compiler would not complain.\nOr maybe I misunderstand your idea?\n\nAlso keep in mind that the major use case here is DSM segments, which\ncan be mapped at different addresses in different processes. Mainly,\nwe expect to store relative pointers in the segment to other things in\nthe same segment. Sometimes, we might read the values from there into\nlocal variables - or maybe global variables - in code that is\naccessing those DSM segments for some purpose.\n\nThere is little use for a relative pointer that can access all of the\naddress space that exists. For that, it is better to just as a regular\npointer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 18:29:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 31, 2022 at 6:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, now that I've corrected that mistaken image ... I wonder if\n>> it could make sense to redefine relptr as self-relative? That ought\n>> to provide some notational savings since you'd only need to carry\n>> around the relptr's own address not that plus a base address.\n>> Probably not something to consider for v15 though.\n\n> I think that would be pretty hard to make work, since copying around a\n> relative pointer would change its meaning. Code like \"relptr_foo x =\n> *y\" would be broken, for example, but the compiler would not complain.\n\nSure, but the current definition is far from error-proof as well:\nnothing stops you from using the wrong base address with a relptr's\nvalue. Anyway, it's just idle speculation at this point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 18:39:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "At Tue, 31 May 2022 16:10:05 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \ntgl> Robert Haas <robertmhaas@gmail.com> writes:\ntgl> > Yeah, so when I created this stuff in the first place, I figured that\ntgl> > it wasn't a problem if we reserved relptr == 0 to mean a NULL pointer,\ntgl> > because you would never have a relative pointer pointing to the\ntgl> > beginning of a DSM, because it would probably always start with a\ntgl> > dsm_toc. But when Thomas made it so that DSM allocations could happen\ntgl> > in the main shared memory segment, that ceased to be true. This\ntgl> > example happened not to break because we never use relptr_access() on\ntgl> > fpm->self. We do use fpm_segment_base(), but that accidentally fails\ntgl> > to break, because instead of using relptr_access() it drills right\ntgl> > through the abstraction and doesn't have any kind of special case for\ntgl> > 0.\ntgl> \ntgl> Seems like that in itself is a a lousy idea. Either the code should\ntgl> respect the abstraction, or it shouldn't be declaring the variable\ntgl> as a relptr in the first place.\ntgl> \ntgl> > So we can fix this by:\ntgl> > 1. Using a relative pointer value other than 0 to represent a null\ntgl> > pointer. Andres suggested (Size) -1.\ntgl> > 2. Not storing the free page manager for the DSM in the main shared\ntgl> > memory segment at byte offset 0.\ntgl> > 3. Dropping the assertion while loudly singing \"la la la la la la\".\ntgl> \ntgl> I'm definitely down on #3, because that just leaves the ambiguity\ntgl> in place to bite somewhere else in future. #1 would work as long\ntgl> as nobody expects memset-to-zero to produce null relptrs, but that\ntgl> doesn't seem very nice either.\ntgl> \ntgl> On the whole, wasting MAXALIGN worth of memory seems like the least bad\ntgl> alternative, but I wonder if we ought to do it right here as opposed\ntgl> to somewhere in the DSM code proper. 
Why is this DSM space not like\ntgl> other DSM spaces in starting with a TOC?\ntgl> \ntgl> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jun 2022 11:42:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\", File:\n \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "At Tue, 31 May 2022 15:57:14 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> 1. Using a relative pointer value other than 0 to represent a null\n> pointer. Andres suggested (Size) -1.\n\nI thought that relptr as a part of DSM so the use of offset=0 is\nsomewhat illegal. But I like this. We can fix this by this\nmodification. I think ((Size) -1) is natural to signal something\nspecial. (I see glibc uses \"(size_t) -1\".)\n\n> 2. Not storing the free page manager for the DSM in the main shared\n> memory segment at byte offset 0.\n> 3. Dropping the assertion while loudly singing \"la la la la la la\".\n\nreagards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 01 Jun 2022 11:51:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\", File:\n \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "At Wed, 01 Jun 2022 11:42:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \nme> At Tue, 31 May 2022 16:10:05 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \nme> tgl> Robert Haas <robertmhaas@gmail.com> writes:\nme> tgl> > Yeah, so when I created this stuff in the first place, I figured that\nme> tgl> > it wasn't a problem if we reserved relptr == 0 to mean a NULL pointer,\n\nMmm. Sorry. It's just an accidental shooting.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 01 Jun 2022 11:53:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\", File:\n \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 2:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> We do use fpm_segment_base(), but that accidentally fails\n> to break, because instead of using relptr_access() it drills right\n> through the abstraction and doesn't have any kind of special case for\n> 0. So we can fix this by:\n>\n> 1. Using a relative pointer value other than 0 to represent a null\n> pointer. Andres suggested (Size) -1.\n> 2. Not storing the free page manager for the DSM in the main shared\n> memory segment at byte offset 0.\n\nHi all,\n\nFor this open item, the above two ideas were discussed as a short-term\nfix, and my reading of the thread is that the other proposals are too\ninvasive at this point in the cycle. Both of them have a draft patch\nin the thread. #2, i.e. wasting MAXALIGN of space, seems the simplest\nand most localized. Any thoughts on pulling the trigger on either of\nthese two approaches?\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 09:49:50 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Wed, Jun 1, 2022 at 2:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> ... So we can fix this by:\n>> 1. Using a relative pointer value other than 0 to represent a null\n>> pointer. Andres suggested (Size) -1.\n>> 2. Not storing the free page manager for the DSM in the main shared\n>> memory segment at byte offset 0.\n\n> For this open item, the above two ideas were discussed as a short-term\n> fix, and my reading of the thread is that the other proposals are too\n> invasive at this point in the cycle. Both of them have a draft patch\n> in the thread. #2, i.e. wasting MAXALIGN of space, seems the simplest\n> and most localized. Any thoughts on pulling the trigger on either of\n> these two approaches?\n\nI'm still of the opinion that 0 == NULL is a good property to have,\nso I vote for #2.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Jun 2022 22:54:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\", Line: 67, PID: 30485)"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 2:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Wed, Jun 1, 2022 at 2:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >> ... So we can fix this by:\n> >> 1. Using a relative pointer value other than 0 to represent a null\n> >> pointer. Andres suggested (Size) -1.\n> >> 2. Not storing the free page manager for the DSM in the main shared\n> >> memory segment at byte offset 0.\n\nFor the record, the third idea proposed was to use 1 for the first\nbyte, so that 0 is reserved for NULL and works with memset(0). Here's\nan attempt at that.",
"msg_date": "Wed, 22 Jun 2022 16:24:06 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 4:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Jun 22, 2022 at 2:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > John Naylor <john.naylor@enterprisedb.com> writes:\n> > > On Wed, Jun 1, 2022 at 2:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >> ... So we can fix this by:\n> > >> 1. Using a relative pointer value other than 0 to represent a null\n> > >> pointer. Andres suggested (Size) -1.\n> > >> 2. Not storing the free page manager for the DSM in the main shared\n> > >> memory segment at byte offset 0.\n>\n> For the record, the third idea proposed was to use 1 for the first\n> byte, so that 0 is reserved for NULL and works with memset(0). Here's\n> an attempt at that.\n\n... erm, though, duh, I forgot to adjust Assert(val > base). One more time.",
"msg_date": "Wed, 22 Jun 2022 16:33:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 12:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > For the record, the third idea proposed was to use 1 for the first\n> > byte, so that 0 is reserved for NULL and works with memset(0). Here's\n> > an attempt at that.\n>\n> ... erm, though, duh, I forgot to adjust Assert(val > base). One more time.\n\nI like this idea and think this might have the side benefit of making\nit harder to get away with accessing relptr_off directly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 10:09:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
},
{
"msg_contents": "On Thu, Jun 23, 2022 at 2:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jun 22, 2022 at 12:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > For the record, the third idea proposed was to use 1 for the first\n> > > byte, so that 0 is reserved for NULL and works with memset(0). Here's\n> > > an attempt at that.\n> >\n> > ... erm, though, duh, I forgot to adjust Assert(val > base). One more time.\n>\n> I like this idea and think this might have the side benefit of making\n> it harder to get away with accessing relptr_off directly.\n\nThanks. Pushed, and back-patched to 14, where\nmin_dynamic_shared_memory arrived.\n\nI wondered in passing if the stuff about relptr_declare() was still\nneeded to avoid confusing pgindent, since we tweaked the indent code a\nbit for macros that take a typename, but it seems that it still\nmangles \"relptr(FooBar) some_struct_member;\", putting extra whitespace\nin front of it. Hmmph.\n\n\n",
"msg_date": "Mon, 27 Jun 2022 13:14:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b1: FailedAssertion(\"val > base\",\n File: \"...src/include/utils/relptr.h\",\n Line: 67, PID: 30485)"
}
]
[
{
"msg_contents": "Hackers,\n\nOver the past few days I've been gathering some benchmark results\ntogether to show the sort performance improvements in PG15 [1].\n\nOne of the test cases I did was to demonstrate Heikki's change to use\na k-way merge (65014000b).\n\nThe test I did to try this out was along the lines of:\n\nset max_parallel_workers_per_gather = 0;\ncreate table t (a bigint not null, b bigint not null, c bigint not\nnull, d bigint not null, e bigint not null, f bigint not null);\n\ninsert into t select x,x,x,x,x,x from generate_Series(1,140247142) x; -- 10GB!\nvacuum freeze t;\n\nThe query I ran was:\n\nselect * from t order by a offset 140247142;\n\nI tested various sizes of work_mem starting at 4MB and doubled that\nall the way to 16GB. For many of the smaller values of work_mem the\nperformance is vastly improved by Heikki's change, however for\nwork_mem = 64MB I detected quite a large slowdown. PG14 took 20.9\nseconds and PG15 beta 1 took 29 seconds!\n\nI've been trying to get to the bottom of this today and finally have\ndiscovered this is due to the tuple size allocations in the sort being\nexactly 64 bytes. Prior to 40af10b57 (Use Generation memory contexts\nto store tuples in sorts) the tuple for the sort would be stored in an\naset context. After 40af10b57 we'll use a generation context. The\nidea with that change is that the generation context does no\npower-of-2 round ups for allocations, so we save memory in most cases.\nHowever, due to this particular test having a tuple size of 64-bytes,\nthere was no power-of-2 wastage with aset.\n\nThe problem is that generation chunks have a larger chunk header than\naset do due to having to store the block pointer that the chunk\nbelongs to so that GenerationFree() can increment the nfree chunks in\nthe block. aset.c does not require this as freed chunks just go onto a\nfreelist that's global to the entire context.\n\nBasically, for my test query, the slowdown is because instead of being\nable to store 620702 tuples per tape over 226 tapes with an aset\ncontext, we can now only store 576845 tuples per tape resulting in\nrequiring 244 tapes when using the generation context.\n\nIf I had added column \"g\" to make the tuple size 72 bytes causing\naset's code to round allocations up to 128 bytes and generation.c to\nmaintain the 72 bytes then the sort would have stored 385805 tuples\nover 364 batches for aset and 538761 tuples over 261 batches using the\ngeneration context. That would have been a huge win.\n\nSo it basically looks like I discovered a very bad case that causes a\nsignificant slowdown. Yet other cases that are not an exact power of\n2 stand to gain significantly from this change.\n\nOne thing 40af10b57 does is stops those terrible performance jumps\nwhen the tuple size crosses a power-of-2 boundary. The performance\nshould be more aligned to the size of the data being sorted now...\nUnfortunately, that seems to mean regressions for large sorts with\npower-of-2 sized tuples.\n\nI'm unsure exactly what I should do about this right now.\n\nDavid\n\n[1] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/speeding-up-sort-performance-in-postgres-15/ba-p/3396953#change4\n\n\n",
"msg_date": "Fri, 20 May 2022 17:56:06 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On 20/05/2022 08:56, David Rowley wrote:\n> The problem is that generation chunks have a larger chunk header than\n> aset do due to having to store the block pointer that the chunk\n> belongs to so that GenerationFree() can increment the nfree chunks in\n> the block. aset.c does not require this as freed chunks just go onto a\n> freelist that's global to the entire context.\n\nCould the 'context' field be moved from GenerationChunk to GenerationBlock?\n\n- Heikki\n\n\n",
"msg_date": "Fri, 20 May 2022 13:01:31 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On 5/20/22 12:01, Heikki Linnakangas wrote:\n> On 20/05/2022 08:56, David Rowley wrote:\n>> The problem is that generation chunks have a larger chunk header than\n>> aset do due to having to store the block pointer that the chunk\n>> belongs to so that GenerationFree() can increment the nfree chunks in\n>> the block. aset.c does not require this as freed chunks just go onto a\n>> freelist that's global to the entire context.\n> \n> Could the 'context' field be moved from GenerationChunk to GenerationBlock?\n> \n\nNot easily, because GetMemoryChunkContext() expects the context to be\nstored right before the chunk. In principle we could add \"get context\"\ncallback to MemoryContextMethods, so that different implementations can\noverride that.\n\nI wonder how expensive the extra redirect would be, but probably not\nmuch because the places touching chunk->context deal with the block too\n(e.g. GenerationFree has to tweak block->nfree).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 May 2022 22:32:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 5/20/22 12:01, Heikki Linnakangas wrote:\n>> Could the 'context' field be moved from GenerationChunk to GenerationBlock?\n\n> Not easily, because GetMemoryChunkContext() expects the context to be\n> stored right before the chunk. In principle we could add \"get context\"\n> callback to MemoryContextMethods, so that different implementations can\n> override that.\n\nHow would you know which context type to consult for that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 16:47:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 24 May 2022 at 08:32, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/20/22 12:01, Heikki Linnakangas wrote:\n> > Could the 'context' field be moved from GenerationChunk to GenerationBlock?\n> >\n>\n> Not easily, because GetMemoryChunkContext() expects the context to be\n> stored right before the chunk. In principle we could add \"get context\"\n> callback to MemoryContextMethods, so that different implementations can\n> override that.\n\nhmm, but we need to know the context first before we can know which\ncallback to call.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 May 2022 08:50:47 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "\n\nOn 5/23/22 22:47, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 5/20/22 12:01, Heikki Linnakangas wrote:\n>>> Could the 'context' field be moved from GenerationChunk to GenerationBlock?\n> \n>> Not easily, because GetMemoryChunkContext() expects the context to be\n>> stored right before the chunk. In principle we could add \"get context\"\n>> callback to MemoryContextMethods, so that different implementations can\n>> override that.\n> \n> How would you know which context type to consult for that?\n> \n\nD'oh! I knew there has to be some flaw in that idea, but I forgot about\nthis chicken-or-egg issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 May 2022 22:52:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 24 May 2022 at 08:52, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/23/22 22:47, Tom Lane wrote:\n> > How would you know which context type to consult for that?\n> >\n>\n> D'oh! I knew there has to be some flaw in that idea, but I forgot about\n> this chicken-or-egg issue.\n\nHandy wavy idea: It's probably too complex for now, and it also might\nbe too much overhead, but having GenerationPointerGetChunk() do a\nbinary search on a sorted-by-memory-address array of block pointers\nmight be a fast enough way to find the block that the pointer belongs\nto. There should be far fewer blocks now since generation.c now grows\nthe block sizes. N in O(log2 N) the search complexity should never be\nexcessively high.\n\nHowever, it would mean a binary search for every pfree, which feels\npretty horrible. My feeling is that it seems unlikely that saving 8\nbytes by not storing the GenerationBlock would be a net win here.\nThere may be something to claw back as for the tuplesort.c case as I\nthink the heap_free_minimal_tuple() call in writetup_heap() is not\nneeded in many cases as we reset the tuple context directly afterward\nwriting the tuples out.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 May 2022 09:20:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Handy wavy idea: It's probably too complex for now, and it also might\n> be too much overhead, but having GenerationPointerGetChunk() do a\n> binary search on a sorted-by-memory-address array of block pointers\n> might be a fast enough way to find the block that the pointer belongs\n> to. There should be far fewer blocks now since generation.c now grows\n> the block sizes. N in O(log2 N) the search complexity should never be\n> excessively high.\n\n> However, it would mean a binary search for every pfree, which feels\n> pretty horrible. My feeling is that it seems unlikely that saving 8\n> bytes by not storing the GenerationBlock would be a net win here.\n\nI think probably that could be made to work, since a Generation\ncontext should not contain all that many live blocks at any one time.\n\nHowever, here's a different idea: how badly do we need the \"size\"\nfield in GenerationChunk? We're not going to ever recycle the\nchunk, IIUC, so it doesn't matter exactly how big it is. When\ndoing MEMORY_CONTEXT_CHECKING we'll still want requested_size,\nbut that's not relevant to performance-critical cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 17:36:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 24 May 2022 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> However, here's a different idea: how badly do we need the \"size\"\n> field in GenerationChunk? We're not going to ever recycle the\n> chunk, IIUC, so it doesn't matter exactly how big it is. When\n> doing MEMORY_CONTEXT_CHECKING we'll still want requested_size,\n> but that's not relevant to performance-critical cases.\n\nInteresting idea. However, I do see a couple of usages of the \"size\"\nfield away from MEMORY_CONTEXT_CHECKING builds:\n\nGenerationRealloc: uses \"size\" to figure out if the new size is\nsmaller than the old size. Maybe we could just always move to a new\nchunk regardless of if the new size is smaller or larger than the old\nsize.\n\nGenerationGetChunkSpace: Uses \"size\" to figure out the amount of\nmemory is used by the chunk. I'm not sure how we'd work around the\nfact that USEMEM() uses that extensively in tuplesort.c to figure out\nhow much memory we're using.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 May 2022 09:56:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "I wrote:\n> However, here's a different idea: how badly do we need the \"size\"\n> field in GenerationChunk? We're not going to ever recycle the\n> chunk, IIUC, so it doesn't matter exactly how big it is. When\n> doing MEMORY_CONTEXT_CHECKING we'll still want requested_size,\n> but that's not relevant to performance-critical cases.\n\nRefining that a bit: we could provide the size field only when\nMEMORY_CONTEXT_CHECKING and/or CLOBBER_FREED_MEMORY are defined.\nThat would leave us with GenerationRealloc and GenerationGetChunkSpace\nnot being supportable operations, but I wonder how much we need either.\n\nBTW, shouldn't GenerationCheck be ifdef'd out if MEMORY_CONTEXT_CHECKING\nisn't set? aset.c does things that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 18:02:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> GenerationRealloc: uses \"size\" to figure out if the new size is\n> smaller than the old size. Maybe we could just always move to a new\n> chunk regardless of if the new size is smaller or larger than the old\n> size.\n\nI had the same idea ... but we need to know the old size to know how much\nto copy.\n\n> GenerationGetChunkSpace: Uses \"size\" to figure out the amount of\n> memory is used by the chunk. I'm not sure how we'd work around the\n> fact that USEMEM() uses that extensively in tuplesort.c to figure out\n> how much memory we're using.\n\nUgh, that seems like a killer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 18:04:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 24 May 2022 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, shouldn't GenerationCheck be ifdef'd out if MEMORY_CONTEXT_CHECKING\n> isn't set? aset.c does things that way.\n\nIsn't it done in generation.c:954?\n\nDavid\n\n\n",
"msg_date": "Tue, 24 May 2022 10:09:18 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 24 May 2022 at 10:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, shouldn't GenerationCheck be ifdef'd out if MEMORY_CONTEXT_CHECKING\n>> isn't set? aset.c does things that way.\n\n> Isn't it done in generation.c:954?\n\nAh, sorry, didn't look that far up ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 18:30:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "I had another, possibly-crazy idea. I think that the API requirement\nthat the word before a chunk's start point to a MemoryContext is\noverly strong. What we need is that it point to something in which\na MemoryContextMethods pointer can be found (at a predefined offset).\nThus, if generation.c is willing to add the overhead of a\nMemoryContextMethods pointer in GenerationBlock, it could dispense\nwith the per-chunk context field and just have the GenerationBlock\nlink there. I guess most likely we'd also need a back-link to\nthe GenerationContext from GenerationBlock. Still, two more\npointers per GenerationBlock is an easy tradeoff to make to save\none pointer per GenerationChunk.\n\nI've not trawled the code to make sure that *only* the methods\npointer is touched by context-type-independent code, but it\nseems like we could get to that even if we're not there today.\n\nWhether this idea is something we could implement post-beta,\nI'm not sure. I'm inclined to think that we can't change the layout\nof MemoryContextData post-beta, as people may already be building\nextensions for production use. We could bloat GenerationBlock even\nmore so that the methods pointer is at the right offset for today's\nlayout of MemoryContextData, though obviously going forward it'd be\nbetter if there wasn't extra overhead needed to make this happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 19:05:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On 5/20/22 1:56 AM, David Rowley wrote:\r\n> Hackers,\r\n> \r\n> Over the past few days I've been gathering some benchmark results\r\n> together to show the sort performance improvements in PG15 [1].\r\n\r\n> So it basically looks like I discovered a very bad case that causes a\r\n> significant slowdown. Yet other cases that are not an exact power of\r\n> 2 stand to gain significantly from this change.\r\n> \r\n> I'm unsure exactly what I should do about this right now.\r\n\r\nWhile this is being actively investigated, I added this into open items.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 23 May 2022 20:49:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 24 May 2022 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Handy wavy idea: It's probably too complex for now, and it also might\n> > be too much overhead, but having GenerationPointerGetChunk() do a\n> > binary search on a sorted-by-memory-address array of block pointers\n> > might be a fast enough way to find the block that the pointer belongs\n> > to. There should be far fewer blocks now since generation.c now grows\n> > the block sizes. N in O(log2 N) the search complexity should never be\n> > excessively high.\n>\n> > However, it would mean a binary search for every pfree, which feels\n> > pretty horrible. My feeling is that it seems unlikely that saving 8\n> > bytes by not storing the GenerationBlock would be a net win here.\n>\n> I think probably that could be made to work, since a Generation\n> context should not contain all that many live blocks at any one time.\n\nI've done a rough cut implementation of this and attached it here.\nI've not done that much testing yet, but it does seem to fix the\nperformance regression that I mentioned in the blog post that I linked\nin the initial post on this thread.\n\nThere are a couple of things to note about the patch:\n\n1. I quickly realised that there's no good place to store the\nsorted-by-memory-address array of GenerationBlocks. In the patch, I've\nhad to malloc() this array and also had to use a special case so that\nI didn't try to do another malloc() inside GenerationContextCreate().\nIt's possible that the malloc() / repalloc of this array fails. In\nwhich case, I think I've coded things in such a way that there will be\nno memory leaks of the newly added block.\n2. I did see GenerationFree() pop up in perf top a little more than it\nused to. I considered that we might want to have\nGenerationGetBlockFromChunk() cache the last found block for the set\nand then check that one first. We expect generation contexts to\npfree() in an order that would likely make us hit this case most of\nthe time. I added a few lines to the attached v2 patch to add a\nlast_pfree_block field to the context struct. However, this seems to\nhinder performance more than it helps. It can easily be removed from\nthe v2 patch.\n\nIn the results you can see the PG14 + PG15 results the same as I\nreported on the blog post I linked earlier. It seems that for\nPG15_patched virtually all work_mem sizes produce results that are\nfaster than PG14. The exception here is the 16GB test where\nPG15_patched is 0.8% slower, which seems within the noise threshold.\n\nDavid",
"msg_date": "Wed, 25 May 2022 00:20:59 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 24 May 2022 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think probably that could be made to work, since a Generation\n>> context should not contain all that many live blocks at any one time.\n\n> I've done a rough cut implementation of this and attached it here.\n> I've not done that much testing yet, but it does seem to fix the\n> performance regression that I mentioned in the blog post that I linked\n> in the initial post on this thread.\n\nHere's a draft patch for the other way of doing it. I'd first tried\nto make the side-effects completely local to generation.c, but that\nfails in view of code like\n\n\tMemoryContextAlloc(GetMemoryChunkContext(x), ...)\n\nThus we pretty much have to have some explicit awareness of this scheme\nin GetMemoryChunkContext(). There's more than one way it could be\ndone, but I thought a clean way is to invent a separate NodeTag type\nto identify the indirection case.\n\nSo this imposes a distributed overhead of one additional test-and-branch\nper pfree or repalloc. I'm inclined to think that that's negligible,\nbut I've not done performance testing to try to prove it.\n\nFor back-patching into v14, we could put the new NodeTag type at the\nend of that enum list. The change in the inline GetMemoryChunkContext\nis probably an acceptable hazard.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 24 May 2022 12:01:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-24 12:01:58 -0400, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Tue, 24 May 2022 at 09:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think probably that could be made to work, since a Generation\n> >> context should not contain all that many live blocks at any one time.\n> \n> > I've done a rough cut implementation of this and attached it here.\n> > I've not done that much testing yet, but it does seem to fix the\n> > performance regression that I mentioned in the blog post that I linked\n> > in the initial post on this thread.\n> \n> Here's a draft patch for the other way of doing it. I'd first tried\n> to make the side-effects completely local to generation.c, but that\n> fails in view of code like\n> \n> \tMemoryContextAlloc(GetMemoryChunkContext(x), ...)\n> \n> Thus we pretty much have to have some explicit awareness of this scheme\n> in GetMemoryChunkContext(). There's more than one way it could be\n> done, but I thought a clean way is to invent a separate NodeTag type\n> to identify the indirection case.\n\nThat's interesting - I actually needed something vaguely similar recently. For\ndirect IO support we need to allocate memory with pagesize alignment\n(otherwise DMA doesn't work). Several places allocating such buffers also\npfree them.\n\nThe easiest way I could see to deal with that was to invent a different memory\ncontext node type that handles allocation / freeing by over-allocating\nsufficiently to ensure alignment, backed by an underlying memory context.\n\n\n\nA variation on your patch would be to only store the offset to the block\nheader - that should always fit into 32bit (huge allocations being their own\nblock, which is why this wouldn't work for storing an offset to the\ncontext). With a bit of care that'd allow aset.c to half it's overhead, by\nusing 4 bytes of space for all non-huge allocations. Of course, it'd increase\nthe cost of pfree() of small allocations, because AllocSetFree() currently\ndoesn't need to access the block for those. But I'd guess that'd be outweighed\nby the reduced memory usage.\n\nSorry for the somewhat off-topic musing - I was trying to see if the\nMemoryContextLink approach could be generalized or has disadvantages aside\nfrom the branch in GetMemoryChunkContext().\n\n\n\n> For back-patching into v14, we could put the new NodeTag type at the\n> end of that enum list. The change in the inline GetMemoryChunkContext\n> is probably an acceptable hazard.\n\nWhy would we backpatch this to 14? I don't think we have any regressions\nthere?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 17:39:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-05-24 12:01:58 -0400, Tom Lane wrote:\n>> For back-patching into v14, we could put the new NodeTag type at the\n>> end of that enum list. The change in the inline GetMemoryChunkContext\n>> is probably an acceptable hazard.\n\n> Why would we backpatch this to 14? I don't think we have any regressions\n> there?\n\nOh, sorry, I meant v15. I'm operating on the assumption that we have\nat least a weak ABI freeze in v15 already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 21:23:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Wed, 25 May 2022 at 04:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Here's a draft patch for the other way of doing it. I'd first tried\n> to make the side-effects completely local to generation.c, but that\n> fails in view of code like\n>\n> MemoryContextAlloc(GetMemoryChunkContext(x), ...)\n>\n> Thus we pretty much have to have some explicit awareness of this scheme\n> in GetMemoryChunkContext(). There's more than one way it could be\n> done, but I thought a clean way is to invent a separate NodeTag type\n> to identify the indirection case.\n\nThanks for coding that up. This seems like a much better idea than mine.\n\nI ran the same benchmark as I did in the blog on your patch and got\nthe attached sort_bench.png. You can see the patch fixes the 64MB\nwork_mem performance regression, as we'd expect.\n\nTo get an idea of the overhead of this I came up with the attached\nallocate_performance_function.patch which basically just adds a\nfunction named pg_allocate_generation_memory() which you can pass a\nchunk_size for the number of bytes to allocate at once, and also a\nkeep_memory to tell the function how much memory to keep around before\nstarting to pfree previous allocations. The final parameter defines\nthe total amount of memory to allocate.\n\nThe attached script calls the function with varying numbers of chunk\nsizes from 8 up to 2048, multiplying by 2 each step. It keeps 1MB of\nmemory and does a total of 1GB of allocations.\n\nI ran the script against today's master and master + the\ninvent-MemoryContextLink-1.patch and got the attached tps_results.txt.\n\nThe worst-case is the 8-byte allocation size where performance drops\naround 7%. For the larger chunk sizes, the drop is much less, mostly\njust around <1% to ~6%. For the 2048 byte size chunks, the performance\nseems to improve (?). Obviously, the test is pretty demanding as far\nas palloc and pfree go. I imagine we don't come close to anything like\nthat in the actual code. This test was just aimed to give us an idea\nof the overhead. It might not be enough information to judge if we\nshould be concerned about more realistic palloc/pfree workloads.\n\nI didn't test the performance of an aset.c context. I imagine it's\nlikely to be less overhead due to aset.c being generally slower from\nhaving to jump through a few more hoops during palloc/pfree.\n\nDavid",
"msg_date": "Wed, 25 May 2022 15:09:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Wed, 25 May 2022 at 15:09, David Rowley <dgrowleyml@gmail.com> wrote:\n> I didn't test the performance of an aset.c context. I imagine it's\n> likely to be less overhead due to aset.c being generally slower from\n> having to jump through a few more hoops during palloc/pfree.\n\nI've attached the results from doing the same test with a standard\nallocset context.\n\nWith the exception of the 8 byte chunk size test, there just seems to\nbe a 3-4% slowdown on my machine.\n\nDavid",
"msg_date": "Thu, 26 May 2022 00:35:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "В Вт, 24/05/2022 в 17:39 -0700, Andres Freund пишет:\n> \n> A variation on your patch would be to only store the offset to the block\n> header - that should always fit into 32bit (huge allocations being their own\n> block, which is why this wouldn't work for storing an offset to the\n> context). With a bit of care that'd allow aset.c to half it's overhead, by\n> using 4 bytes of space for all non-huge allocations. Of course, it'd increase\n> the cost of pfree() of small allocations, because AllocSetFree() currently\n> doesn't need to access the block for those. But I'd guess that'd be outweighed\n> by the reduced memory usage.\n\nI'm +1 for this.\n\nAnd with this change every memory context kind can have same header:\n\n typedef struct MemoryChunk {\n#ifdef MEMORY_CONTEXT_CHECKING\n\tSize requested_size;\n#endif\n uint32 encoded_size; /* encoded allocation size */\n uint32 offset_to_block; /* backward offset to block header */\n }\n\nAllocated size always could be encoded into uint32 since it is rounded\nfor large allocations (I believe, large allocations certainly rounded\nto at least 4096 bytes):\n\n encoded_size = size < (1u<<31) ? size : (1u<<31)|(size>>12);\n /* and reverse */\n size = (encoded_size >> 31) ? ((Size)(encoded_size<<1)<<12) :\n\t (Size)encoded_size;\n\nThere is a glitch with Aset since it currently reuses `aset` pointer\nfor freelist link. With such change this link had to be encoded in\nchunk-body itself instead of header. I was confused with this, since\nthere are valgrind hooks, and I was not sure how to change it (I'm\nnot good at valgrind hooks). But after thinking more about I believe\nit is doable.\n\n\nregards\n\n-------\n\nYura Sokolov\n\n\n\n",
"msg_date": "Fri, 27 May 2022 16:12:37 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation\n context change"
},
{
"msg_contents": "Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> В Вт, 24/05/2022 в 17:39 -0700, Andres Freund пишет:\n>> A variation on your patch would be to only store the offset to the block\n>> header - that should always fit into 32bit (huge allocations being their own\n>> block, which is why this wouldn't work for storing an offset to the\n>> context).\n\n> I'm +1 for this.\n\nGiven David's results in the preceding message, I don't think I am.\nA scheme like this would add more arithmetic and at least one more\nindirection to GetMemoryChunkContext(), and we already know that\nadding even a test-and-branch there has measurable cost. (I wonder\nif using unlikely() on the test would help? But it's not unlikely\nin a generation-context-heavy use case.) There would also be a good\ndeal of complication and ensuing slowdown created by the need for\noversize chunks to be a completely different kind of animal with a\ndifferent header.\n\nI'm also not very happy about this:\n\n> And with this change every memory context kind can have same header:\n\nIMO that's a bug not a feature. It puts significant constraints on how\ncontext types can be designed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 May 2022 10:51:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Sat, 28 May 2022 at 02:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Given David's results in the preceding message, I don't think I am.\n> A scheme like this would add more arithmetic and at least one more\n> indirection to GetMemoryChunkContext(), and we already know that\n> adding even a test-and-branch there has measurable cost.\n\nI also ran the same tests on my patch to binary search for the\ngeneration block and the performance is worse than with the\nMemoryContextLink patch, albeit, limited to generation contexts only.\nAlso disheartening. See attached bsearch_gen_blocks.txt\n\nI decided to run some more extensive benchmarking with the 10GB table\nwith varying numbers of BIGINT columns from 6 up to 14. 6 columns\nmeans 64-byte pallocs in the generation context and 14 means 128\nbytes. Once again, I tested work_mem values starting at 4MB and\ndoubled that each test until I got to 16GB. The results are attached\nboth in chart format and I've also included the complete results in a\nspreadsheet along with the script I ran to get the results.\n\nThe results compare PG14 @ 0adff38d against master @ b3fb16e8b. In\nthe chart, anything below 100% is a performance improvement over PG14\nand anything above 100% means PG15 is slower. You can see there's\nonly the 64-byte / 64MB work_mem test that gets significantly slower\nand that there are only a small amount of other tests that are\nslightly slower. Most are faster and on average PG15 takes 90% of the\ntime that PG14 took.\n\nLikely it would have been more relevant to have tested this against\nmaster with 40af10b57 reverted. I'm running those now.\n\nDavid",
"msg_date": "Tue, 31 May 2022 09:37:16 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Mon, May 30, 2022 at 2:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> The results compare PG14 @ 0adff38d against master @ b3fb16e8b. In\n> the chart, anything below 100% is a performance improvement over PG14\n> and anything above 100% means PG15 is slower. You can see there's\n> only the 64-byte / 64MB work_mem test that gets significantly slower\n> and that there are only a small amount of other tests that are\n> slightly slower. Most are faster and on average PG15 takes 90% of the\n> time that PG14 took.\n\nShouldn't this be using the geometric mean rather than the arithmetic\nmean? That's pretty standard practice when summarizing a set of\nbenchmark results that are expressed as ratios to some baseline.\n\nIf I tweak your spreadsheet to use the geometric mean, the patch looks\nslightly better -- 89%.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 30 May 2022 14:48:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 31 May 2022 at 09:48, Peter Geoghegan <pg@bowt.ie> wrote:\n> Shouldn't this be using the geometric mean rather than the arithmetic\n> mean? That's pretty standard practice when summarizing a set of\n> benchmark results that are expressed as ratios to some baseline.\n\nMaybe just comparing the SUM of the seconds of each version is the\nbest way. That comes out to 86.6%\n\nDavid\n\n\n",
"msg_date": "Tue, 31 May 2022 10:04:36 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 31 May 2022 at 09:37, David Rowley <dgrowleyml@gmail.com> wrote:\n> Likely it would have been more relevant to have tested this against\n> master with 40af10b57 reverted. I'm running those now.\n\nMy machine just finished running the tests on master with the\ngeneration context in tuplesort.c commented out so that it always uses\nthe allocset context.\n\nIn the attached graph, anything below 100% means that using the\ngeneration context performs better than without.\n\nIn the test, each query runs 5 times. If I sum the average run time of\neach query, master takes 41 min 43.8 seconds and without the use of\ngeneration context it takes 43 mins 31.3 secs. So it runs in 95.88%\nwith the generation context.\n\nLooking at the graph, you can easily see the slower performance for\nthe 64-byte tuples with 64MB of work_mem. That's the only regression\nof note. Many other cases are much faster.\n\nI'm wondering if we should just do nothing about this. Any thoughts?\n\nDavid",
"msg_date": "Tue, 31 May 2022 17:39:51 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Fri, May 27, 2022 at 10:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Given David's results in the preceding message, I don't think I am.\n> A scheme like this would add more arithmetic and at least one more\n> indirection to GetMemoryChunkContext(), and we already know that\n> adding even a test-and-branch there has measurable cost.\n\nI think you're being too negative here. It's a 7% regression on 8-byte\nallocations in a tight loop. In real life, people allocate memory\nbecause they want to do something with it, and the allocation overhead\ntherefore figures to be substantially less. They also nearly always\nallocate memory more than 8 bytes at a time, since there's very little\nof interest that can fit into an 8 byte allocation, and if you're\ndealing with one of the things that can, you're likely to allocate an\narray rather than each item individually. I think it's quite plausible\nthat saving space is going to be more important for performance than\nthe tiny cost of a test-and-branch here.\n\nI don't want to take the position that we ought necessarily to commit\nyour patch, because I don't really have a clear sense of what the wins\nand losses actually are. But, I am worried that our whole memory\nallocation infrastructure is stuck at a local maximum, and I think\nyour patch pushes in a generally healthy direction: let's optimize for\nwasting less space, instead of for the absolute minimum number of CPU\ncycles consumed.\n\naset.c's approach is almost unbeatable for small numbers of\nallocations in tiny contexts, and PostgreSQL does a lot of that. But\nwhen you do have cases where a lot of data needs to be stored in\nmemory, it starts to look pretty lame. 
To really get out from under\nthat problem, we'd need to find a way to remove the requirement of a\nper-allocation header altogether, and I don't think this patch really\nhelps us see how we could ever get all the way to that point.\nNonetheless, I like the fact that it puts more flexibility into the\nmechanism seemingly at very little real cost, and that it seems to\nmean less memory spent on header information rather than user data.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 10:51:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I don't want to take the position that we ought necessarily to commit\n> your patch, because I don't really have a clear sense of what the wins\n> and losses actually are.\n\nYeah, we don't have any hard data here. It could be that it's a win to\nswitch to a rule that chunks must present an offset (instead of a pointer)\nback to a block header, which'd then be required to contain a link to the\nactual context, meaning that every context has to do something like what\nI proposed for generation.c. But nobody's coded that up let alone done\nany testing on it, and I feel like it's too late in the v15 cycle for\nchanges as invasive as that. Quite aside from that change in itself,\nyou wouldn't get any actual space savings in aset.c contexts unless\nyou did something with the chunk-size field too, and that seems a lot\nmessier.\n\nRight now my vote would be to leave things as they stand for v15 ---\nthe performance loss that started this thread occurs in a narrow\nenough set of circumstances that I don't feel too much angst about\nit being the price of winning in most other circumstances. We can\ninvestigate these options at leisure for v16 or later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 May 2022 11:09:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, May 31, 2022 at 11:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, we don't have any hard data here. It could be that it's a win to\n> switch to a rule that chunks must present an offset (instead of a pointer)\n> back to a block header, which'd then be required to contain a link to the\n> actual context, meaning that every context has to do something like what\n> I proposed for generation.c. But nobody's coded that up let alone done\n> any testing on it, and I feel like it's too late in the v15 cycle for\n> changes as invasive as that. Quite aside from that change in itself,\n> you wouldn't get any actual space savings in aset.c contexts unless\n> you did something with the chunk-size field too, and that seems a lot\n> messier.\n>\n> Right now my vote would be to leave things as they stand for v15 ---\n> the performance loss that started this thread occurs in a narrow\n> enough set of circumstances that I don't feel too much angst about\n> it being the price of winning in most other circumstances. We can\n> investigate these options at leisure for v16 or later.\n\nI don't think it's all that narrow a case, but I think it's narrow\nenough that I agree that we can live with it if we don't feel like the\nrisk-reward trade-offs are looking favorable. I don't think tuples\nwhose size, after rounding to a multiple of 8, is exactly a power of 2\nare going to be terribly uncommon. It is not very likely that many\npeople will have tuples between 25 and 31 bytes, but tables with lots\nof 57-64 byte tuples are probably pretty common, and tables with lots\nof 121-128 and 249-256 byte tuples are somewhat plausible as well. I\nthink we will win more than we lose, but I think we will lose often\nenough that I wouldn't be very surprised if we get >0 additional\ncomplaints about this over the next 5 years. 
While it may seem risky\nto do something about that now, it will certainly seem a lot riskier\nonce the release is out, and there is some risk in doing nothing,\nbecause we don't know how many people are going to run into the bad\ncases or how severely they might be impacted.\n\nI think the biggest risk in your patch as presented is that slowing\ndown pfree() might suck in some set of workloads upon which we can't\neasily identify now. I don't really know how to rule out that\npossibility. In general terms, I think we're not doing ourselves any\nfavors by relying partly on memory context cleanup and partly on\npfree. The result is that pfree() has to work for every memory context\ntype and can't be unreasonably slow, which rules out a lot of really\nuseful ideas, both in terms of how to make allocation faster, and also\nin terms of how to make it denser. If you're ever of a mind to put\nsome efforts into really driving things forward in this area, I think\nfiguring out some way to break that hard dependence would be effort\nreally well-spent.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 12:03:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "I ran a shorter version of David's script with just 6-9 attributes to\ntry to reproduce the problem area (spreadsheet with graph attached).\nMy test is also different in that I compare HEAD with just reverting\n40af10b57. This shows a 60% increase in HEAD in runtime for 64MB\nworkmem and 64 byte tuples. It also shows a 20% regression for 32MB\nworkmem and 64 byte tuples.\n\nI don't have anything to add to the discussion about whether something\nneeds to be done here for PG15. If anything, changing work_mem is an\neasy to understand (although sometimes not practical) workaround.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 2 Jun 2022 15:20:40 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Fr, 27/05/2022 в 10:51 -0400, Tom Lane writes:\n> Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> > В Вт, 24/05/2022 в 17:39 -0700, Andres Freund пишет:\n> > > A variation on your patch would be to only store the offset to the block\n> > > header - that should always fit into 32bit (huge allocations being their own\n> > > block, which is why this wouldn't work for storing an offset to the\n> > > context).\n> > I'm +1 for this.\n> \n> Given David's results in the preceding message, I don't think I am.\n\nBut David did the opposite: he removed pointer to block and remain\npointer to context. Then code have to do bsearch to find actual block.\n\n> A scheme like this would add more arithmetic and at least one more\n> indirection to GetMemoryChunkContext(), and we already know that\n> adding even a test-and-branch there has measurable cost. (I wonder\n> if using unlikely() on the test would help? But it's not unlikely\n> in a generation-context-heavy use case.)\n\nWell, it should be tested.\n\n> There would also be a good\n> deal of complication and ensuing slowdown created by the need for\n> oversize chunks to be a completely different kind of animal with a\n> different header.\n\nWhy? encoded_size could handle both small sizes and larges sizes\ngiven actual (not requested) allocation size is rounded to page size.\nThere's no need to different chunk header.\n\n> I'm also not very happy about this:\n>\n> > And with this change every memory context kind can have same header:\n> \n> IMO that's a bug not a feature. It puts significant constraints on how\n> context types can be designed.\n\nNothing prevents to add additional data before common header.\n\n\nregards\n\nYura\n\n\n\n",
"msg_date": "Thu, 02 Jun 2022 14:34:11 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation\n context change"
},
{
"msg_contents": "On Thu, 2 Jun 2022 at 20:20, John Naylor <john.naylor@enterprisedb.com> wrote:\n> If anything, changing work_mem is an\n> easy to understand (although sometimes not practical) workaround.\n\nI had a quick look at that for the problem case and we're very close\nin terms of work_mem size to better performance. A work_mem of just\n64.3MB brings the performance back to better than PG14.\n\npostgres=# set work_mem = '64.2MB';\npostgres=# \\i bench.sql\n c1 | c2 | c3 | c4 | c5 | c6\n----+----+----+----+----+----\n(0 rows)\n\nTime: 28949.942 ms (00:28.950)\npostgres=# set work_mem = '64.3MB';\npostgres=# \\i bench.sql\n c1 | c2 | c3 | c4 | c5 | c6\n----+----+----+----+----+----\n(0 rows)\n\nTime: 19759.552 ms (00:19.760)\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Jun 2022 09:37:02 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 5:37 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I had a quick look at that for the problem case and we're very close\n> in terms of work_mem size to better performance. A work_mem of just\n> 64.3MB brings the performance back to better than PG14.\n\nThis is one of the things that I find super-frustrating about work_mem\nand sorting. I mean, we all know that work_mem is hard to tune because\nit's per-node rather than per-query or per-backend, but on top of\nthat, sort performance doesn't change smoothly as you vary it. I've\nseen really different work_mem settings produce only slightly\ndifferent performance, and here you have the opposite: only slightly\ndifferent work_mem settings produce significantly different\nperformance. It's not even the case that more memory is necessarily\nbetter than less.\n\nI have no idea what to do about this, and even if I did, it's too late\nto redesign v15. But I somehow feel like the whole model is just\nwrong. Sorting shouldn't use more memory unless it's actually going to\nspeed things up -- and not just any speed-up, but one that's\nsignificant compared to the additional expenditure of memory. But the\nfact that the sorting code just treats the memory budget as an input,\nand is not adaptive in any way, seems pretty bad. It means we use up\nall that memory even if a much smaller amount of memory would deliver\nthe same performance, or even better performance.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Jun 2022 19:12:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Wed, 1 Jun 2022 at 03:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Right now my vote would be to leave things as they stand for v15 ---\n> the performance loss that started this thread occurs in a narrow\n> enough set of circumstances that I don't feel too much angst about\n> it being the price of winning in most other circumstances. We can\n> investigate these options at leisure for v16 or later.\n\nI've been hesitating a little to put my views here as I wanted to see\nwhat the other views were first. My thoughts are generally in\nagreement with you, i.e., to do nothing for PG15 about this. My\nreasoning is:\n\n1. Most cases are faster as a result of using generation contexts for sorting.\n2. The slowdown cases seem rare and the speedup cases are much more common.\n3. There were performance cliffs in PG14 if a column was added to a\ntable to make the tuple size cross a power-of-2 boundary which I don't\nrecall anyone complaining about. PG15 makes the performance drop more\ngradual as tuple sizes increase. Performance is more predictable as a\nresult.\n4. As I just demonstrated in [1], if anyone is caught by this and has\na problem, the work_mem size increase required seems very small to get\nperformance back to better than in PG14. I found that setting work_mem\nto 64.3MB makes PG15 faster than PG14 for the problem case. If anyone\nhappened to hit this case and find the performance regression\nunacceptable then they have a way out... increase work_mem a little.\n\nAlso, in terms of what we might do to improve this situation for PG16:\nI was also discussing this off-list with Andres which resulted in him\nprototyping a patch [2] to store the memory context type in 3-bits in\nthe 64-bits prior to the pointer which is used to lookup a memory\ncontext method table so that we can call the correct function. 
I've\nbeen hacking around with this and I've added some optimisations and\ngot the memory allocation test [3] (modified to use aset.c rather than\ngeneration.c) showing very promising results when comparing this patch\nto master.\n\nThere are still a few slowdowns, but 16-byte allocations up to\n256-bytes allocations are looking pretty good. Up to ~10% faster\ncompared to master.\n\n(lower is better)\n\nsize compare\n8 114.86%\n16 89.04%\n32 90.95%\n64 94.17%\n128 93.36%\n256 96.57%\n512 101.25%\n1024 109.88%\n2048 100.87%\n\nThere's quite a bit more work to do for deciding how to handle large\nallocations and there's also likely more than can be done to further\nshrink the existing chunk headers for each of the 3 existing memory\nallocators.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvq8MoEMxHN+f=RcCfwCfr30An1w3uOKruUnnPLVRR3c_A@mail.gmail.com\n[2] https://github.com/anarazel/postgres/tree/mctx-chunk\n[3] https://www.postgresql.org/message-id/attachment/134021/allocate_performance_function.patch\n\n\n",
"msg_date": "Fri, 3 Jun 2022 15:13:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Fri, Jun 3, 2022 at 10:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 1 Jun 2022 at 03:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Right now my vote would be to leave things as they stand for v15 ---\n> > the performance loss that started this thread occurs in a narrow\n> > enough set of circumstances that I don't feel too much angst about\n> > it being the price of winning in most other circumstances. We can\n> > investigate these options at leisure for v16 or later.\n>\n> I've been hesitating a little to put my views here as I wanted to see\n> what the other views were first. My thoughts are generally in\n> agreement with you, i.e., to do nothing for PG15 about this. My\n> reasoning is:\n>\n> 1. Most cases are faster as a result of using generation contexts for sorting.\n> 2. The slowdown cases seem rare and the speedup cases are much more common.\n> 3. There were performance cliffs in PG14 if a column was added to a\n> table to make the tuple size cross a power-of-2 boundary which I don't\n> recall anyone complaining about. PG15 makes the performance drop more\n> gradual as tuple sizes increase. Performance is more predictable as a\n> result.\n> 4. As I just demonstrated in [1], if anyone is caught by this and has\n> a problem, the work_mem size increase required seems very small to get\n> performance back to better than in PG14. I found that setting work_mem\n> to 64.3MB makes PG15 faster than PG14 for the problem case. If anyone\n> happened to hit this case and find the performance regression\n> unacceptable then they have a way out... increase work_mem a little.\n\nSince #4 is such a small lift, I'd be comfortable with closing the open item.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jun 2022 11:02:51 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Fri, 3 Jun 2022 at 16:03, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Fri, Jun 3, 2022 at 10:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > 4. As I just demonstrated in [1], if anyone is caught by this and has\n> > a problem, the work_mem size increase required seems very small to get\n> > performance back to better than in PG14. I found that setting work_mem\n> > to 64.3MB makes PG15 faster than PG14 for the problem case. If anyone\n> > happened to hit this case and find the performance regression\n> > unacceptable then they have a way out... increase work_mem a little.\n>\n> Since #4 is such a small lift, I'd be comfortable with closing the open item.\n\nI also think that we should close off this open item for PG15. The\nmajority of cases are faster now and it is possible for anyone who\ndoes happen to hit a bad case to raise work_mem by some fairly small\nfraction.\n\nI posted a WIP patch on [1] that we aim to get into PG16 to improve\nthe situation here further. I'm hoping the fact that we do have some\nmeans to fix this for PG16 might mean we can leave this as a known\nissue for PG15.\n\nSo far only Robert has raised concerns with this regression for PG15\n(see [2]). Tom voted for leaving things as they are for PG15 in [3].\nJohn agrees, as quoted above. Does anyone else have any opinion?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpjauCRXcgcaL6+e3eqecEHoeRm9D-kcbuvBitgPnW=vw@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA+TgmoZoYxFBN+AEJGfjJCCbeW8MMkHfFVcC61kP2OncmeYDWA@mail.gmail.com\n[3] https://www.postgresql.org/message-id/180278.1654009751@sss.pgh.pa.us\n\n\n",
"msg_date": "Tue, 12 Jul 2022 17:15:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "On Tue, 12 Jul 2022 at 17:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> So far only Robert has raised concerns with this regression for PG15\n> (see [2]). Tom voted for leaving things as they are for PG15 in [3].\n> John agrees, as quoted above. Does anyone else have any opinion?\n\nLet me handle this slightly differently. I've moved the open item for\nthis into the \"won't fix\" section. If nobody shouts at me for that\nthen I'll let that end the debate. Otherwise, we can consider the\nargument when it arrives.\n\nDavid\n\n\n",
"msg_date": "Wed, 13 Jul 2022 16:13:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Hi David,\r\n\r\nOn 7/13/22 12:13 AM, David Rowley wrote:\r\n> On Tue, 12 Jul 2022 at 17:15, David Rowley <dgrowleyml@gmail.com> wrote:\r\n>> So far only Robert has raised concerns with this regression for PG15\r\n>> (see [2]). Tom voted for leaving things as they are for PG15 in [3].\r\n>> John agrees, as quoted above. Does anyone else have any opinion?\r\n> \r\n> Let me handle this slightly differently. I've moved the open item for\r\n> this into the \"won't fix\" section. If nobody shouts at me for that\r\n> then I'll let that end the debate. Otherwise, we can consider the\r\n> argument when it arrives.\r\n\r\nThe RMT discussed this issue at its meeting today (and a few weeks back \r\n-- apologies for not writing sooner). While we agree with your analysis \r\nthat 1/ this issue does appear to be a corner case and 2/ the benefits \r\noutweigh the risks, we still don't know how prevalent it may be in the \r\nwild and the general impact to user experience.\r\n\r\nThe RMT suggests that you make one more pass at attempting to solve it. \r\nIf there does not appear to be a clear path forward, we should at least \r\ndocument how a user can detect and resolve the issue.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 13 Jul 2022 09:23:00 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-13 09:23:00 -0400, Jonathan S. Katz wrote:\n> On 7/13/22 12:13 AM, David Rowley wrote:\n> > On Tue, 12 Jul 2022 at 17:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> > > So far only Robert has raised concerns with this regression for PG15\n> > > (see [2]). Tom voted for leaving things as they are for PG15 in [3].\n> > > John agrees, as quoted above. Does anyone else have any opinion?\n> > \n> > Let me handle this slightly differently. I've moved the open item for\n> > this into the \"won't fix\" section. If nobody shouts at me for that\n> > then I'll let that end the debate. Otherwise, we can consider the\n> > argument when it arrives.\n> \n> The RMT discussed this issue at its meeting today (and a few weeks back --\n> apologies for not writing sooner). While we agree with your analysis that 1/\n> this issue does appear to be a corner case and 2/ the benefits outweigh the\n> risks, we still don't know how prevalent it may be in the wild and the\n> general impact to user experience.\n> \n> The RMT suggests that you make one more pass at attempting to solve it.\n\nI think without a more concrete analysis from the RMT that's not really\nactionable. What risks are we willing to accept to resolve this? This is\nmostly a question of tradeoffs.\n\nSeveral \"senior\" postgres hackers looked at this and didn't find any solution\nthat makes sense to apply to 15. I don't think having David butt his head\nfurther against this code is likely to achieve much besides a headache. Note\nthat David already has a patch to address this for 16.\n\n\n> If there does not appear to be a clear path forward, we should at least\n> document how a user can detect and resolve the issue.\n\nTo me that doesn't really make sense. 
We have lots of places were performance\nchanges once you cross some threshold, and lots of those are related to\nwork_mem.\n\nWe don't, e.g., provide tooling to detect when performance in aggregation\nregresses due to crossing work_mem and could be fixed by a tiny increase in\nwork_mem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 13 Jul 2022 08:32:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "\n\nOn 7/13/22 17:32, Andres Freund wrote:\n> Hi,\n> \n> On 2022-07-13 09:23:00 -0400, Jonathan S. Katz wrote:\n>> On 7/13/22 12:13 AM, David Rowley wrote:\n>>> On Tue, 12 Jul 2022 at 17:15, David Rowley <dgrowleyml@gmail.com> wrote:\n>>>> So far only Robert has raised concerns with this regression for PG15\n>>>> (see [2]). Tom voted for leaving things as they are for PG15 in [3].\n>>>> John agrees, as quoted above. Does anyone else have any opinion?\n>>>\n>>> Let me handle this slightly differently. I've moved the open item for\n>>> this into the \"won't fix\" section. If nobody shouts at me for that\n>>> then I'll let that end the debate. Otherwise, we can consider the\n>>> argument when it arrives.\n>>\n>> The RMT discussed this issue at its meeting today (and a few weeks back --\n>> apologies for not writing sooner). While we agree with your analysis that 1/\n>> this issue does appear to be a corner case and 2/ the benefits outweigh the\n>> risks, we still don't know how prevalent it may be in the wild and the\n>> general impact to user experience.\n>>\n>> The RMT suggests that you make one more pass at attempting to solve it.\n> \n> I think without a more concrete analysis from the RMT that's not really\n> actionable. What risks are we willing to accept to resolve this? This is\n> mostly a question of tradeoffs.\n> \n> Several \"senior\" postgres hackers looked at this and didn't find any solution\n> that makes sense to apply to 15. I don't think having David butt his head\n> further against this code is likely to achieve much besides a headache. Note\n> that David already has a patch to address this for 16.\n> \n\nI agree with this. It's not clear to me how we'd asses how prevalent it\nreally is (reports on a mailing list surely are not a very great way to\nmeasure this). My personal opinion is that it's a rare regression. 
Other\noptimization patches have similar rare regressions, except that David\nspent so much time investigating this one it seems more serious.\n\nI think it's fine to leave this as is. If we feel we have to fix this\nfor v15, it's probably best to just apply the v16. I doubt we'll find\nanything simpler.\n\n> \n>> If there does not appear to be a clear path forward, we should at least\n>> document how a user can detect and resolve the issue.\n> \n> To me that doesn't really make sense. We have lots of places were performance\n> changes once you cross some threshold, and lots of those are related to\n> work_mem.\n> \n> We don't, e.g., provide tooling to detect when performance in aggregation\n> regresses due to crossing work_mem and could be fixed by a tiny increase in\n> work_mem.\n> \n\nYeah. I find it entirely reasonable to tell people to increase work_mem\na bit to fix this. The problem is knowing you're affected :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 15 Jul 2022 22:36:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "On 7/15/22 4:36 PM, Tomas Vondra wrote:\r\n> \r\n> \r\n> On 7/13/22 17:32, Andres Freund wrote:\r\n>> Hi,\r\n>>\r\n>> On 2022-07-13 09:23:00 -0400, Jonathan S. Katz wrote:\r\n>>> On 7/13/22 12:13 AM, David Rowley wrote:\r\n>>>> On Tue, 12 Jul 2022 at 17:15, David Rowley <dgrowleyml@gmail.com> wrote:\r\n>>>>> So far only Robert has raised concerns with this regression for PG15\r\n>>>>> (see [2]). Tom voted for leaving things as they are for PG15 in [3].\r\n>>>>> John agrees, as quoted above. Does anyone else have any opinion?\r\n>>>>\r\n>>>> Let me handle this slightly differently. I've moved the open item for\r\n>>>> this into the \"won't fix\" section. If nobody shouts at me for that\r\n>>>> then I'll let that end the debate. Otherwise, we can consider the\r\n>>>> argument when it arrives.\r\n>>>\r\n>>> The RMT discussed this issue at its meeting today (and a few weeks back --\r\n>>> apologies for not writing sooner). While we agree with your analysis that 1/\r\n>>> this issue does appear to be a corner case and 2/ the benefits outweigh the\r\n>>> risks, we still don't know how prevalent it may be in the wild and the\r\n>>> general impact to user experience.\r\n>>>\r\n>>> The RMT suggests that you make one more pass at attempting to solve it.\r\n>>\r\n>> I think without a more concrete analysis from the RMT that's not really\r\n>> actionable. What risks are we willing to accept to resolve this? This is\r\n>> mostly a question of tradeoffs.\r\n>>\r\n>> Several \"senior\" postgres hackers looked at this and didn't find any solution\r\n>> that makes sense to apply to 15. I don't think having David butt his head\r\n>> further against this code is likely to achieve much besides a headache. Note\r\n>> that David already has a patch to address this for 16.\r\n>>\r\n> \r\n> I agree with this. It's not clear to me how we'd asses how prevalent it\r\n> really is (reports on a mailing list surely are not a very great way to\r\n> measure this). 
My personal opinion is that it's a rare regression. Other\r\n> optimization patches have similar rare regressions, except that David\r\n> spent so much time investigating this one it seems more serious.\r\n> \r\n> I think it's fine to leave this as is. If we feel we have to fix this\r\n> for v15, it's probably best to just apply the v16. I doubt we'll find\r\n> anything simpler.\r\n\r\nI think the above is reasonable.\r\n\r\n>>> If there does not appear to be a clear path forward, we should at least\r\n>>> document how a user can detect and resolve the issue.\r\n>>\r\n>> To me that doesn't really make sense. We have lots of places were performance\r\n>> changes once you cross some threshold, and lots of those are related to\r\n>> work_mem.\r\n\r\nYes, but in this case this is nonobvious to a user. A sort that may be \r\nperforming just fine on a pre-PG15 version is suddenly degraded, and the \r\nuser has no guidance as to why or how to remediate.\r\n\r\n>> We don't, e.g., provide tooling to detect when performance in aggregation\r\n>> regresses due to crossing work_mem and could be fixed by a tiny increase in\r\n>> work_mem.\r\n>>\r\n> \r\n> Yeah. I find it entirely reasonable to tell people to increase work_mem\r\n> a bit to fix this. The problem is knowing you're affected :-(\r\n\r\nThis is the concern the RMT discussed. Yes it is reasonable to say \r\n\"increase work_mem to XYZ\", but how does a user know how to detect and \r\napply that? This is the part that is worrisome, especially because we \r\ndon't have any sense of what the overall impact will be.\r\n\r\nMaybe it's not much, but we should document that there is the potential \r\nfor a regression.\r\n\r\nJonathan",
"msg_date": "Fri, 15 Jul 2022 16:42:14 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> ... My personal opinion is that it's a rare regression. Other\n> optimization patches have similar rare regressions, except that David\n> spent so much time investigating this one it seems more serious.\n\nYeah, this. I fear we're making a mountain out of a molehill. We have\ncommitted many optimizations that win on average but possibly lose\nin edge cases, and not worried too much about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 15 Jul 2022 16:54:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 16:42:14 -0400, Jonathan S. Katz wrote:\n> On 7/15/22 4:36 PM, Tomas Vondra wrote:\n> > > > If there does not appear to be a clear path forward, we should at least\n> > > > document how a user can detect and resolve the issue.\n> > > \n> > > To me that doesn't really make sense. We have lots of places were performance\n> > > changes once you cross some threshold, and lots of those are related to\n> > > work_mem.\n> \n> Yes, but in this case this is nonobvious to a user. A sort that may be\n> performing just fine on a pre-PG15 version is suddenly degraded, and the\n> user has no guidance as to why or how to remediate.\n\nWe make minor changes affecting thresholds at which point things spill to disk\netc *all the time*. Setting the standard that all of those need to be\ndocumented seems not wise to me. Both because of the effort for us, and\nbecause it'll end up being a morass of documentation that nobody can make use\nof, potentially preventing people from upgrading because some minor perf\nchanges in some edge cases sound scary.\n\nI'm fairly certain there were numerous other changes with such effects. We\njust don't know because there wasn't as careful benchmarking.\n\n\n> > > We don't, e.g., provide tooling to detect when performance in aggregation\n> > > regresses due to crossing work_mem and could be fixed by a tiny increase in\n> > > work_mem.\n> > > \n> > \n> > Yeah. I find it entirely reasonable to tell people to increase work_mem\n> > a bit to fix this. The problem is knowing you're affected :-(\n> \n> This is the concern the RMT discussed. Yes it is reasonable to say \"increase\n> work_mem to XYZ\", but how does a user know how to detect and apply that?\n\nThey won't. Nor will they with any reasonable documentation we can give. Nor\nwill they know in any of the other cases some threshold is crossed.\n\n\n> This is the part that is worrisome, especially because we don't have any\n> sense of what the overall impact will be.\n> \n> Maybe it's not much, but we should document that there is the potential for\n> a regression.\n\n-1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 13:56:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "On 7/15/22 4:54 PM, Tom Lane wrote:\r\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\r\n>> ... My personal opinion is that it's a rare regression. Other\r\n>> optimization patches have similar rare regressions, except that David\r\n>> spent so much time investigating this one it seems more serious.\r\n> \r\n> Yeah, this. I fear we're making a mountain out of a molehill. We have\r\n> committed many optimizations that win on average but possibly lose\r\n> in edge cases, and not worried too much about it.\r\n\r\nI disagree with the notion of this being a \"mountain out of a molehill.\" \r\nThe RMT looked at the situation, asked if we should make one more pass. \r\nThere were logical arguments as to why not to (e.g. v16 efforts). I think \r\nthat is reasonable, and we can move on from any additional code changes \r\nfor v15.\r\n\r\nWhat I find interesting is the resistance to adding any documentation \r\naround this feature to guide users in case they hit the regression. I \r\nunderstand it can be difficult to provide guidance on issues related to \r\nadjusting work_mem, but even just a hint in the release notes to say \"if \r\nyou see a performance regression you may need to adjust work_mem\" would \r\nbe helpful. This would help people who are planning upgrades to at least \r\nknow what to watch out for.\r\n\r\nIf that still seems unreasonable, I'll agree to disagree so we can move \r\non with other parts of the release.\r\n\r\nJonathan",
"msg_date": "Fri, 15 Jul 2022 18:40:11 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "On 7/15/22 6:40 PM, Jonathan S. Katz wrote:\r\n> On 7/15/22 4:54 PM, Tom Lane wrote:\r\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\r\n>>> ... My personal opinion is that it's a rare regression. Other\r\n>>> optimization patches have similar rare regressions, except that David\r\n>>> spent so much time investigating this one it seems more serious.\r\n>>\r\n>> Yeah, this. I fear we're making a mountain out of a molehill. We have\r\n>> committed many optimizations that win on average but possibly lose\r\n>> in edge cases, and not worried too much about it.\r\n> \r\n> I disagree with the notion of this being a \"mountain out of a molehill.\" \r\n> The RMT looked at the situation, asked if we should make one more pass. \r\n> There were logical argument as to why not to (e.g. v16 efforts). I think \r\n> that is reasonable, and we can move on from any additional code changes \r\n> for v15.\r\n> \r\n> What I find interesting is the resistance to adding any documentation \r\n> around this feature to guide users in case they hit the regression. I \r\n> understand it can be difficult to provide guidance on issues related to \r\n> adjusting work_mem, but even just a hint in the release notes to say \"if \r\n> you see a performance regression you may need to adjust work_mem\" would \r\n> be helpful. This would help people who are planning upgrades to at least \r\n> know what to watch out for.\r\n> \r\n> If that still seems unreasonable, I'll agree to disagree so we can move \r\n> on with other parts of the release.\r\n\r\nFor completeness, I marked the open item as closed.\r\n\r\nJonathan",
"msg_date": "Fri, 15 Jul 2022 18:41:49 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-15 18:40:11 -0400, Jonathan S. Katz wrote:\n> What I find interesting is the resistance to adding any documentation around\n> this feature to guide users in case they hit the regression. I understand it\n> can be difficult to provide guidance on issues related to adjusting\n> work_mem, but even just a hint in the release notes to say \"if you see a\n> performance regression you may need to adjust work_mem\" would be helpful.\n> This would help people who are planning upgrades to at least know what to\n> watch out for.\n\nIf we want to add that as boilerplate for every major release - ok, although\nit's not really actionable. What I'm against is adding it specifically for\nthis release - we have stuff like this *all the time*, we just often don't\nbother to carefully find the specific point at which an optimization might\nhurt.\n\nHonestly, if this thread teaches anything, it's to hide relatively minor\ncorner case regressions. Which isn't good.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 15 Jul 2022 15:52:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "On Sat, 16 Jul 2022 at 10:40, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> What I find interesting is the resistance to adding any documentation\n> around this feature to guide users in case they hit the regression. I\n> understand it can be difficult to provide guidance on issues related to\n> adjusting work_mem, but even just a hint in the release notes to say \"if\n> you see a performance regression you may need to adjust work_mem\" would\n> be helpful. This would help people who are planning upgrades to at least\n> know what to watch out for.\n\nLooking back at the final graph in the blog [1], I see that work_mem\nis a pretty surprising GUC. I'm sure many people would expect that\nsetting work_mem to some size that allows the sort to be entirely done\nin RAM would be the fastest way. And that does appear to be the case,\nas 16GB was the only setting which allowed that. However, I bet it\nwould surprise many people to see that 8GB wasn't 2nd fastest. Even\n128MB was faster than 8GB!\n\nMost likely that's because the machine I tested that on has lots of\nRAM spare for kernel buffers which would allow all that disk activity\nfor batching not actually to cause physical reads or writes. I bet\nthat would have looked different if I'd run a few concurrent sorts\nwith 128MB of work_mem. They'd all be competing for kernel buffers in\nthat case.\n\nSo I agree with Andres here. It seems weird to me to try to document\nthis new thing that I caused when we don't really make any attempt to\ndocument all the other weird stuff with work_mem.\n\nI think the problem can actually be worse with work_mem sizes in\nregards to hash tables. The probing phase of a hash join causes\nmemory access patterns that the CPU cannot determine which can result\nin poor performance when the hash table size is larger than the CPU's\nL3 cache size. If you have fast enough disks, it seems realistic that\ngiven the right workload (most likely much more than 1 probe per\nbucket) that you could also get better performance by having lower\nvalues of work_mem.\n\nIf we're going to document the generic context anomaly then we should\ngo all out and document all of the above, plus all the other weird\nstuff I've not thought of. However, I think, short of having an\nactual patch to review, it might be better to leave it until someone\ncan come up with some text that's comprehensive enough to be worthy of\nreading. I don't think I could do the topic justice. I'm also not\nsure any wisdom we write about this would be of much use in the real\nworld given that it's likely concurrency has a larger effect, and we\ndon't have much ability to control that.\n\nFWIW, I think it would be better for us just to solve these problems\nin code instead. Having memory gating to control the work_mem from a\npool and teaching sort about CPU caches might be better than\nexplaining to users that tuning work_mem is hard.\n\nDavid\n\n[1] https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/speeding-up-sort-performance-in-postgres-15/ba-p/3396953\n\n\n",
"msg_date": "Sat, 16 Jul 2022 11:12:07 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "Thank you for the very detailed analysis. Comments inline.\r\n\r\nOn 7/15/22 7:12 PM, David Rowley wrote:\r\n> On Sat, 16 Jul 2022 at 10:40, Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> What I find interesting is the resistance to adding any documentation\r\n>> around this feature to guide users in case they hit the regression. I\r\n>> understand it can be difficult to provide guidance on issues related to\r\n>> adjusting work_mem, but even just a hint in the release notes to say \"if\r\n>> you see a performance regression you may need to adjust work_mem\" would\r\n>> be helpful. This would help people who are planning upgrades to at least\r\n>> know what to watch out for.\r\n> \r\n> Looking back at the final graph in the blog [1], l see that work_mem\r\n> is a pretty surprising GUC. I'm sure many people would expect that\r\n> setting work_mem to some size that allows the sort to be entirely done\r\n> in RAM would be the fastest way. And that does appear to be the case,\r\n> as 16GB was the only setting which allowed that. However, I bet it\r\n> would surprise many people to see that 8GB wasn't 2nd fastest. Even\r\n> 128MB was faster than 8GB!\r\n\r\nYeah that is interesting. And while some of those settings are less \r\nlikely in the wild, I do think we are going to see larger and larger \r\n\"work_mem\" settings as instance sizes continue to grow. That said, your \r\nPG15 benchmarks are overall faster than the PG14, and that is what I am \r\nlooking at in the context of this release.\r\n\r\n> Most likely that's because the machine I tested that on has lots of\r\n> RAM spare for kernel buffers which would allow all that disk activity\r\n> for batching not actually to cause physical reads or writes. I bet\r\n> that would have looked different if I'd run a few concurrent sorts\r\n> with 128MB of work_mem. They'd all be competing for kernel buffers in\r\n> that case.\r\n> \r\n> So I agree with Andres here. 
It seems weird to me to try to document\r\n> this new thing that I caused when we don't really make any attempt to\r\n> document all the other weird stuff with work_mem.\r\n\r\nI can't argue with this.\r\n\r\nMy note on the documentation was primarily around to seeing countless \r\nuser issues post-upgrade where queries that \"once performed well no \r\nlonger do so.\" I want to ensure that our users at least have a starting \r\npoint to work on resolving the issues, even if they end up being very \r\nnuanced.\r\n\r\nPerhaps a next step (and a separate step from this) is to assess the \r\nguidance we give on the upgrade page[1] about some common things they \r\nshould check for. Then we can have the \"boilerplate\" there.\r\n\r\n> I think the problem can actually be worse with work_mem sizes in\r\n> regards to hash tables. The probing phase of a hash join causes\r\n> memory access patterns that the CPU cannot determine which can result\r\n> in poor performance when the hash table size is larger than the CPU's\r\n> L3 cache size. If you have fast enough disks, it seems realistic that\r\n> given the right workload (most likely much more than 1 probe per\r\n> bucket) that you could also get better performance by having lower\r\n> values of work_mem.\r\n> \r\n> If we're going to document the generic context anomaly then we should\r\n> go all out and document all of the above, plus all the other weird\r\n> stuff I've not thought of. However, I think, short of having an\r\n> actual patch to review, it might be better to leave it until someone\r\n> can come up with some text that's comprehensive enough to be worthy of\r\n> reading. I don't think I could do the topic justice. I'm also not\r\n> sure any wisdom we write about this would be of much use in the real\r\n> world given that its likely concurrency has a larger effect, and we\r\n> don't have much ability to control that.\r\n\r\nUnderstood. 
I don't think that is fair to ask for this release, but \r\ndon't sell yourself short on explaining the work_mem nuances.\r\n\r\n> FWIW, I think it would be better for us just to solve these problems\r\n> in code instead. Having memory gating to control the work_mem from a\r\n> pool and teaching sort about CPU caches might be better than\r\n> explaining to users that tuning work_mem is hard.\r\n\r\n+1. Again thank you for taking the time for the thorough explanation and \r\nof course, working on the patch and fixes.\r\n\r\nJonathan\r\n\r\n[1] https://www.postgresql.org/docs/current/upgrading.html",
"msg_date": "Fri, 15 Jul 2022 19:27:47 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "On 7/15/22 6:52 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-07-15 18:40:11 -0400, Jonathan S. Katz wrote:\r\n>> What I find interesting is the resistance to adding any documentation around\r\n>> this feature to guide users in case they hit the regression. I understand it\r\n>> can be difficult to provide guidance on issues related to adjusting\r\n>> work_mem, but even just a hint in the release notes to say \"if you see a\r\n>> performance regression you may need to adjust work_mem\" would be helpful.\r\n>> This would help people who are planning upgrades to at least know what to\r\n>> watch out for.\r\n> \r\n> If we want to add that as boilerplate for every major release - ok, although\r\n> it's not really actionable. What I'm against is adding it specifically for\r\n> this release - we have stuff like this *all the time*, we just often don't\r\n> bother to carefully find the specific point at which an optimization might\r\n> hurt.\r\n> \r\n> Honestly, if this thread teaches anything, it's to hide relatively minor\r\n> corner case regressions. Which isn't good.\r\n\r\nI think it's OK to discuss ways we can better help our users. It's also \r\nOK to be wrong; I have certainly been so plenty of times.\r\n\r\nJonathan",
"msg_date": "Fri, 15 Jul 2022 20:01:54 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
},
{
"msg_contents": "On Fri, Jul 15, 2022 at 07:27:47PM -0400, Jonathan S. Katz wrote:\n> > So I agree with Andres here. It seems weird to me to try to document\n> > this new thing that I caused when we don't really make any attempt to\n> > document all the other weird stuff with work_mem.\n> \n> I can't argue with this.\n> \n> My note on the documentation was primarily around to seeing countless user\n> issues post-upgrade where queries that \"once performed well no longer do\n> so.\" I want to ensure that our users at least have a starting point to work\n> on resolving the issues, even if they end up being very nuanced.\n> \n> Perhaps a next step (and a separate step from this) is to assess the\n> guidance we give on the upgrade page[1] about some common things they should\n> check for. Then we can have the \"boilerplate\" there.\n\nI assume that if you set a GUC, you should also review and maintain it into the\nfuture. Non-default settings should be re-evaluated (at least) during major\nupgrades. That's typically more important than additionally fiddling with any\nnewly-added GUCs.\n\nFor example, in v13, I specifically re-evaluated shared_buffers everywhere due\nto de-duplication.\n\nIn v13, hash agg was updated to spill to disk, and hash_mem_multiplier was\nadded to mitigate any performance issues (and documented as such in the release\nnotes).\n\nI've needed to disable JIT since it was enabled by default in v12, since it\n1) doesn't help; and 2) leaks memory enough to cause some customers' DBs to be\nkilled every 1-2 days. (The change in default was documented, so there's no\nmore documentation needed).\n\nI'm sure some people should/have variously revisited the parallel and\n\"asynchronous\" GUCs, if they changed the defaults. (When parallel query was\nenabled by default in v10, the change wasn't initially documented, which was a\nproblem).\n\nI suppose checkpoint_* and *wal* should be/have been revisited at various\npoints. Probably effective_cache_size too. Those are just the most common\nones to change.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 15 Jul 2022 19:25:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context change"
}
] |
[
{
"msg_contents": "Hi\n\nRegarding the visibility of query information, the description for\n\"track_activities\" [1] says:\n\n> Note that even when enabled, this information is not visible to all users,\n> only to superusers and the user owning the session being reported on, so it\n> should not represent a security risk.\n\n[1] https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES\n\nIt seems reasonable to mention here that the information is also visible to\nmembers of \"pg_read_all_stats\", similar to what is done in the\npg_stat_statements\ndocs [2].\n\n[2] https://www.postgresql.org/docs/current/pgstatstatements.html#PGSTATSTATEMENTS-COLUMNS\n\nSuggested wording:\n\n> Note that even when enabled, this information is only visible to superusers,\n> members of the <literal>pg_read_all_stats</literal> role and the user owning\n> the session being reported on, so it should not represent a security risk.\n\nPatch (for HEAD) with suggested wording attached; the change should\nIMO be applied\nall the way back to v10 (though as-is the patch only applies to HEAD,\ncan provide\nothers if needed).\n\n\nRegards\n\nIan Barwick\n\n\n-- \nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Fri, 20 May 2022 15:17:29 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "docs: mention \"pg_read_all_stats\" in \"track_activities\" description"
},
{
"msg_contents": "On Fri, May 20, 2022 at 03:17:29PM +0900, Ian Lawrence Barwick wrote:\n> It seems reasonable to mention here that the information is also visible to\n> members of \"pg_read_all_stats\", similar to what is done in the\n> pg_stat_statements\n> docs [2].\n> \n> [2] https://www.postgresql.org/docs/current/pgstatstatements.html#PGSTATSTATEMENTS-COLUMNS\n> \n> Suggested wording:\n> \n>> Note that even when enabled, this information is only visible to superusers,\n>> members of the <literal>pg_read_all_stats</literal> role and the user owning\n>> the session being reported on, so it should not represent a security risk.\n> \n> Patch (for HEAD) with suggested wording attached; the change should\n> IMO be applied\n> all the way back to v10 (though as-is the patch only applies to HEAD,\n> can provide\n> others if needed).\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 20 May 2022 16:08:37 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\" description"
},
{
"msg_contents": "On Fri, May 20, 2022 at 04:08:37PM -0700, Nathan Bossart wrote:\n> LGTM\n\nIndeed, it is a good idea to add this information. Will apply and\nbackpatch accordingly.\n--\nMichael",
"msg_date": "Sat, 21 May 2022 12:28:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\" description"
},
{
"msg_contents": "On Sat, May 21, 2022 at 12:28:58PM +0900, Michael Paquier wrote:\n> Indeed, it is a good idea to add this information. Will apply and\n> backpatch accordingly.\n\nSorry, I should've noticed this yesterday. This should probably follow\n6198420's example and say \"roles with privileges of the pg_read_all_stats\nrole\" instead of \"members of the pg_read_all_stats role.\" Also, I think we\nshould mention that this information is visible to roles with privileges of\nthe session user being reported on, too. Patch attached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 21 May 2022 11:57:43 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\" description"
},
{
"msg_contents": "On Sat, May 21, 2022 at 11:57:43AM -0700, Nathan Bossart wrote:\n> Sorry, I should've noticed this yesterday. This should probably follow\n> 6198420's example and say \"roles with privileges of the pg_read_all_stats\n> role\" instead of \"members of the pg_read_all_stats role.\"\n\nYes, I saw that, but that sounds pretty much the same to me, while we\nmention membership of a role in other places. I don't mind tweaking\nthat more, FWIW, while we are on it.\n\n> Also, I think we\n> should mention that this information is visible to roles with privileges of\n> the session user being reported on, too. Patch attached.\n\n default. Note that even when enabled, this information is only\n- visible to superusers, members of the\n- <literal>pg_read_all_stats</literal> role and the user owning the\n- session being reported on, so it should not represent a security risk.\n- Only superusers and users with the appropriate <literal>SET</literal>\n- privilege can change this setting.\n+ visible to superusers, roles with privileges of the\n+ <literal>pg_read_all_stats</literal> role, and roles with privileges of\n+ the user owning the session being reported on, so it should not\n+ represent a security risk. Only superusers and users with the\n+ appropriate <literal>SET</literal> privilege can change this setting.\n\nRegarding the fact that a user can see its own information, the last\npart of the description would be right, still a bit confusing perhaps\nwhen it comes to one's own information?\n--\nMichael",
"msg_date": "Sun, 22 May 2022 09:59:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\" description"
},
{
"msg_contents": "On Sun, May 22, 2022 at 09:59:47AM +0900, Michael Paquier wrote:\n> +     visible to superusers, roles with privileges of the\n> +     <literal>pg_read_all_stats</literal> role, and roles with privileges of\n> +     the user owning the session being reported on, so it should not\n> +     represent a security risk.  Only superusers and users with the\n> +     appropriate <literal>SET</literal> privilege can change this setting.\n> \n> Regarding the fact that a user can see its own information, the last\n> part of the description would be right, still a bit confusing perhaps\n> when it comes to one's own information?\n\nYeah, this crossed my mind.  I thought that \"superusers, roles with\nprivileges of the pg_read_all_stats role, roles with privileges of the user\nowning the session being reported on, and the user owning the session being\nreported on\" might be too long-winded and redundant.  But I see your point\nthat it might be a bit confusing.  Perhaps it could be trimmed down to\nsomething like this:\n\n\t... superusers, roles with privileges of the pg_read_all_stats role,\n\tand roles with privileges of the user owning the session being reported\n\ton (including the session owner).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 22 May 2022 13:26:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\" description"
},
{
"msg_contents": "On Sun, May 22, 2022 at 01:26:08PM -0700, Nathan Bossart wrote:\n> Yeah, this crossed my mind. I thought that \"superusers, roles with\n> privileges of the pg_read_all_stats_role, roles with privileges of the user\n> owning the session being reported on, and the user owning the session being\n> reported on\" might be too long-winded and redundant. But I see your point\n> that it might be a bit confusing. Perhaps it could be trimmed down to\n> something like this:\n> \n> \t... superusers, roles with privileges of the pg_read_all_stats role,\n> \tand roles with privileges of the user owning the session being reported\n> \ton (including the session owner).\n\nYeah, that sounds better to me. monitoring.sgml has a different way\nof wording what looks like the same thing for pg_stat_xact_*_tables:\n\"Ordinary users can only see all the information about their own\nsessions (sessions belonging to a role that they are a member of)\".\n\nSo you could say instead something like: this information is only\nvisible to superusers, roles with privileges of the pg_read_all_stats\nrole, and the user owning the sessionS being reported on (including\nsessions belonging to a role that they are a member of).\n--\nMichael",
"msg_date": "Mon, 23 May 2022 08:53:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "On Mon, May 23, 2022 at 08:53:24AM +0900, Michael Paquier wrote:\n> On Sun, May 22, 2022 at 01:26:08PM -0700, Nathan Bossart wrote:\n>> \t... superusers, roles with privileges of the pg_read_all_stats role,\n>> \tand roles with privileges of the user owning the session being reported\n>> \ton (including the session owner).\n> \n> Yeah, that sounds better to me. monitoring.sgml has a different way\n> of wording what looks like the same thing for pg_stat_xact_*_tables:\n> \"Ordinary users can only see all the information about their own\n> sessions (sessions belonging to a role that they are a member of)\".\n> \n> So you could say instead something like: this information is only\n> visible to superusers, roles with privileges of the pg_read_all_stats\n> role, and the user owning the sessionS being reported on (including\n> sessions belonging to a role that they are a member of).\n\nI think we need to be careful about saying \"member of\" when we really mean\n\"roles with privileges of.\" Unless I am mistaken, role membership alone is\nnot sufficient for viewing this information. You also need to inherit the\nrole's privileges via INHERIT.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 May 2022 09:41:42 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "On Mon, May 23, 2022 at 09:41:42AM -0700, Nathan Bossart wrote:\n> I think we need to be careful about saying \"member of\" when we really mean\n> \"roles with privileges of.\" Unless I am mistaken, role membership alone is\n> not sufficient for viewing this information. You also need to inherit the\n> role's privileges via INHERIT.\n\nGood point. So this would give, to be exact:\n\"This information is only visible to superusers, roles with privileges\nof the pg_read_all_stats role, and and the user owning the sessionS\nbeing reported on (including sessions belonging to a role they have\nthe privileges of).\"\n\nOpinions?\n--\nMichael",
"msg_date": "Wed, 25 May 2022 13:04:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "On Wed, May 25, 2022 at 01:04:04PM +0900, Michael Paquier wrote:\n> Good point. So this would give, to be exact:\n> \"This information is only visible to superusers, roles with privileges\n> of the pg_read_all_stats role, and and the user owning the sessionS\n> being reported on (including sessions belonging to a role they have\n> the privileges of).\"\n\nNathan, Ian, if you think that this could be worded better, please\nfeel free to let me know. Thanks.\n--\nMichael",
"msg_date": "Sat, 28 May 2022 17:50:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "On Sat, May 28, 2022 at 05:50:35PM +0900, Michael Paquier wrote:\n> On Wed, May 25, 2022 at 01:04:04PM +0900, Michael Paquier wrote:\n>> Good point. So this would give, to be exact:\n>> \"This information is only visible to superusers, roles with privileges\n>> of the pg_read_all_stats role, and and the user owning the sessionS\n>> being reported on (including sessions belonging to a role they have\n>> the privileges of).\"\n> \n> Nathan, Ian, if you think that this could be worded better, please\n> feel free to let me know. Thanks.\n\nSorry, I missed this one earlier. I'm okay with something along those\nlines. I'm still trying to think of ways to make the last part a little\nclearer, but I don't have any ideas beyond what we've discussed upthread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 28 May 2022 06:10:31 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "On Sat, May 28, 2022 at 06:10:31AM -0700, Nathan Bossart wrote:\n> Sorry, I missed this one earlier. I'm okay with something along those\n> lines. I'm still trying to think of ways to make the last part a little\n> clearer, but I don't have any ideas beyond what we've discussed upthread.\n\nOkay. I have used the wording of upthread then. Thanks!\n--\nMichael",
"msg_date": "Mon, 30 May 2022 11:34:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "Hi\n\nApologies for the delayed response, was caught up in a minor life diversion\nover the past couple of weeks.\n\n2022年5月21日(土) 12:29 Michael Paquier <michael@paquier.xyz>:\n>\n> On Fri, May 20, 2022 at 04:08:37PM -0700, Nathan Bossart wrote:\n> > LGTM\n>\n> Indeed, it is a good idea to add this information. Will apply and\n> backpatch accordingly.\n\nThanks!\n\n2022年5月30日(月) 11:34 Michael Paquier <michael@paquier.xyz>:\n>\n> On Sat, May 28, 2022 at 06:10:31AM -0700, Nathan Bossart wrote:\n> > Sorry, I missed this one earlier. I'm okay with something along those\n> > lines. I'm still trying to think of ways to make the last part a little\n> > clearer, but I don't have any ideas beyond what we've discussed upthread.\n>\n> Okay. I have used the wording of upthread then. Thanks!\n\nA little late to the party, but as an alternative suggestion for the last\npart:\n\n \"... and users who either own the session being reported on, or who have\n privileges of the role to which the session belongs,\"\n\nso the whole sentence would read:\n\n Note that even when enabled, this information is only visible to superusers,\n roles with privileges of the pg_read_all_stats role, and users who either own\n the session being reported on or who have privileges of the role to which the\n session belongs, so it should not represent a security risk.\n\nor with some parentheses to break it up a little:\n\n Note that even when enabled, this information is only visible to superusers,\n roles with privileges of the pg_read_all_stats role, and users who either own\n the session being reported on (or who have privileges of the role to which the\n session belongs), so it should not represent a security risk.\n\nI'm not sure if it really improves on the latest committed change, so just a\nsuggestion.\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 7 Jun 2022 22:08:21 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
},
{
"msg_contents": "On Tue, Jun 07, 2022 at 10:08:21PM +0900, Ian Lawrence Barwick wrote:\n> A little late to the party, but as an alternative suggestion for the last\n> part:\n> \n> \"... and users who either own the session being reported on, or who have\n> privileges of the role to which the session belongs,\"\n> \n> so the whole sentence would read:\n> \n> Note that even when enabled, this information is only visible to superusers,\n> roles with privileges of the pg_read_all_stats role, and users who either own\n> the session being reported on or who have privileges of the role to which the\n> session belongs, so it should not represent a security risk.\n\nThis seems clearer to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Jun 2022 10:39:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: docs: mention \"pg_read_all_stats\" in \"track_activities\"\n description"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe attached patch modifies the pg_stat_statements view documentation updated in PostgreSQL 15 Beta 1.\nThe data type of the following columns in the pg_stat_statements view is bigint in the current document, \nbut it is actually double precision.\n\tjit_generation_time\n\tjit_inlining_time\n\tjit_optimization_time\n\tjit_emission_time\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Fri, 20 May 2022 12:46:03 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "PG15 beta1 fix pg_stat_statements view document"
},
{
"msg_contents": "On Fri, May 20, 2022 at 12:46:03PM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> The attached patch modifies the pg_stat_statements view documentation updated in PostgreSQL 15 Beta 1.\n> The data type of the following columns in the pg_stat_statements view is bigint in the current document, \n> but it is actually double precision.\n> \tjit_generation_time\n> \tjit_inlining_time\n> \tjit_optimization_time\n> \tjit_emission_time\n\nI think there is a typo in the change to the jit_optimization_time section,\nbut otherwise it looks good to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 20 May 2022 16:04:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stat_statements view document"
},
{
"msg_contents": "On Fri, May 20, 2022 at 04:04:29PM -0700, Nathan Bossart wrote:\n> I think there is a typo in the change to the jit_optimization_time section,\n> but otherwise it looks good to me.\n\nYes, as of \"double precisiodouble precision\". All these four fields\nare indeed doubles in the code, for what looks like a copy-pasto from\n57d6aea. Will fix.\n--\nMichael",
"msg_date": "Sat, 21 May 2022 12:32:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stat_statements view document"
},
{
"msg_contents": "Hi,\n\nThank you for your comment.\nI attached the fixed patch.\n\n-----Original Message-----\nFrom: Michael Paquier <michael@paquier.xyz> \nSent: Saturday, May 21, 2022 12:33 PM\nTo: Nathan Bossart <nathandbossart@gmail.com>\nCc: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>; PostgreSQL-development <pgsql-hackers@postgresql.org>; magnus@hagander.net\nSubject: Re: PG15 beta1 fix pg_stat_statements view document\n\nOn Fri, May 20, 2022 at 04:04:29PM -0700, Nathan Bossart wrote:\n> I think there is a typo in the change to the jit_optimization_time \n> section, but otherwise it looks good to me.\n\nYes, as of \"double precisiodouble precision\". All these four fields are indeed doubles in the code, for what looks like a copy-pasto from 57d6aea. Will fix.\n--\nMichael",
"msg_date": "Sat, 21 May 2022 03:36:10 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "RE: PG15 beta1 fix pg_stat_statements view document"
},
{
"msg_contents": "On Sat, May 21, 2022 at 03:36:10AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Thank you for your comment.\n> I attached the fixed patch.\n\nThanks, applied.\n--\nMichael",
"msg_date": "Sat, 21 May 2022 18:57:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stat_statements view document"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at the code in hash_record()\nof src/backend/utils/adt/rowtypes.c\n\nIt seems if nulls[i] is true, we don't need to look up the hash function.\n\nPlease take a look at the patch.\n\nThanks",
"msg_date": "Fri, 20 May 2022 11:41:57 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "check for null value before looking up the hash function"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was looking at the code in hash_record()\n> of src/backend/utils/adt/rowtypes.c\n> It seems if nulls[i] is true, we don't need to look up the hash function.\n\nI don't think this is worth changing. It complicates the logic,\nrendering it unlike quite a few other functions written in the same\nstyle. In cases where the performance actually matters, the hash\nfunction is cached across multiple calls anyway. You might save\nsomething if you have many calls in a query and not one of them\nreceives a non-null input, but how likely is that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 May 2022 16:33:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check for null value before looking up the hash function"
}
] |
[
{
"msg_contents": "Hello, hackers.\n\nToday I was doing some aggregates over pg_stat_statements in order to\nfind types of queries consuming most of the CPU. Aggregates were made\non two pg_state_statement snapshots within 30 sec delay.\n\nThe sum(total_time) had the biggest value for a very frequent query\nwith about 10ms execution. I was thinking it is the biggest CPU\nconsumer.\n\nBut after reducing the frequency of queries a lot I was unable to see\nany significant difference in server CPU usage...\n\nSo, looks like clock_gettime is not so accurate to measure real CPU\nusage for some OLTP workloads. I suppose it is caused by the wall time\nvs CPU time difference (IO, thread switch, etc).\n\nBut what do you think about adding cpu_time (by calling getrusage) to\npg_stat_statements? Seems it could be very useful for CPU profiling.\n\nI am probably able to prepare the patch, but it is always better to\nget some feedback on the idea first :)\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Fri, 20 May 2022 21:50:32 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "CPU time for pg_stat_statement"
},
{
"msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> writes:\n> So, looks like clock_gettime is not so accurate to measure real CPU\n> usage for some OLTP workloads. I suppose it is caused by the wall time\n> vs CPU time difference (IO, thread switch, etc).\n\nThis is a pretty broad claim to make on the basis of one undocumented\ntest case on one unmentioned platform.\n\n> But what do you think about adding cpu_time (by calling getrusage) to\n> pg_stat_statements? Seems it could be very useful for CPU profiling.\n\nOn what grounds do you claim getrusage will be better? One thing we\ncan be pretty certain of is that it will be slower, since it has to\nreturn many more pieces of information. And the API for it only allows\ntime info to be specified to microseconds, versus nanoseconds for\nclock_gettime, so it's also going to be taking a precision hit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 May 2022 16:39:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CPU time for pg_stat_statement"
},
{
"msg_contents": "On Sat, May 21, 2022 at 6:50 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> But what do you think about adding cpu_time (by calling getrusage) to\n> pg_stat_statements? Seems it could be very useful for CPU profiling.\n>\n> I am probably able to prepare the patch, but it is always better to\n> get some feedback on the idea first :)\n\nThis might be interesting:\n\nhttps://github.com/powa-team/pg_stat_kcache\n\n\n",
"msg_date": "Sat, 21 May 2022 08:53:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CPU time for pg_stat_statement"
},
{
"msg_contents": "Hello, Thomas.\n\n> This might be interesting:\n> https://github.com/powa-team/pg_stat_kcache\n\nOh, nice, looks like it could help me to reduce CPU and test my\nassumption (using exec_user_time and exec_system_time).\n\nBWT, do you know why extension is not in standard contrib (looks mature)?\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Sat, 21 May 2022 00:21:49 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU time for pg_stat_statement"
},
{
"msg_contents": "Hello, Tom.\n\n> This is a pretty broad claim to make on the basis of one undocumented\n> test case on one unmentioned platform.\n\nI'll try to use pg_stat_kcache to check the difference between Wall\nand CPU for my case.\n\n> On what grounds do you claim getrusage will be better? One thing we\n> can be pretty certain of is that it will be slower, since it has to\n> return many more pieces of information. And the API for it only allows\n> time info to be specified to microseconds, versus nanoseconds for\n> clock_gettime, so it's also going to be taking a precision hit.\n\nMy idea was to not replace wall-clock (clock_gettime) by cpu-clock (getrusage).\nI think about adding getrusage as an additional column (with flag to\nenable actual measuring).\nLooks like I need to be more precise in words :)\n\nIt is just two different clocks - and sometimes you need physical\ntime, sometimes CPU time (and sometimes, for example, amount of WAL\nwritten).\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Sat, 21 May 2022 00:32:47 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU time for pg_stat_statement"
},
{
"msg_contents": "Hi,\n\nOn Sat, May 21, 2022 at 12:21:49AM +0300, Michail Nikolaev wrote:\n>\n> > This might be interesting:\n> > https://github.com/powa-team/pg_stat_kcache\n>\n> Oh, nice, looks like it could help me to reduce CPU and test my\n> assumption (using exec_user_time and exec_system_time).\n>\n> BWT, do you know why extension is not in standard contrib (looks mature)?\n\nBecause contrib isn't meant to eventually contain all possible extensions.\n\nThere is an official postgres extension network, and also community deb/rpm\nrepositories that are intended to handle postgres extensibility, and this\nextension is available on all of that, same as a lot of other extensions, which\nare at least as mature.\n\n\n",
"msg_date": "Sat, 21 May 2022 17:21:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CPU time for pg_stat_statement"
},
{
"msg_contents": "Hello, Tom.\n\n>> This is a pretty broad claim to make on the basis of one undocumented\n>> test case on one unmentioned platform.\n\n> I'll try to use pg_stat_kcache to check the difference between Wall\n> and CPU for my case.\n\nIn my case I see pretty high correlation of pg_stat_kcache and\npg_stat_statement (clock_gettime vs getrusage).\nLooks like CPU usage is hidden somewhere else (planning probably, not\nmeasured on postgres 11, but I see really high\n*clauselist_selectivity* in perf).\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Wed, 8 Jun 2022 10:15:19 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CPU time for pg_stat_statement"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nPresently, if you want to only build trusted PL/Perl and PL/Tcl, you need\nto make a couple of code changes to compile out the untrusted parts. I\nsuspect many users (e.g., anyone who wants to disallow file system access)\nwould benefit from a better supported way to do this. Thus, I've attached\nsome patches that introduce an optional argument for the --with-perl and\n--with-tcl configuration options. This new argument can be used to build\nonly the trusted or untrusted version of the language. If the argument is\nnot provided, both the trusted and untrusted versions are built, so this\nchange is backward compatible.\n\nThe PL/Tcl patch (0003) is relatively straightforward, as there are already\nseparate handler functions for the trusted and untrusted versions of the\nlanguage. PL/Perl, however, is slightly more complicated. 0001 first\nmodifies PL/Perl to use separate handle/validator functions for the trusted\nand untrusted versions. 0002 then adds support for building only trusted\nor untrusted PL/Perl in a similar fashion to 0003. Since a few contrib\nmodules depend on PL/Perl, 0002 also modifies some modules' Makefiles to\nhandle whether trusted and/or untrusted PL/Perl is built.\n\nI haven't made the required changes (if any) for MSVC, as I do not\ncurrently have a way to test it. For now, I am parking these patches in\nthe July commitfest while I gauge interest in this feature and await any\nfeedback on the proposed approach.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 20 May 2022 15:56:19 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Presently, if you want to only build trusted PL/Perl and PL/Tcl, you need\n> to make a couple of code changes to compile out the untrusted parts. I\n> suspect many users (e.g., anyone who wants to disallow file system access)\n> would benefit from a better supported way to do this. Thus, I've attached\n> some patches that introduce an optional argument for the --with-perl and\n> --with-tcl configuration options. This new argument can be used to build\n> only the trusted or untrusted version of the language. If the argument is\n> not provided, both the trusted and untrusted versions are built, so this\n> change is backward compatible.\n\nI do not believe that this is worth the extra complication. Nobody has\never asked for it before, so I estimate the number of people who would\nuse it as near zero, and those folk are entirely capable of removing\nthe relevant extension files from their installations.\n\nMoreover, if we accept this as a useful configure option, what other\nthings will we be called on to change? It surely makes no sense to\ninstall contrib/adminpack, for instance, if you're afraid of having\nplperlu installed.\n\nLastly, you've offered no reason to think this would provide any real\nsecurity improvement. Someone who's gained the ability to issue CREATE\nEXTENSION on untrusted extensions has already got all the privileges he\nneeds; leaving out a few extension files is at most going to slow him\ndown a bit on the way to full filesystem access. (See, eg, COPY TO\nPROGRAM.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 May 2022 20:20:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Fri, May 20, 2022 at 08:20:11PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Presently, if you want to only build trusted PL/Perl and PL/Tcl, you need\n>> to make a couple of code changes to compile out the untrusted parts. I\n>> suspect many users (e.g., anyone who wants to disallow file system access)\n>> would benefit from a better supported way to do this. Thus, I've attached\n>> some patches that introduce an optional argument for the --with-perl and\n>> --with-tcl configuration options. This new argument can be used to build\n>> only the trusted or untrusted version of the language. If the argument is\n>> not provided, both the trusted and untrusted versions are built, so this\n>> change is backward compatible.\n> \n> I do not believe that this is worth the extra complication. Nobody has\n> ever asked for it before, so I estimate the number of people who would\n> use it as near zero, and those folk are entirely capable of removing\n> the relevant extension files from their installations.\n\nOf course, if there is no interest in this feature, I'll withdraw the patch\nfrom consideration. However, I will note that moving the extension files\naside is not sufficient for blocking all use of untrusted languages, since\nthe symbols for their handler/validator functions will still be present.\n\n> Moreover, if we accept this as a useful configure option, what other\n> things will we be called on to change? It surely makes no sense to\n> install contrib/adminpack, for instance, if you're afraid of having\n> plperlu installed.\n> \n> Lastly, you've offered no reason to think this would provide any real\n> security improvement. Someone who's gained the ability to issue CREATE\n> EXTENSION on untrusted extensions has already got all the privileges he\n> needs; leaving out a few extension files is at most going to slow him\n> down a bit on the way to full filesystem access. (See, eg, COPY TO\n> PROGRAM.)\n\nI'd like to provide the ability to disallow these other things, too. This\nis intended to be a first step in that direction.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 May 2022 09:54:03 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, May 20, 2022 at 08:20:11PM -0400, Tom Lane wrote:\n>> Lastly, you've offered no reason to think this would provide any real\n>> security improvement. Someone who's gained the ability to issue CREATE\n>> EXTENSION on untrusted extensions has already got all the privileges he\n>> needs; leaving out a few extension files is at most going to slow him\n>> down a bit on the way to full filesystem access. (See, eg, COPY TO\n>> PROGRAM.)\n\n> I'd like to provide the ability to disallow these other things, too. This\n> is intended to be a first step in that direction.\n\nThere would probably be some interest in a \"--disable-disk-access\"\nconfigure option that did all of this stuff (and some more things\ntoo), with the aim of locking down *all* known paths to filesystem\naccess. I don't see much value in retail options that do some of that.\nIn fact, what they might mostly accomplish is to give people a false\nsense of security.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 13:17:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, May 23, 2022 at 01:17:08PM -0400, Tom Lane wrote:\n> There would probably be some interest in a \"--disable-disk-access\"\n> configure option that did all of this stuff (and some more things\n> too), with the aim of locking down *all* known paths to filesystem\n> access. I don't see much value in retail options that do some of that.\n> In fact, what they might mostly accomplish is to give people a false\n> sense of security.\n\nThat's a reasonable point. I'll go ahead an explore some options for\nsomething along those lines. A couple of questions immediately come to\nmind. For example, should this configuration option just cause these\nfunctions to ERROR, or should it compile them out?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 May 2022 10:38:05 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> That's a reasonable point. I'll go ahead an explore some options for\n> something along those lines. A couple of questions immediately come to\n> mind. For example, should this configuration option just cause these\n> functions to ERROR, or should it compile them out?\n\nLetting them be present but throw error is likely to be far less\npainful than the other way, because then you don't need a separate\nset of SQL-visible object definitions. You could, in fact, imagine\njacking up an existing database and driving a set of locked-down\nbinaries under it --- or vice versa. If there have to be different\nversions of the extension SQL files for the two cases then everything\ngets way hairier, both for developers and users.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 14:20:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, May 23, 2022 at 02:20:02PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> That's a reasonable point. I'll go ahead an explore some options for\n>> something along those lines. A couple of questions immediately come to\n>> mind. For example, should this configuration option just cause these\n>> functions to ERROR, or should it compile them out?\n> \n> Letting them be present but throw error is likely to be far less\n> painful than the other way, because then you don't need a separate\n> set of SQL-visible object definitions. You could, in fact, imagine\n> jacking up an existing database and driving a set of locked-down\n> binaries under it --- or vice versa. If there have to be different\n> versions of the extension SQL files for the two cases then everything\n> gets way hairier, both for developers and users.\n\nAgreed. I'll do it that way.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 May 2022 11:34:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, May 23, 2022 at 1:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There would probably be some interest in a \"--disable-disk-access\"\n> configure option that did all of this stuff (and some more things\n> too), with the aim of locking down *all* known paths to filesystem\n> access. I don't see much value in retail options that do some of that.\n> In fact, what they might mostly accomplish is to give people a false\n> sense of security.\n\nI definitely think there's a need for a user who can manipulate\nobjects in the database much like a superuser (i.e. access all\nobjects, grant and revoke all privileges, etc.) but who can't break\nout into the OS user account and assume it's privileges. I'm not sure\nwhether it's best to try to get there by creating a mode where the\nsuperuser's privileges are trimmed back, or to get there by still\nhaving a super-user who is just as super as at present but then also\nhave the ability to create other users who are not superusers but have\nmany of the same privileges with respect to in-database objects.\n\nIt seems to me that you've got to think not only about vectors for\nexecuting arbitrary C code and/or shell commands, but also the\nsuperuser's power to mess with the catalogs. If you can UPDATE\npg_proc, you can certainly hack the system, I think. But that isn't\nreally implied by --disable-disk-access, which makes me think that's\nnot really the right way of thinking about it. In my mind, it's\nreasonable as a matter of security policy to decide that you don't\never want plperlu on your system, only plperl. And it's reasonable to\ndecide whether or not you also need some kind of restricted super-user\nfacility. They're just two different issues.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 16:23:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I definitely think there's a need for a user who can manipulate\n> objects in the database much like a superuser (i.e. access all\n> objects, grant and revoke all privileges, etc.) but who can't break\n> out into the OS user account and assume it's privileges. I'm not sure\n> whether it's best to try to get there by creating a mode where the\n> superuser's privileges are trimmed back, or to get there by still\n> having a super-user who is just as super as at present but then also\n> have the ability to create other users who are not superusers but have\n> many of the same privileges with respect to in-database objects.\n\nMaybe I shouldn't be putting words into Nathan's mouth, but I think\nwhat he is after is a mode intended for use by cloud service providers,\nwho would like to offer locked-down database services where there's\njust no way to get to the disk from inside the DB, superuser or no.\n\nWhat you're talking about is perhaps interesting to a different set of\npeople, but it doesn't offer any guarantees because it's always possible\nthat $attacker manages to hack his way into access to a superuser role.\n\n> It seems to me that you've got to think not only about vectors for\n> executing arbitrary C code and/or shell commands, but also the\n> superuser's power to mess with the catalogs. If you can UPDATE\n> pg_proc, you can certainly hack the system, I think.\n\nI think if all the functions that would let you get to the disk are\ndisabled at the C-code level, it doesn't much matter what the catalogs\nsay about them.\n\nThe main flaw I'm aware of in that argument is that it used to be possible\nfor superusers to create C-language pg_proc entries pointing at random C\nentry point symbols, eg open(2) or write(2), and then invoke those\nfunctions from SQL --- maybe with only restricted possibilities for the\narguments, but you just need to find one combination that works.\nWhen we got rid of v0 function call support, that became at least far\nmore difficult to exploit, but I'm not sure if it's entirely impossible.\nA component of this exercise would need to be making sure that that's\nbulletproof, ie you can't make a usable pg_proc entry that points at\nsomething that wasn't meant to be a SQL-callable function.\n\n> In my mind, it's\n> reasonable as a matter of security policy to decide that you don't\n> ever want plperlu on your system, only plperl.\n\nAbsolutely, but for that you can just not install plperlu's extension\nsupport files.\n\nIf you're concerned about whether that decision is un-hackable, then\nyou soon realize that you need a bulletproof no-disk-access restriction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 16:46:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, May 23, 2022 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe I shouldn't be putting words into Nathan's mouth, but I think\n> what he is after is a mode intended for use by cloud service providers,\n> who would like to offer locked-down database services where there's\n> just no way to get to the disk from inside the DB, superuser or no.\n\nThe cloud service provider use case is also what I was thinking about.\n\n> What you're talking about is perhaps interesting to a different set of\n> people, but it doesn't offer any guarantees because it's always possible\n> that $attacker manages to hack his way into access to a superuser role.\n\nI mean, you can hypothesize that any sort of restriction can be\nbypassed, regardless of how they're implemented. I don't think this is\na valid way of discriminating among possible solutions.\n\n> The main flaw I'm aware of in that argument is that it used to be possible\n> for superusers to create C-language pg_proc entries pointing at random C\n> entry point symbols, eg open(2) or write(2), and then invoke those\n> functions from SQL --- maybe with only restricted possibilities for the\n> arguments, but you just need to find one combination that works.\n> When we got rid of v0 function call support, that became at least far\n> more difficult to exploit, but I'm not sure if it's entirely impossible.\n> A component of this exercise would need to be making sure that that's\n> bulletproof, ie you can't make a usable pg_proc entry that points at\n> something that wasn't meant to be a SQL-callable function.\n\nIt's not just a question of whether it was meant to be SQL-callable --\nit's also a question of what arguments it was expecting to be called\nwith. At the very least, you can cause the server to core dump if you\npass something that isn't a valid pointer to a function that is\nexpecting a pointer, which is something a CSP very likely does not\nwant a customer to be able to do. I think, however, that there's every\npossibility that you can create more havoc than that. You can\nbasically call a function that's expecting a pointer with a pointer to\nanything you can find or guess the memory address of. Maybe that's not\nenough control to cause anything worse than a server crash, but I sure\nwouldn't bet on it. There's a lot of functions floating around, and if\nnone of them can be tricked into doing filesystem access today, well\nsomeone might add a new one tomorrow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 17:51:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's not just a question of whether it was meant to be SQL-callable --\n> it's also a question of what arguments it was expecting to be called\n> with. At the very least, you can cause the server to core dump if you\n> pass something that isn't a valid pointer to a function that is\n> expecting a pointer, which is something a CSP very likely does not\n> want a customer to be able to do. I think, however, that there's every\n> possibility that you can create more havoc than that. You can\n> basically call a function that's expecting a pointer with a pointer to\n> anything you can find or guess the memory address of. Maybe that's not\n> enough control to cause anything worse than a server crash, but I sure\n> wouldn't bet on it. There's a lot of functions floating around, and if\n> none of them can be tricked into doing filesystem access today, well\n> someone might add a new one tomorrow.\n\n[ shrug... ] So is your point that we shouldn't bother to do anything?\nI don't personally have a problem with leaving things where they stand\nin this area. However, if we're going to do something, I think at\nminimum it should involve blocking off everything we can identify as\nstraightforward reproducible methods to get disk access.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 18:42:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > It's not just a question of whether it was meant to be SQL-callable --\n> > it's also a question of what arguments it was expecting to be called\n> > with. At the very least, you can cause the server to core dump if you\n> > pass something that isn't a valid pointer to a function that is\n> > expecting a pointer, which is something a CSP very likely does not\n> > want a customer to be able to do. I think, however, that there's every\n> > possibility that you can create more havoc than that. You can\n> > basically call a function that's expecting a pointer with a pointer to\n> > anything you can find or guess the memory address of. Maybe that's not\n> > enough control to cause anything worse than a server crash, but I sure\n> > wouldn't bet on it. There's a lot of functions floating around, and if\n> > none of them can be tricked into doing filesystem access today, well\n> > someone might add a new one tomorrow.\n> \n> [ shrug... ] So is your point that we shouldn't bother to do anything?\n> I don't personally have a problem with leaving things where they stand\n> in this area. However, if we're going to do something, I think at\n> minimum it should involve blocking off everything we can identify as\n> straightforward reproducible methods to get disk access.\n\nI have a hard time seeing the value in allowing catalog hacking, even\nfor a cloud provider, and making sure to cut off all possible ways that\ncould be abused strikes me as unlikely to be successful.\n\nInstead, I'd argue that we should be continuing to work in the direction\nof splitting up what can only be done by a superuser today using\npredefined roles and other methods along those lines. How that lines up\nwith this latest ask around untrusted languages is something I'm not\nexactly sure about, but a magic configure option that is\n\"--don't-allow-what-AWS-doesn't-want-to-allow\" certainly doesn't seem\nlike it's going in the right direction (and, no, not every cloud\nprovider is going to want the exact same thing when it comes to whatever\nthis option is that we're talking about, so we'd end up having to have\nconfigure options for each if we start going down this road...).\n\nI agree with the general idea of \"has all of today's superuser rights\nexcept the ability to hack catalogs or do disk access\" being one\nuse-case we should be thinking about, along with \"also can't do network\naccess\" and \"allowed to do network or disk access but can't directly\nhack up the catalog\", but I don't see us growing configure options for\nall these things and would much rather we have a way to let folks\nconfigure their systems along these different lines, ideally without\nhaving to make that decision at build time.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 23 May 2022 19:09:03 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, May 23, 2022 at 07:09:03PM -0400, Stephen Frost wrote:\n> Instead, I'd argue that we should be continuing to work in the direction\n> of splitting up what can only be done by a superuser today using\n> predefined roles and other methods along those lines. How that lines up\n> with this latest ask around untrusted languages is something I'm not\n> exactly sure about, but a magic configure option that is\n> \"--don't-allow-what-AWS-doesn't-want-to-allow\" certainly doesn't seem\n> like it's going in the right direction (and, no, not every cloud\n> provider is going to want the exact same thing when it comes to whatever\n> this option is that we're talking about, so we'd end up having to have\n> configure options for each if we start going down this road...).\n\nI guess I'd like to do both. I agree with continuing the work with\npredefined roles, etc., but I also think there is value in being able to\ncompile out things that allow arbitrary disk/network access. My intent\nwith this thread is the latter, and I'm trying to tackle this in a way that\nis generically useful even beyond the cloud provider use case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 May 2022 17:04:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On 5/23/22 8:04 PM, Nathan Bossart wrote:\r\n> On Mon, May 23, 2022 at 07:09:03PM -0400, Stephen Frost wrote:\r\n>> Instead, I'd argue that we should be continuing to work in the direction\r\n>> of splitting up what can only be done by a superuser today using\r\n>> predefined roles and other methods along those lines. How that lines up\r\n>> with this latest ask around untrusted languages is something I'm not\r\n>> exactly sure about, but a magic configure option that is\r\n>> \"--don't-allow-what-AWS-doesn't-want-to-allow\" certainly doesn't seem\r\n>> like it's going in the right direction (and, no, not every cloud\r\n>> provider is going to want the exact same thing when it comes to whatever\r\n>> this option is that we're talking about, so we'd end up having to have\r\n>> configure options for each if we start going down this road...).\r\n> \r\n> I guess I'd like to do both. I agree with continuing the work with\r\n> predefined roles, etc., but I also think there is value in being able to\r\n> compile out things that allow arbitrary disk/network access. My intent\r\n> with this thread is the latter, and I'm trying to tackle this in a way that\r\n> is generically useful even beyond the cloud provider use case.\r\n\r\n(+1 on continuing to split up superuser into other predefined roles and \r\nother privs)\r\n\r\nFor other use cases, I suggest considering PostgreSQL deployments in \r\nenvironments that run on restricted filesystems, e.g. containers. When \r\nconfigured properly, a containerized filesystem will disallow writes \r\noutside of the data directory. However, a) they typically only restrict \r\nwrites (which is often good enough) and b) this model holds so long as \r\nthere are no exploits in the container itself.\r\n\r\nThe latter may not be our problem, but we can provide an additional risk \r\nmitigation for folks who deploy PostgreSQL in containers or other \r\nrestricted environments the option to compile out features that do allow \r\narbitrary disk access.\r\n\r\nI agree with a bunch of the upthread sentiment, but I would ask if the \r\ncurrent proposal provides acceptable risk mitigation for PostgreSQL \r\ndeployments who want to restrict users having access to the filesystem?\r\n\r\nJonathan",
"msg_date": "Mon, 23 May 2022 22:49:42 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, May 23, 2022 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [ shrug... ] So is your point that we shouldn't bother to do anything?\n> I don't personally have a problem with leaving things where they stand\n> in this area. However, if we're going to do something, I think at\n> minimum it should involve blocking off everything we can identify as\n> straightforward reproducible methods to get disk access.\n\nNo, my point is that one size doesn't fit all. Bundling everything\ntogether that could result in a disk access is going to suck too many\nmarginally-related into the same bucket. It's much better to have\nindividual switches controlling individual behaviors, so that people\ncan opt into or out of the behavior that they want.\n\nI would argue that Stephen's proposal (that is, using predefined roles\nmore) and Nathan's proposal (that is, making it possible to build only\nthe trusted version of some PL) are tackling this problem are far\nsuperior to your idea (that is, a flag to disable all disk access)\nprecisely because they are more granular. Your idea appears to\npresuppose that there is exactly one thing in this area that anybody\nwants and that we know what that thing is. I think people want a bunch\nof slightly different things and that we're probably unaware of many\nof them. Letting them pick which behaviors they want seems to me to\nmake a lot of sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 12:39:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Tue, May 24, 2022 at 12:39:16PM -0400, Robert Haas wrote:\n> On Mon, May 23, 2022 at 6:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> [ shrug... ] So is your point that we shouldn't bother to do anything?\n>> I don't personally have a problem with leaving things where they stand\n>> in this area. However, if we're going to do something, I think at\n>> minimum it should involve blocking off everything we can identify as\n>> straightforward reproducible methods to get disk access.\n> \n> No, my point is that one size doesn't fit all. Bundling everything\n> together that could result in a disk access is going to suck too many\n> marginally-related into the same bucket. It's much better to have\n> individual switches controlling individual behaviors, so that people\n> can opt into or out of the behavior that they want.\n> \n> I would argue that Stephen's proposal (that is, using predefined roles\n> more) and Nathan's proposal (that is, making it possible to build only\n> the trusted version of some PL) are tackling this problem are far\n> superior to your idea (that is, a flag to disable all disk access)\n> precisely because they are more granular. Your idea appears to\n> presuppose that there is exactly one thing in this area that anybody\n> wants and that we know what that thing is. I think people want a bunch\n> of slightly different things and that we're probably unaware of many\n> of them. Letting them pick which behaviors they want seems to me to\n> make a lot of sense.\n\nCan we do both? That is, can we add retail options for untrusted\nlanguages, generic file access functions, etc., and then also introduce a\n--disable-disk-access configuration option? The latter might even just be\na combination of retail options. This would allow for more granular\nconfigurations, but it also could help address Tom's concerns.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 May 2022 10:28:51 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Tue, May 24, 2022 at 12:39:16PM -0400, Robert Haas wrote:\n>> No, my point is that one size doesn't fit all. Bundling everything\n>> together that could result in a disk access is going to suck too many\n>> marginally-related into the same bucket. It's much better to have\n>> individual switches controlling individual behaviors, so that people\n>> can opt into or out of the behavior that they want.\n\n> Can we do both? That is, can we add retail options for untrusted\n> languages, generic file access functions, etc., and then also introduce a\n> --disable-disk-access configuration option? The latter might even just be\n> a combination of retail options. This would allow for more granular\n> configurations, but it also could help address Tom's concerns.\n\nDon't see why not.\n\nI'm a bit skeptical of Robert's position, mainly because I don't think\nhe's offered any credible threat model that would justify disabling\nindividual features of this sort but not all of them. However, if what\nit takes to have consensus is some individual knobs in addition to an\n\"easy button\", let's do it that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 13:38:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Tue, May 24, 2022 at 1:28 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Can we do both? That is, can we add retail options for untrusted\n> languages, generic file access functions, etc., and then also introduce a\n> --disable-disk-access configuration option? The latter might even just be\n> a combination of retail options. This would allow for more granular\n> configurations, but it also could help address Tom's concerns.\n\nOh, sure. We're in charge around here. We can do whatever we want! The\nonly trick is agreeing with each other to a sufficient degree to get\nsomething done ... and of course the small matter of writing the code.\n\nI guess one question is at what level we want to disable these various\nthings. Your original proposal seemed reasonable to me because I feel\nlike users who are compiling PostgreSQL ought to have control over\nwhich things they compile. If you can turn plperl and plperlu off\ntogether, and you can, then why shouldn't you be able to turn them on\nand off separately? I can't think of a good reason why we shouldn't\nmake that possible if people want it, and evidently at least one\nperson does: you. I'm even willing to assume that you represent the\ninterests of some larger group of people. :-)\n\nBut it's not evident to me that it's useful to disable everything\nspecifically at compile time. I have long thought that it's pretty\nbizarre that we permit DML on system catalogs even with\nallow_system_table_mods=off, and if I were going to provide a way to\nlock that down, I would think of doing it via a new GUC, or a\nmodification to the existing GUC, or something like that, rather than\na compile-time option -- because we might easily discover a problem in\na future release that requires catalog DML to fix, and you wouldn't\nwant to have to replace the binaries or even bounce the server to do\nthat.\n\nAnd similarly, is it really want we want here to categorically disable\nall functions that permit file access for all users under all\ncircumstances? Or do we maybe want that to be something that can be\nreconfigured at runtime? Or even just make it a privilege extended to\nsome users but not others? What about COPY TO/FROM PROGRAM?\n\nAnyway, I'm not desperately opposed to the idea of having a PostgreSQL\nmode that locks a whole lotta crap down at configure time, but I bet\nit's going to be (1) hard to get agreement that all of the relevant\nstuff is actually worth including and (2) kinda inconvenient not to be\nable to change any of that behavior without replacing the binaries. I\ndo agree that there are SOME things where people are going to\nexplicitly want that stuff to be unchangeable without replacing the\nbinaries, and that's fine. I'm just not sure that's what people are\ngoing to want in all cases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 14:10:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Tue, May 24, 2022 at 02:10:19PM -0400, Robert Haas wrote:\n> I guess one question is at what level we want to disable these various\n> things. Your original proposal seemed reasonable to me because I feel\n> like users who are compiling PostgreSQL ought to have control over\n> which things they compile. If you can turn plperl and plperlu off\n> together, and you can, then why shouldn't you be able to turn them on\n> and off separately? I can't think of a good reason why we shouldn't\n> make that possible if people want it, and evidently at least one\n> person does: you. I'm even willing to assume that you represent the\n> interests of some larger group of people. :-)\n\n:)\n\nFWIW this was my original thinking. I can choose to build/install\nextensions separately, but when it comes to PL/Tcl and PL/Perl, you've\ngot to build the trusted and untrusted stuff at the same time, and the\nuntrusted symbols remain even if you remove the control file and\ninstallation scripts. Of course, this isn't a complete solution for\nremoving the ability to do any sort of random file system access, though.\n\n> But it's not evident to me that it's useful to disable everything\n> specifically at compile time. I have long thought that it's pretty\n> bizarre that we permit DML on system catalogs even with\n> allow_system_table_mods=off, and if I were going to provide a way to\n> lock that down, I would think of doing it via a new GUC, or a\n> modification to the existing GUC, or something like that, rather than\n> a compile-time option -- because we might easily discover a problem in\n> a future release that requires catalog DML to fix, and you wouldn't\n> want to have to replace the binaries or even bounce the server to do\n> that.\n\nYeah, for certain things, a GUC probably makes more sense.\n\n> And similarly, is it really want we want here to categorically disable\n> all functions that permit file access for all users under all\n> circumstances? Or do we maybe want that to be something that can be\n> reconfigured at runtime? Or even just make it a privilege extended to\n> some users but not others? What about COPY TO/FROM PROGRAM?\n\nI guess I'd ask again whether we can do both... We've got predefined roles\nlike pg_execute_server_program that allow access to COPY TO/FROM PROGRAM,\nbut I have no way to categorically disable that sort of thing if I wanted\nto really lock things down, even for superusers. I'm not suggesting that\nevery predefined role needs a corresponding configure option, but basic\nthings like arbitrary disk/network/program access seem like reasonable\nproposals.\n\nI have about 50% of a generic --disable-disk-access patch coded up which\nI'll share soon to help inform the discussion.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 May 2022 13:58:41 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Tue, May 24, 2022 at 02:10:19PM -0400, Robert Haas wrote:\n> I guess one question is at what level we want to disable these various\n> things. Your original proposal seemed reasonable to me because I feel\n> like users who are compiling PostgreSQL ought to have control over\n> which things they compile. If you can turn plperl and plperlu off\n> together, and you can, then why shouldn't you be able to turn them on\n> and off separately? I can't think of a good reason why we shouldn't\n> make that possible if people want it, and evidently at least one\n> person does: you. I'm even willing to assume that you represent the\n> interests of some larger group of people. :-)\n\nI always thought if pg_proc is able to call an arbitrary function in an\narbitrary library, it could access to the file system, and if that is\ntrue, locking the super-user from file system access seems impossible\nand unwise to try because it would give a false sense of security.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 19:54:18 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I always thought if pg_proc is able to call an arbitrary function in an\n> arbitrary library, it could access to the file system, and if that is\n> true, locking the super-user from file system access seems impossible\n> and unwise to try because it would give a false sense of security.\n\nThat was the situation when we had v0 function call semantics. ISTM\nwe are at least a lot closer now to being able to say it's locked down:\n\"internal\" functions can only reach things that are in the fmgrtab\ntable, and \"C\" functions can only reach things that have associated\nPG_FUNCTION_INFO_V1 symbols. Plus we won't load shared libraries\nthat don't have PG_MODULE_MAGIC blocks. Maybe there's still a way\naround all that, but it's sure a lot less obvious than it once was,\nand there are probably things we could do to make it even harder.\n\nI think would-be hackers are now reduced to doing what Robert\nsuggested, which is trying to find a way to subvert a validly\nSQL-callable function by passing it bogus arguments. Maybe there's\na way to gain filesystem access by doing that, but it's not going\nto be easy if the function is not one that intended to allow such\noperations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 21:19:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Tue, May 24, 2022 at 09:19:40PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I always thought if pg_proc is able to call an arbitrary function in an\n> > arbitrary library, it could access to the file system, and if that is\n> > true, locking the super-user from file system access seems impossible\n> > and unwise to try because it would give a false sense of security.\n> \n> That was the situation when we had v0 function call semantics. ISTM\n> we are at least a lot closer now to being able to say it's locked down:\n> \"internal\" functions can only reach things that are in the fmgrtab\n> table, and \"C\" functions can only reach things that have associated\n> PG_FUNCTION_INFO_V1 symbols. Plus we won't load shared libraries\n> that don't have PG_MODULE_MAGIC blocks. Maybe there's still a way\n> around all that, but it's sure a lot less obvious than it once was,\n> and there are probably things we could do to make it even harder.\n\nOkay, good to know.\n\n> I think would-be hackers are now reduced to doing what Robert\n> suggested, which is trying to find a way to subvert a validly\n> SQL-callable function by passing it bogus arguments. Maybe there's\n> a way to gain filesystem access by doing that, but it's not going\n> to be easy if the function is not one that intended to allow such\n> operations.\n\nYes, I think if we can say we are safe in standard superuser-changeable\nthings like modifying the system tables, we might have a chance. Are\nsettings like archive_command safe?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 24 May 2022 21:34:33 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Greetings,\n\n* Nathan Bossart (nathandbossart@gmail.com) wrote:\n> I guess I'd ask again whether we can do both... We've got predefined roles\n> like pg_execute_server_program that allow access to COPY TO/FROM PROGRAM,\n> but I have no way to categorically disable that ѕort of thing if I wanted\n> to really lock things down, even for superusers. I'm not suggesting that\n> every predefined role needs a corresponding configure option, but basic\n> things like arbitrary disk/network/program access seem like reasonable\n> proposals.\n\nLocking things down \"even for superuser\" is something we've very\nexplicitly said we're not going to try and do. Even with v1 functions,\nthe ability to hack around with pg_proc strikes me as almost certainly\ngoing to provide a way for someone to gain enough control of execution\nto be able to 'break out', not to mention obvious other things like\nALTER SYSTEM to change archive_command to run whatever shell commands an\nattacker with superuser wants to..\n\n> I have about 50% of a generic --disable-disk-access patch coded up which\n> I'll share soon to help inform the discussion.\n\nDo you disable the ability of superusers to use ALTER SYSTEM with this?\n\nI really don't think this is going to be anywhere near as\nstraight-forward as it might appear to be to prevent a superuser from\nbeing able to break out of PG. Instead, we should be moving in the\ndirection of making it so that there doesn't need to be a superuser\nthat's ever logged into except under serious emergency situations where\nthe system is built to require multi-person access to do so.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 25 May 2022 13:49:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I really don't think this is going to be anywhere near as\n> straight-forward as it might appear to be to prevent a superuser from\n> being able to break out of PG.\n\nThis gets back to the point I made before about it not being worthwhile\nto implement half-measures. There is a whole lot of history and code\ndetails associated with the presumption that superuser gives you OS\naccess, and I'm certainly prepared to believe that turning that off\nis a fool's errand.\n\nPerhaps a better answer for providers who need something like this\nis to sandbox the Postgres server using OS-provided facilities.\n\n> Instead, we should be moving in the\n> direction of making it so that there doesn't need to be a superuser\n> that's ever logged into except under serious emergency situations where\n> the system is built to require multi-person access to do so.\n\nI'm a little skeptical that our present design direction really moves\nthe needle very far in this area. We've sliced and diced superuser\naplenty, but that doesn't make individual capabilities such as\npg_write_all_data or ALTER SYSTEM any less dangerous from the standpoint\nof someone trying to prevent breaking out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 14:28:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > I really don't think this is going to be anywhere near as\n> > straight-forward as it might appear to be to prevent a superuser from\n> > being able to break out of PG.\n> \n> This gets back to the point I made before about it not being worthwhile\n> to implement half-measures. There is a whole lot of history and code\n> details associated with the presumption that superuser gives you OS\n> access, and I'm certainly prepared to believe that turning that off\n> is a fool's errand.\n\nRight.\n\n> Perhaps a better answer for providers who need something like this\n> is to sandbox the Postgres server using OS-provided facilities.\n\nI'm guessing they wouldn't feel that to be a very satisfactory answer\nbut if they want to give people PG superuser access then that does seem\nlike an approach which at least might be able to work.\n\n> > Instead, we should be moving in the\n> > direction of making it so that there doesn't need to be a superuser\n> > that's ever logged into except under serious emergency situations where\n> > the system is built to require multi-person access to do so.\n> \n> I'm a little skeptical that our present design direction really moves\n> the needle very far in this area. We've sliced and diced superuser\n> aplenty, but that doesn't make individual capabilities such as\n> pg_write_all_data or ALTER SYSTEM any less dangerous from the standpoint\n> of someone trying to prevent breaking out.\n\nI'm guessing you're referring to pg_write_server_files here, not\npg_write_all_data (as the latter should generally be 'safe' in these\nterms? 
If not, would be good to understand the concern there).\n\nI don't think that what they're actually looking for is a way to give a\nuser access to pg_write_server_files or to ALTER SYSTEM though- and what\nwe have today explicitly allows them to GRANT out lots of rights to\nnon-superusers without also giving those users access to\npg_write_all_data and ALTER SYSTEM and that's basically the point.\n\nAllowing non-superusers to create extensions which have C functions is\none example of moving in this direction of allowing the 'typical DBA'\nthings to be done by non-superusers. There's certainly a lot more that\nwe can do in that direction.\n\nAllowing users to create other users without being a superuser or\neffectively being able to gain superuser access strikes me as the next\nbig step in that same direction of splitting up what only superusers are\nable to do today. That's what the recent discussion about CREATEROLE\nwas really all about, just figuring out how to allow CREATEROLE and some\nlevel of control over those roles after they've been created (and by\nwhom).\n\nWhat isn't terribly clear to me is how what started this particular\nthread is moving things in that direction though, instead it seems to be\ntrying to go in the direction of having a system where superuser could\nbe \"safely\" given out and I am concerned about the project trying to\nprovide a way to guarantee that based on some configure switches. That\nstrikes me as unlikely to end up being successful and might also make it\nso that even a superuser isn't able to do what a superuser needs to be\nable to to do- and then do we need a super superuser..?\n\nThe very specific \"it'd be nice to build PG w/o having untrusted\nlanguages compiled in\" is at least reasonably clearly contained and\nreasonable to see if we are, in fact, doing what we claim we're doing\nwith such a switch. 
A switch that's \"--disable-disk-access\" seems to\nbe basically impossible for it to *really* do what a simple reading of\nthe option implies (clearly we're going to access the disk..) and even\nif we try to say \"well, not direct disk access\" then does that need to\ndisable ALTER SYSTEM (or just for certain GUCs..?) along with things\nlike pg_write_server_files and pg_execute_server_programs, and probably\nmodifying pg_proc and maybe modification of the other PG catalogs? But\nthen, what if you actually need to modify pg_proc due to what we say to\ndo in release notes or for other reasons? Would you have to replace the\nPG binaries to do so? That doesn't strike me as particularly\nreasonable.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 25 May 2022 16:07:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Wed, May 25, 2022 at 2:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm a little skeptical that our present design direction really moves\n> the needle very far in this area. We've sliced and diced superuser\n> aplenty, but that doesn't make individual capabilities such as\n> pg_write_all_data or ALTER SYSTEM any less dangerous from the standpoint\n> of someone trying to prevent breaking out.\n\nWe have really not sliced and diced superuser in any serious way. I'm\nnot here to say that the existing predefined roles are useless,\nespecially the more powerful ones like pg_read_all_data, but I don't\nthink \"primitive\" would be an unfair characterization. The problem is\ntwofold. On the one hand, you can't delegate all of the things that\nthe server can do - in particular, and I think this is the really\nimportant thing, the superuser's unique ability to administer objects\ninside the database. On the other hand, you can only delegate\nprivileges on an all-or-nothing basis. You either have the predefined\nrole or you don't. In the case of something like pg_read_all_data,\nthat's fine, because it's equivalent to SELECT privileges on every\ntable, which are separately grantable if you prefer. But it's a little\nless obviously sufficient for things like pg_read_server_files where,\nwe must hope, you're OK with granting access to all or none of them,\nand it's clearly insufficient for administration of objects in the\ndatabase.\n\nHence the whole \"CREATEROLE and role ownership hierarchies\" thread,\nwhich strikes me as as way to make some really meaningful progress\ntoward a future in which you don't have to be superuser to do a useful\namount of database administration. Unfortunately that discussion was\nless productive than I think it could have been.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 May 2022 16:09:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Wed, May 25, 2022 at 4:07 PM Stephen Frost <sfrost@snowman.net> wrote:\n> The very specific \"it'd be nice to build PG w/o having untrusted\n> languages compiled in\" is at least reasonably clearly contained and\n> reasonable to see if we are, in fact, doing what we claim we're doing\n> with such a switch. A switch that's \"--disable-disk-access\" seems to\n> be basically impossible for it to *really* do what a simple reading of\n> the option implies (clearly we're going to access the disk..) and even\n> if we try to say \"well, not direct disk access\" then does that need to\n> disable ALTER SYSTEM (or just for certain GUCs..?) along with things\n> like pg_write_server_files and pg_execute_server_programs, and probably\n> modifying pg_proc and maybe modification of the other PG catalogs? But\n> then, what if you actually need to modify pg_proc due to what we say to\n> do in release notes or for other reasons? Would you have to replace the\n> PG binaries to do so? That doesn't strike me as particularly\n> reasonable.\n\n+1 to all that. The original proposal was self-contained and\nreasonable on its face. Blowing it up into a general\n--disable-disk-access feature makes it both a lot more difficult and a\nlot less well-defined.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 May 2022 16:12:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> The very specific \"it'd be nice to build PG w/o having untrusted\n> languages compiled in\" is at least reasonably clearly contained and\n> reasonable to see if we are, in fact, doing what we claim we're doing\n> with such a switch.\n\nI agree that it's specific and easily measured. What I don't get is why\nit's worth troubling over, if we acknowledge that keeping superusers from\nbreaking out to OS access is infeasible. At most, not having access to\nplpythonu means you've got to kluge something up involving COPY TO\nPROGRAM 'python'.\n\nIf somebody else is excited enough about it to do the legwork, I won't\nstand in the way particularly. But it strikes me as a waste of effort,\nnot only for the patch author but for everyone who has to read about\nor maintain the resulting configure options etc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 16:20:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > The very specific \"it'd be nice to build PG w/o having untrusted\n> > languages compiled in\" is at least reasonably clearly contained and\n> > reasonable to see if we are, in fact, doing what we claim we're doing\n> > with such a switch.\n> \n> I agree that it's specific and easily measured. What I don't get is why\n> it's worth troubling over, if we acknowledge that keeping superusers from\n> breaking out to OS access is infeasible. At most, not having access to\n> plpythonu means you've got to kluge something up involving COPY TO\n> PROGRAM 'python'.\n\nI agree that this seems to need more discussion and explanation as it\nisn't actually sufficient by itself for \"anyone who wants to disallow\nfile system access\" as the initial post claimed. If there isn't\nsufficient explanation coming forward to support this change by itself\nthen we can reject it, but I don't think it makes sense to try and morph\nit into something a lot more generic and a lot harder to actually get\nright and document and guarantee.\n\n> If somebody else is excited enough about it to do the legwork, I won't\n> stand in the way particularly. But it strikes me as a waste of effort,\n> not only for the patch author but for everyone who has to read about\n> or maintain the resulting configure options etc.\n\nI agree that we need to be judicious in what configure options we add as\nnew options introduce additional maintenance effort.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 25 May 2022 16:27:15 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I agree that this seems to need more discussion and explanation as it\n> isn't actually sufficient by itself for \"anyone who wants to disallow\n> file system access\" as the initial post claimed. If there isn't\n> sufficient explanation coming forward to support this change by itself\n> then we can reject it, but I don't think it makes sense to try and morph\n> it into something a lot more generic and a lot harder to actually get\n> right and document and guarantee.\n\nThe reason I pushed the discussion in that direction was that I was\ncurious to see if --disable-disk-access could actually be a thing.\nIf it could, it'd have clear utility for at least some service providers.\nBut it seems the (preliminary?) conclusion is \"no, we still can't do that\nin any way that's credibly bulletproof\". So yeah, that justification\nfor the currently-proposed patch doesn't seem to hold water.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 May 2022 16:34:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On 24.05.22 22:58, Nathan Bossart wrote:\n> FWIW this was my original thinking. I can choose to build/install\n> extensions separately, but when it comes to PL/Tcl and PL/Perl, you've\n> got to build the trusted and untrusted stuff at the same time, and the\n> untrusted symbols remain even if you remove the control file and\n> installation scripts. Of course, this isn't a complete solution for\n> removing the ability to do any sort of random file system access, though.\n\nThis only makes sense to me if you install directly from the source tree \nto your production installation. Presumably, there is usually a \npackaging step in between. And you can decide at that point which files \nto install or not to install.\n\n\n",
"msg_date": "Fri, 27 May 2022 14:03:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Given the discussion in this thread, I intend to mark the commitfest entry\nas Withdrawn shortly. Before I do, I thought I'd first check whether 0001\n[0] might be worthwhile independent of $SUBJECT. This change separates the\n[un]trusted handler and validator functions for PL/Perl so that we no\nlonger need to inspect pg_language to determine whether to use the trusted\nor untrusted code path. I was surprised to learn that you can end up with\nPL/PerlU even if you've specified the trusted handler/validator functions.\nBesides bringing things more in line with how PL/Tcl does things, this\nchange simplifies function lookup in plperl_proc_hash. I suppose such a\nchange might introduce a compatibility break for users who are depending on\nthis behavior, but I don't know if that's worth worrying about.\n\n[0] https://www.postgresql.org/message-id/attachment/133940/v1-0001-Do-not-use-pg_language-to-determine-whether-PL-Pe.patch\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 13 Jul 2022 11:53:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Given the discussion in this thread, I intend to mark the commitfest entry\n> as Withdrawn shortly. Before I do, I thought I'd first check whether 0001\n> [0] might be worthwhile independent of $SUBJECT. This change separates the\n> [un]trusted handler and validator functions for PL/Perl so that we no\n> longer need to inspect pg_language to determine whether to use the trusted\n> or untrusted code path. I was surprised to learn that you can end up with\n> PL/PerlU even if you've specified the trusted handler/validator functions.\n> Besides bringing things more in line with how PL/Tcl does things, this\n> change simplifies function lookup in plperl_proc_hash. I suppose such a\n> change might introduce a compatibility break for users who are depending on\n> this behavior, but I don't know if that's worth worrying about.\n\nMeh. Avoiding the potential repeat hashtable lookup is worth something,\nbut I'm not sure I buy that this is a semantic improvement. ISTM that\nlanpltrusted *should* be the ultimate source of truth on this point.\n\nMy feelings about it are not terribly strong either way, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Jul 2022 15:49:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On 7/13/22 12:49, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Given the discussion in this thread, I intend to mark the commitfest entry\n>> as Withdrawn shortly.\n\nI'll mark this RwF rather than bring it forward; if you'd prefer a\ndifferent status please feel free (or let me know).\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 1 Aug 2022 14:41:21 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
},
{
"msg_contents": "On Mon, Aug 01, 2022 at 02:41:21PM -0700, Jacob Champion wrote:\n> On 7/13/22 12:49, Tom Lane wrote:\n>> Nathan Bossart <nathandbossart@gmail.com> writes:\n>>> Given the discussion in this thread, I intend to mark the commitfest entry\n>>> as Withdrawn shortly.\n> \n> I'll mark this RwF rather than bring it forward; if you'd prefer a\n> different status please feel free (or let me know).\n\nThanks. I think 0001 might still be worth considering, but that needn't be\ntracked with this commitfest entry.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 21:29:33 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow building trusted languages without the untrusted versions"
}
] |
[
{
"msg_contents": "Hi hackers,\nThanks to all the developers. The attached patch updates the manual for the pg_stat_recovery_prefetch view.\nThe current pg_stat_recovery_prefetch view definition is missing the stats_reset column. The attached patch adds information in the stats_reset column.\n\nhttps://www.postgresql.org/docs/15/monitoring-stats.html#MONITORING-PG-STAT-RECOVERY-PREFETCH\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Sat, 21 May 2022 04:07:37 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "PG15 beta1 fix pg_stat_recovery_prefetch view manual"
},
{
"msg_contents": "On Sat, May 21, 2022 at 4:07 PM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n> Thanks to all the developers. The attached patch updates the manual for the pg_stat_recovery_prefetch view.\n> The current pg_stat_recovery_prefetch view definition is missing the stats_reset column. The attached patch adds information in the stats_reset column.\n\nAhh, thank you. I will push this soon.\n\n\n",
"msg_date": "Sat, 21 May 2022 16:51:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_stat_recovery_prefetch view manual"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on a feature, I came across an undefined\n(declaration-only) function CalculateMaxmumSafeLSN added by commit\nc655077 [1]. Attaching a patch to remove it.\n\n[1] commit c6550776394e25c1620bc8258427c8f1d448080d\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Tue Apr 7 18:35:00 2020 -0400\n\n Allow users to limit storage reserved by replication slots\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 21 May 2022 12:51:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove an undefined function CalculateMaxmumSafeLSN"
},
{
"msg_contents": "Hello\n\nYes, I already wrote about this artifact. And created CF app entry so it wouldn't get lost: https://commitfest.postgresql.org/38/3616/\n\nregards, Sergei\n\n\n",
"msg_date": "Sat, 21 May 2022 11:55:27 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re:Remove an undefined function CalculateMaxmumSafeLSN"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on a feature, I noticed that there are unnecessary\nincludes of \"use File::Path qw(rmtree);\" in some TAP test files and\nalso some wrong comments in 019_replslot_limit.pl. Attaching a patch\nto fix these.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 21 May 2022 13:08:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix unnecessary includes and comments in 019_replslot_limit.pl,\n 007_wal.pl and 004_timeline_switch.pl"
},
{
"msg_contents": "Hi!\n\nThis is an obvious change, I totally for it. Hope it will be commited soon.\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!This is an obvious change, I totally for it. Hope it will be commited soon.-- Best regards,Maxim Orlov.",
"msg_date": "Wed, 6 Jul 2022 16:05:03 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix unnecessary includes and comments in 019_replslot_limit.pl,\n 007_wal.pl and 004_timeline_switch.pl"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 5:05 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> Hi!\n>\n> This is an obvious change, I totally for it. Hope it will be commited soon.\n>\n\nI'm sorry for some nitpicking about changes in the comments:\n- The number of WAL segments advanced hasn't changed from 5 to 1, it just\nadvances as 1+4 as previously. So the original comment is right. I reverted\nthis in v2.\n- wal_segment_size is in bytes so comment \"(wal_segment_size * n) MB\" is\nincorrect. I corrected this to bytes.\n\nPFA v2 of a patch (only comments changed/reverted to original).\nOverall I completely agree with Maxim: the patch is good and simple enough\nto be RfC.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>",
"msg_date": "Wed, 6 Jul 2022 17:22:14 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix unnecessary includes and comments in 019_replslot_limit.pl,\n 007_wal.pl and 004_timeline_switch.pl"
},
{
"msg_contents": ">\n> I'm sorry for some nitpicking about changes in the comments:\n> - The number of WAL segments advanced hasn't changed from 5 to 1, it just\n> advances as 1+4 as previously. So the original comment is right. I reverted\n> this in v2.\n>\n\nYeah, it looks even better now.\n\n-- \nBest regards,\nMaxim Orlov.\n\nI'm sorry for some nitpicking about changes in the comments:- The number of WAL segments advanced hasn't changed from 5 to 1, it just advances as 1+4 as previously. So the original comment is right. I reverted this in v2.Yeah, it looks even better now. -- Best regards,Maxim Orlov.",
"msg_date": "Wed, 6 Jul 2022 16:26:38 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix unnecessary includes and comments in 019_replslot_limit.pl,\n 007_wal.pl and 004_timeline_switch.pl"
},
{
"msg_contents": "On Wed, Jul 06, 2022 at 04:26:38PM +0300, Maxim Orlov wrote:\n> Yeah, it looks even better now.\n\nEspecially knowing that the test uses a segment size of 1MB via initdb\nto be cheaper. v2 looks fine by itself, so applied.\n--\nMichael",
"msg_date": "Thu, 7 Jul 2022 10:24:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix unnecessary includes and comments in 019_replslot_limit.pl,\n 007_wal.pl and 004_timeline_switch.pl"
},
{
"msg_contents": ">\n> Especially knowing that the test uses a segment size of 1MB via initdb\n> to be cheaper. v2 looks fine by itself, so applied.\n>\n\nThanks, Michael!\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nEspecially knowing that the test uses a segment size of 1MB via initdb\nto be cheaper. v2 looks fine by itself, so applied.Thanks, Michael!-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com",
"msg_date": "Thu, 7 Jul 2022 12:50:00 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix unnecessary includes and comments in 019_replslot_limit.pl,\n 007_wal.pl and 004_timeline_switch.pl"
}
] |
[
{
"msg_contents": ">Zhihong Yu <zyu(at)yugabyte(dot)com> writes:\n>> I was looking at the code in hash_record()\n>> of src/backend/utils/adt/rowtypes.c\n>> It seems if nulls[i] is true, we don't need to look up the hash function.\n\n>I don't think this is worth changing. It complicates the logic,\n>rendering it unlike quite a few other functions written in the same\n>style. In cases where the performance actually matters, the hash\n>function is cached across multiple calls anyway. You might save\n>something if you have many calls in a query and not one of them\n>receives a non-null input, but how likely is that?\n\nI disagree.\nI think that is worth changing. The fact of complicating the logic\nis irrelevant.\nBut maybe the v2 attached would be a little better.\nMy doubt is the result calc when nulls are true.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 21 May 2022 10:06:36 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "Em sáb., 21 de mai. de 2022 às 10:06, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> >Zhihong Yu <zyu(at)yugabyte(dot)com> writes:\n> >> I was looking at the code in hash_record()\n> >> of src/backend/utils/adt/rowtypes.c\n> >> It seems if nulls[i] is true, we don't need to look up the hash\n> function.\n>\n> >I don't think this is worth changing. It complicates the logic,\n> >rendering it unlike quite a few other functions written in the same\n> >style. In cases where the performance actually matters, the hash\n> >function is cached across multiple calls anyway. You might save\n> >something if you have many calls in a query and not one of them\n> >receives a non-null input, but how likely is that?\n>\n> I disagree.\n> I think that is worth changing. The fact of complicating the logic\n> is irrelevant.\n> But maybe the v2 attached would be a little better.\n>\nOr v3.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 21 May 2022 10:21:48 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "\n\nOn 5/21/22 15:06, Ranier Vilela wrote:\n>>Zhihong Yu <zyu(at)yugabyte(dot)com> writes:\n>>> I was looking at the code in hash_record()\n>>> of src/backend/utils/adt/rowtypes.c\n>>> It seems if nulls[i] is true, we don't need to look up the hash function.\n> \n>>I don't think this is worth changing. It complicates the logic,\n>>rendering it unlike quite a few other functions written in the same\n>>style. In cases where the performance actually matters, the hash\n>>function is cached across multiple calls anyway. You might save\n>>something if you have many calls in a query and not one of them\n>>receives a non-null input, but how likely is that?\n> \n> I disagree.\n> I think that is worth changing. The fact of complicating the logic\n> is irrelevant.\n\nThat's a quite bold claim, and yet you haven't supported it by any\nargument whatsoever. Trade-offs between complexity and efficiency are a\ncrucial development task, so complicating the logic clearly does matter.\n\nIt might be out-weighted by efficiency benefits, but as Tom suggested\nthe cases that might benefit from this are extremely unlikely (data with\njust NULL values). And even for those cases no benchmark quantifying the\ndifference was presented.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 21 May 2022 17:05:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "Em sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <\ntomas.vondra@enterprisedb.com> escreveu:\n\n>\n>\n> On 5/21/22 15:06, Ranier Vilela wrote:\n> >>Zhihong Yu <zyu(at)yugabyte(dot)com> writes:\n> >>> I was looking at the code in hash_record()\n> >>> of src/backend/utils/adt/rowtypes.c\n> >>> It seems if nulls[i] is true, we don't need to look up the hash\n> function.\n> >\n> >>I don't think this is worth changing. It complicates the logic,\n> >>rendering it unlike quite a few other functions written in the same\n> >>style. In cases where the performance actually matters, the hash\n> >>function is cached across multiple calls anyway. You might save\n> >>something if you have many calls in a query and not one of them\n> >>receives a non-null input, but how likely is that?\n> >\n> > I disagree.\n> > I think that is worth changing. The fact of complicating the logic\n> > is irrelevant.\n>\n> That's a quite bold claim, and yet you haven't supported it by any\n> argument whatsoever. Trade-offs between complexity and efficiency are a\n> crucial development task, so complicating the logic clearly does matter.\n>\nWhat I meant is that complicating the logic in search of efficiency is\nworth it, and that's what everyone is looking for in this thread.\nLikewise, not complicating the logic, losing a little bit of efficiency,\napplied to all the code, leads to a big loss of efficiency.\nIn other words, I never miss an opportunity to gain efficiency.\n\nregards,\nRanier Vilela\n\nEm sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <tomas.vondra@enterprisedb.com> escreveu:\n\nOn 5/21/22 15:06, Ranier Vilela wrote:\n>>Zhihong Yu <zyu(at)yugabyte(dot)com> writes:\n>>> I was looking at the code in hash_record()\n>>> of src/backend/utils/adt/rowtypes.c\n>>> It seems if nulls[i] is true, we don't need to look up the hash function.\n> \n>>I don't think this is worth changing. 
It complicates the logic,\n>>rendering it unlike quite a few other functions written in the same\n>>style. In cases where the performance actually matters, the hash\n>>function is cached across multiple calls anyway. You might save\n>>something if you have many calls in a query and not one of them\n>>receives a non-null input, but how likely is that?\n> \n> I disagree.\n> I think that is worth changing. The fact of complicating the logic\n> is irrelevant.\n\nThat's a quite bold claim, and yet you haven't supported it by any\nargument whatsoever. Trade-offs between complexity and efficiency are a\ncrucial development task, so complicating the logic clearly does matter.What I meant is that complicating the logic in search of efficiency is worth it, and that's what everyone is looking for in this thread.Likewise, not complicating the logic, losing a little bit of efficiency, applied to all the code, leads to a big loss of efficiency.In other words, I never miss an opportunity to gain efficiency.regards,Ranier Vilela",
"msg_date": "Sat, 21 May 2022 12:32:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "On Sat, May 21, 2022 at 8:32 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <\n> tomas.vondra@enterprisedb.com> escreveu:\n>\n>>\n>>\n>> On 5/21/22 15:06, Ranier Vilela wrote:\n>> >>Zhihong Yu <zyu(at)yugabyte(dot)com> writes:\n>> >>> I was looking at the code in hash_record()\n>> >>> of src/backend/utils/adt/rowtypes.c\n>> >>> It seems if nulls[i] is true, we don't need to look up the hash\n>> function.\n>> >\n>> >>I don't think this is worth changing. It complicates the logic,\n>> >>rendering it unlike quite a few other functions written in the same\n>> >>style. In cases where the performance actually matters, the hash\n>> >>function is cached across multiple calls anyway. You might save\n>> >>something if you have many calls in a query and not one of them\n>> >>receives a non-null input, but how likely is that?\n>> >\n>> > I disagree.\n>> > I think that is worth changing. The fact of complicating the logic\n>> > is irrelevant.\n>>\n>> That's a quite bold claim, and yet you haven't supported it by any\n>> argument whatsoever. Trade-offs between complexity and efficiency are a\n>> crucial development task, so complicating the logic clearly does matter.\n>>\n> What I meant is that complicating the logic in search of efficiency is\n> worth it, and that's what everyone is looking for in this thread.\n> Likewise, not complicating the logic, losing a little bit of efficiency,\n> applied to all the code, leads to a big loss of efficiency.\n> In other words, I never miss an opportunity to gain efficiency.\n>\n>\nI disliked the fact that 80% of the patch was adding indentation. Instead,\nremove indentation for the normal flow case and move the loop short-circuit\nto the top of the loop where it is traditionally found.\n\nThis seems like a win on both efficiency and complexity grounds. Having\nthe \"/* see hash_array() */\" comment and logic twice is a downside but a\nminor one that could be replaced with a function call if desired.\n\ndiff --git a/src/backend/utils/adt/rowtypes.c\nb/src/backend/utils/adt/rowtypes.c\nindex db843a0fbf..0bc28d1742 100644\n--- a/src/backend/utils/adt/rowtypes.c\n+++ b/src/backend/utils/adt/rowtypes.c\n@@ -1838,6 +1838,13 @@ hash_record(PG_FUNCTION_ARGS)\n TypeCacheEntry *typentry;\n uint32 element_hash;\n\n+ if (nulls[i])\n+ {\n+ /* see hash_array() */\n+ result = (result << 5) - result + 0;\n+ continue;\n+ }\n+\n att = TupleDescAttr(tupdesc, i);\n\n if (att->attisdropped)\n@@ -1860,24 +1867,16 @@ hash_record(PG_FUNCTION_ARGS)\n my_extra->columns[i].typentry = typentry;\n }\n\n- /* Compute hash of element */\n- if (nulls[i])\n- {\n- element_hash = 0;\n- }\n- else\n- {\n- LOCAL_FCINFO(locfcinfo, 1);\n+ LOCAL_FCINFO(locfcinfo, 1);\n\n- InitFunctionCallInfoData(*locfcinfo,\n&typentry->hash_proc_finfo, 1,\n- att->attcollation, NULL, NULL);\n- locfcinfo->args[0].value = values[i];\n- locfcinfo->args[0].isnull = false;\n- element_hash = DatumGetUInt32(FunctionCallInvoke(locfcinfo));\n+ InitFunctionCallInfoData(*locfcinfo, &typentry->hash_proc_finfo, 1,\n+ att->attcollation, NULL, NULL);\n+ locfcinfo->args[0].value = values[i];\n+ locfcinfo->args[0].isnull = false;\n+ element_hash = DatumGetUInt32(FunctionCallInvoke(locfcinfo));\n\n- /* We don't expect hash support functions to return null */\n- Assert(!locfcinfo->isnull);\n- }\n+ /* We don't expect hash support functions to return null */\n+ Assert(!locfcinfo->isnull);\n\n /* see hash_array() */\n result = (result << 5) - result + element_hash;",
"msg_date": "Sat, 21 May 2022 09:04:45 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <\n> tomas.vondra@enterprisedb.com> escreveu:\n>> That's a quite bold claim, and yet you haven't supported it by any\n>> argument whatsoever. Trade-offs between complexity and efficiency are a\n>> crucial development task, so complicating the logic clearly does matter.\n\n> What I meant is that complicating the logic in search of efficiency is\n> worth it, and that's what everyone is looking for in this thread.\n> Likewise, not complicating the logic, losing a little bit of efficiency,\n> applied to all the code, leads to a big loss of efficiency.\n> In other words, I never miss an opportunity to gain efficiency.\n\n[ shrug... ] You quietly ignored Tomas' main point, which is that no\nevidence has been provided that there's actually any efficiency gain.\n\n(1) Sure, in the case where only null values are encountered during a\nquery, we can save a cache lookup, but will that be even micro-measurable\ncompared to general query overhead? Seems unlikely, especially if this is\nchanged in only one place. That ties into my complaint about how this is\njust one instance of a fairly widely used coding pattern.\n\n(2) What are the effects when we *do* eventually encounter a non-null\nvalue? The existing coding will perform all the necessary lookups\nat first call, but with the proposed change those may be spread across\nquery execution. It's not implausible that that leads to a net loss\nof efficiency, due to locality-of-access effects.\n\nI'm also concerned that this increases the size of the \"state space\"\nof this function, in that there are now more possible states of its\ncached information. While that probably doesn't add any new bugs,\nit does add complication and make things harder to reason about.\nSo the bottom line remains that I don't think it's worth changing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 May 2022 12:13:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "Em sáb., 21 de mai. de 2022 às 13:13, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Em sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <\n> > tomas.vondra@enterprisedb.com> escreveu:\n> >> That's a quite bold claim, and yet you haven't supported it by any\n> >> argument whatsoever. Trade-offs between complexity and efficiency are a\n> >> crucial development task, so complicating the logic clearly does matter.\n>\n> > What I meant is that complicating the logic in search of efficiency is\n> > worth it, and that's what everyone is looking for in this thread.\n> > Likewise, not complicating the logic, losing a little bit of efficiency,\n> > applied to all the code, leads to a big loss of efficiency.\n> > In other words, I never miss an opportunity to gain efficiency.\n>\n> [ shrug... ] You quietly ignored Tomas' main point, which is that no\n> evidence has been provided that there's actually any efficiency gain.\n>\nIMHO, the point here, is for-non-commiters everything needs benchmarks.\nBut, I have been see many commits withouts benchmarks or any evidence gains.\nAnd of course, having benchmarks is better, but for micro-optimizations,\nIt doesn't seem like it's needed that much.\n\n\n> (1) Sure, in the case where only null values are encountered during a\n> query, we can save a cache lookup, but will that be even micro-measurable\n> compared to general query overhead? Seems unlikely, especially if this is\n> changed in only one place. That ties into my complaint about how this is\n> just one instance of a fairly widely used coding pattern.\n>\nOf course, changing only in one place, the gain is tiny, but, step by step,\nthe coding pattern is changing too, becoming new \"fairly widely\".\n\n\n>\n> (2) What are the effects when we *do* eventually encounter a non-null\n> value? The existing coding will perform all the necessary lookups\n> at first call, but with the proposed change those may be spread across\n> query execution. It's not implausible that that leads to a net loss\n> of efficiency, due to locality-of-access effects.\n>\nWeel the current code, test branch for nulls first.\nMost of the time, this is not true.\nSo, the current code has poor branch prediction, at least.\nWhat I proposed, improves the branch prediction, at least.\n\n\n> I'm also concerned that this increases the size of the \"state space\"\n> of this function, in that there are now more possible states of its\n> cached information. While that probably doesn't add any new bugs,\n> it does add complication and make things harder to reason about.\n> So the bottom line remains that I don't think it's worth changing.\n>\nOf course, your arguments are all valids.\nThat would all be clarified with benchmarks, maybe the OP is interested in\ndoing them.\n\nregards,\nRanier Vilela",
"msg_date": "Sat, 21 May 2022 14:03:47 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "On Sat, May 21, 2022 at 10:04 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em sáb., 21 de mai. de 2022 às 13:13, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > Em sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <\n>> > tomas.vondra@enterprisedb.com> escreveu:\n>> >> That's a quite bold claim, and yet you haven't supported it by any\n>> >> argument whatsoever. Trade-offs between complexity and efficiency are a\n>> >> crucial development task, so complicating the logic clearly does\n>> matter.\n>>\n>> > What I meant is that complicating the logic in search of efficiency is\n>> > worth it, and that's what everyone is looking for in this thread.\n>> > Likewise, not complicating the logic, losing a little bit of efficiency,\n>> > applied to all the code, leads to a big loss of efficiency.\n>> > In other words, I never miss an opportunity to gain efficiency.\n>>\n>> [ shrug... ] You quietly ignored Tomas' main point, which is that no\n>> evidence has been provided that there's actually any efficiency gain.\n>>\n> IMHO, the point here, is for-non-commiters everything needs benchmarks.\n> But, I have been see many commits withouts benchmarks or any evidence\n> gains.\n> And of course, having benchmarks is better, but for micro-optimizations,\n> It doesn't seem like it's needed that much.\n>\n\nMostly because committers don't tend to do this kind of drive-by patching\nthat changes long-established code, which are fairly categorized as\npremature optimizations.\n\n\n>\n>> (1) Sure, in the case where only null values are encountered during a\n>> query, we can save a cache lookup, but will that be even micro-measurable\n>> compared to general query overhead? Seems unlikely, especially if this is\n>> changed in only one place. That ties into my complaint about how this is\n>> just one instance of a fairly widely used coding pattern.\n>>\n> Of course, changing only in one place, the gain is tiny, but, step by step,\n> the coding pattern is changing too, becoming new \"fairly widely\".\n>\n\nAgreed, but that isn't what was done here, there was no investigation of\nthe overall coding practice and suggestions to change them all to the\nimproved form.\n\n>\n>\n>>\n>> (2) What are the effects when we *do* eventually encounter a non-null\n>> value? The existing coding will perform all the necessary lookups\n>> at first call, but with the proposed change those may be spread across\n>> query execution. It's not implausible that that leads to a net loss\n>> of efficiency, due to locality-of-access effects.\n>>\n> Weel the current code, test branch for nulls first.\n> Most of the time, this is not true.\n> So, the current code has poor branch prediction, at least.\n> What I proposed, improves the branch prediction, at least.\n>\n\nPer my other reply, the v3 proposal did not, IMHO, do a good job of\nbranch prediction either.\n\nI find an improvement on code complexity grounds to be warranted, though\nthe benefit seems unlikely to outweigh the cost of doing it everywhere\n(fixing only one place actually increases the cost component).\n\nEven without the plausible locality-of-access argument the benefit here is\nlikely to be a micro-optimization that provides only minimal benefit.\nEvidence to the contrary is welcomed but, yes, the burden is going to be\nplaced squarely on the patch author(s) to demonstrate the benefit accrued\nfrom the code churn is worth the cost.\n\nDavid J.",
"msg_date": "Sat, 21 May 2022 10:24:46 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: check for null value before looking up the hash function"
},
{
"msg_contents": "On 5/21/22 19:24, David G. Johnston wrote:\n> On Sat, May 21, 2022 at 10:04 AM Ranier Vilela <ranier.vf@gmail.com\n> <mailto:ranier.vf@gmail.com>> wrote:\n> \n> Em sáb., 21 de mai. de 2022 às 13:13, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> escreveu:\n> \n> Ranier Vilela <ranier.vf@gmail.com <mailto:ranier.vf@gmail.com>>\n> writes:\n> > Em sáb., 21 de mai. de 2022 às 12:05, Tomas Vondra <\n> > tomas.vondra@enterprisedb.com\n> <mailto:tomas.vondra@enterprisedb.com>> escreveu:\n> >> That's a quite bold claim, and yet you haven't supported it\n> by any\n> >> argument whatsoever. Trade-offs between complexity and\n> efficiency are a\n> >> crucial development task, so complicating the logic clearly\n> does matter.\n> \n> > What I meant is that complicating the logic in search of\n> efficiency is\n> > worth it, and that's what everyone is looking for in this thread.\n> > Likewise, not complicating the logic, losing a little bit of\n> efficiency,\n> > applied to all the code, leads to a big loss of efficiency.\n> > In other words, I never miss an opportunity to gain efficiency.\n> \n> [ shrug... ] You quietly ignored Tomas' main point, which is\n> that no\n> evidence has been provided that there's actually any efficiency\n> gain.\n> \n> IMHO, the point here, is for-non-commiters everything needs benchmarks.\n> But, I have been see many commits withouts benchmarks or any\n> evidence gains.\n> And of course, having benchmarks is better, but for\n> micro-optimizations,\n> It doesn't seem like it's needed that much.\n> \n> \n> Mostly because committers don't tend to do this kind of drive-by\n> patching that changes long-established code, which are fairly\n> categorized as premature optimizations.\n> \n\nFWIW I find the argument that committers are somehow absolved of having\nto demonstrate the benefits of a change rather misleading. Perhaps even\noffensive, as it hints committers are less demanding/careful when it\ncomes to their own patches. 
Which goes directly against my experience\nand understanding of what being a committer is.\n\nI'm not going to claim every \"optimization\" patch submitted by a\ncommitter had a benchmark - some patches certainly are pushed without\nit, with just some general reasoning why the change is beneficial.\n\nBut I'm sure that when someone suggest the reasoning is wrong, it's\ntaken seriously - the discussion continues, there's a benchmark etc. And\nI can't recall a committer suggesting it's fine because some other patch\ndidn't have a benchmark either.\n\n> \n> \n> (1) Sure, in the case where only null values are encountered\n> during a\n> query, we can save a cache lookup, but will that be even\n> micro-measurable\n> compared to general query overhead? Seems unlikely, especially\n> if this is\n> changed in only one place. That ties into my complaint about\n> how this is\n> just one instance of a fairly widely used coding pattern.\n> \n> Of course, changing only in one place, the gain is tiny, but, step\n> by step,\n> the coding pattern is changing too, becoming new \"fairly widely\".\n> \n> \n> Agreed, but that isn't what was done here, there was no investigation of\n> the overall coding practice and suggestions to change them all to the\n> improved form.\n> \n\nRight. If we think the coding pattern is an improvement, we should tweak\nall the places, not just one (and hope the other places will magically\nswitch on their own).\n\nMore importantly, though, I kinda doubt tweaking more places will\nactually make the difference more significant (assuming it actually does\nimprove things). How likely is it you need to hash the same data type\nmultiple times, with just NULL values? And even if you do, improvements\nlike this tend to sum, not multiply - i.e. if the improvement is 1% for\none place, it's still 1% even if the query hits 10 such places.\n\n> \n> (2) What are the effects when we *do* eventually encounter a\n> non-null\n> value? 
The existing coding will perform all the necessary lookups\n> at first call, but with the proposed change those may be spread\n> across\n> query execution. It's not implausible that that leads to a net loss\n> of efficiency, due to locality-of-access effects.\n> \n> Weel the current code, test branch for nulls first.\n> Most of the time, this is not true.\n> So, the current code has poor branch prediction, at least.\n> What I proposed, improves the branch prediction, at least.\n> \n> \n> Per my other reply, the v3 proposal did not, IMHO, do a good job of\n> branch prediction either.\n> \n> I find an improvement on code complexity grounds to be warranted, though\n> the benefit seems unlikely to outweigh the cost of doing it everywhere\n> (fixing only one place actually increases the cost component).\n> \n> Even without the plausible locality-of-access argument the benefit here\n> is likely to be a micro-optimization that provides only minimal\n> benefit. Evidence to the contrary is welcomed but, yes, the burden is\n> going to be placed squarely on the patch author(s) to demonstrate the\n> benefit accrued from the code churn is worth the cost.\n> \n\nIMHO questions like this are exactly why some actual benchmark results\nwould be the best thing to move this forward. I certainly can't look at\nC code and say how good it is for branch prediction.\n\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 21 May 2022 23:14:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: check for null value before looking up the hash function"
}
] |
[
{
"msg_contents": "It looks like the docs weren't updated in 6f6b99d13 for v11.\n\nThe docs also seem to omit \"FOR VALUES\" literal.\nAnd don't define partition_bound_spec (which I didn't fix here).\n\ndiff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml\nindex b374d8645db..1f1c4a52a2a 100644\n--- a/doc/src/sgml/ref/create_foreign_table.sgml\n+++ b/doc/src/sgml/ref/create_foreign_table.sgml\n@@ -35,7 +35,8 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class=\"parameter\">table_name\n { <replaceable class=\"parameter\">column_name</replaceable> [ WITH OPTIONS ] [ <replaceable class=\"parameter\">column_constraint</replaceable> [ ... ] ]\n | <replaceable>table_constraint</replaceable> }\n [, ... ]\n-) ] <replaceable class=\"parameter\">partition_bound_spec</replaceable>\n+) ]\n+ { FOR VALUES <replaceable class=\"parameter\">partition_bound_spec</replaceable> | DEFAULT }\n SERVER <replaceable class=\"parameter\">server_name</replaceable>\n [ OPTIONS ( <replaceable class=\"parameter\">option</replaceable> '<replaceable class=\"parameter\">value</replaceable>' [, ... ] ) ]\n \n\n\n",
"msg_date": "Sat, 21 May 2022 08:09:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Sat, May 21, 2022 at 9:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> It looks like the docs weren't updated in 6f6b99d13 for v11.\n\nIn my defense, that commit definitely contained documentation changes.\nIt updated alter_table.sgml and create_table.sgml. I guess we missed\ncreate_foreign_table.sgml, though.\n\n> The docs also seem to omit \"FOR VALUES\" literal.\n> And don't define partition_bound_spec (which I didn't fix here).\n>\n> diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml\n> index b374d8645db..1f1c4a52a2a 100644\n> --- a/doc/src/sgml/ref/create_foreign_table.sgml\n> +++ b/doc/src/sgml/ref/create_foreign_table.sgml\n> @@ -35,7 +35,8 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class=\"parameter\">table_name\n> { <replaceable class=\"parameter\">column_name</replaceable> [ WITH OPTIONS ] [ <replaceable class=\"parameter\">column_constraint</replaceable> [ ... ] ]\n> | <replaceable>table_constraint</replaceable> }\n> [, ... ]\n> -) ] <replaceable class=\"parameter\">partition_bound_spec</replaceable>\n> +) ]\n> + { FOR VALUES <replaceable class=\"parameter\">partition_bound_spec</replaceable> | DEFAULT }\n> SERVER <replaceable class=\"parameter\">server_name</replaceable>\n> [ OPTIONS ( <replaceable class=\"parameter\">option</replaceable> '<replaceable class=\"parameter\">value</replaceable>' [, ... ] ) ]\n\nOK, makes sense. I guess we need to copy over the definition of\npartition_bound_spec from create_table.sgml here as well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 May 2022 08:44:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Wed, May 25, 2022 at 9:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Sat, May 21, 2022 at 9:09 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > It looks like the docs weren't updated in 6f6b99d13 for v11.\n>\n> In my defense, that commit definitely contained documentation changes.\n> It updated alter_table.sgml and create_table.sgml. I guess we missed\n> create_foreign_table.sgml, though.\n>\n> > The docs also seem to omit \"FOR VALUES\" literal.\n\nThat would be my mistake.\n\n> > And don't define partition_bound_spec (which I didn't fix here).\n> >\n> > diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml\n> > index b374d8645db..1f1c4a52a2a 100644\n> > --- a/doc/src/sgml/ref/create_foreign_table.sgml\n> > +++ b/doc/src/sgml/ref/create_foreign_table.sgml\n> > @@ -35,7 +35,8 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class=\"parameter\">table_name\n> > { <replaceable class=\"parameter\">column_name</replaceable> [ WITH OPTIONS ] [ <replaceable class=\"parameter\">column_constraint</replaceable> [ ... ] ]\n> > | <replaceable>table_constraint</replaceable> }\n> > [, ... ]\n> > -) ] <replaceable class=\"parameter\">partition_bound_spec</replaceable>\n> > +) ]\n> > + { FOR VALUES <replaceable class=\"parameter\">partition_bound_spec</replaceable> | DEFAULT }\n> > SERVER <replaceable class=\"parameter\">server_name</replaceable>\n> > [ OPTIONS ( <replaceable class=\"parameter\">option</replaceable> '<replaceable class=\"parameter\">value</replaceable>' [, ... ] ) ]\n>\n> OK, makes sense. I guess we need to copy over the definition of\n> partition_bound_spec from create_table.sgml here as well.\n\nYes. a2a2205761 did that for alter_table.sgml and we evidently missed\nincluding create_foreign_table.sgml in that discussion.\n\nAttached 2 patches -- one for PG 11 onwards and another for PG 10.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 May 2022 14:49:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Thu, May 26, 2022 at 1:50 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached 2 patches -- one for PG 11 onwards and another for PG 10.\n\nCommitted, except I adjusted the v11 version so that the CREATE\nFOREIGN TABLE documentation would match the CREATE TABLE documentation\nin that branch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 May 2022 12:57:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Fri, May 27, 2022 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, May 26, 2022 at 1:50 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Attached 2 patches -- one for PG 11 onwards and another for PG 10.\n>\n> Committed, except I adjusted the v11 version so that the CREATE\n> FOREIGN TABLE documentation would match the CREATE TABLE documentation\n> in that branch.\n\nThank you.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 May 2022 08:28:50 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Fri, May 27, 2022 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed, except I adjusted the v11 version so that the CREATE\n> FOREIGN TABLE documentation would match the CREATE TABLE documentation\n> in that branch.\n\nI think we should fix the syntax synopsis in the Parameters section\nof the CREATE FOREIGN TABLE reference page as well. Attached is a\npatch for that.\n\nSorry for being late for this.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Fri, 27 May 2022 19:15:32 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Fri, May 27, 2022 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, May 27, 2022 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Committed, except I adjusted the v11 version so that the CREATE\n> > FOREIGN TABLE documentation would match the CREATE TABLE documentation\n> > in that branch.\n>\n> I think we should fix the syntax synopsis in the Parameters section\n> of the CREATE FOREIGN TABLE reference page as well.\n\nOops, good catch.\n\n> Attached is a patch for that.\n\nThank you.\n\nI think we should also rewrite the description to match the CREATE\nTABLE's text, as in the attached updated patch.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 27 May 2022 21:22:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "Amit-san,\n\nOn Fri, May 27, 2022 at 9:22 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, May 27, 2022 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Attached is a patch for that.\n\n> I think we should also rewrite the description to match the CREATE\n> TABLE's text, as in the attached updated patch.\n\nActually, I thought the description would be OK as-is, because it says\n“See the similar form of CREATE TABLE for more details”, but I agree\nwith you; it’s much better to also rewrite the description as you\nsuggest.\n\nI’ll commit the patch unless Robert wants to.\n\nThanks for the review and patch!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 30 May 2022 15:27:42 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Mon, May 30, 2022 at 2:27 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, May 27, 2022 at 9:22 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, May 27, 2022 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > Attached is a patch for that.\n>\n> > I think we should also rewrite the description to match the CREATE\n> > TABLE's text, as in the attached updated patch.\n>\n> Actually, I thought the description would be OK as-is, because it says\n> “See the similar form of CREATE TABLE for more details”, but I agree\n> with you; it’s much better to also rewrite the description as you\n> suggest.\n\nI would probably just update the synopsis. It's not very hard to\nfigure out what's likely to happen even without clicking through the\nlink, so it seems like it's just being long-winded to duplicate the\nstuff here. But I don't care much if you feel otherwise.\n\n> I’ll commit the patch unless Robert wants to.\n\nPlease go ahead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 May 2022 08:35:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Tue, May 31, 2022 at 9:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, May 30, 2022 at 2:27 AM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, May 27, 2022 at 9:22 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Fri, May 27, 2022 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > > Attached is a patch for that.\n> >\n> > > I think we should also rewrite the description to match the CREATE\n> > > TABLE's text, as in the attached updated patch.\n> >\n> > Actually, I thought the description would be OK as-is, because it says\n> > “See the similar form of CREATE TABLE for more details”, but I agree\n> > with you; it’s much better to also rewrite the description as you\n> > suggest.\n>\n> I would probably just update the synopsis. It's not very hard to\n> figure out what's likely to happen even without clicking through the\n> link, so it seems like it's just being long-winded to duplicate the\n> stuff here. But I don't care much if you feel otherwise.\n\nIt looks like there are pros and cons. I think it’s a matter of\npreference, though.\n\nI thought it would be an improvement, but I agree that we can live\nwithout it, so I changed my mind; I'll go with my version. I think we\ncould revisit this later.\n\n> > I’ll commit the patch unless Robert wants to.\n>\n> Please go ahead.\n\nOK\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 1 Jun 2022 18:15:03 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Wed, Jun 1, 2022 at 6:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Tue, May 31, 2022 at 9:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I would probably just update the synopsis. It's not very hard to\n> > figure out what's likely to happen even without clicking through the\n> > link, so it seems like it's just being long-winded to duplicate the\n> > stuff here. But I don't care much if you feel otherwise.\n>\n> It looks like there are pros and cons. I think it’s a matter of\n> preference, though.\n>\n> I thought it would be an improvement, but I agree that we can live\n> without it, so I changed my mind; I'll go with my version. I think we\n> could revisit this later.\n\nI guess I'm fine with leaving the text as-is, though slightly bothered\nby leaving the phrase \"partition of the given parent table with\nspecified partition bound values\" to also cover the DEFAULT partition\ncase.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Jun 2022 10:23:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 10:23 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Jun 1, 2022 at 6:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Tue, May 31, 2022 at 9:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I would probably just update the synopsis. It's not very hard to\n> > > figure out what's likely to happen even without clicking through the\n> > > link, so it seems like it's just being long-winded to duplicate the\n> > > stuff here. But I don't care much if you feel otherwise.\n> >\n> > It looks like there are pros and cons. I think it’s a matter of\n> > preference, though.\n> >\n> > I thought it would be an improvement, but I agree that we can live\n> > without it, so I changed my mind; I'll go with my version. I think we\n> > could revisit this later.\n>\n> I guess I'm fine with leaving the text as-is, though slightly bothered\n> by leaving the phrase \"partition of the given parent table with\n> specified partition bound values\" to also cover the DEFAULT partition\n> case.\n\nI think we should discuss this separately, maybe as a HEAD-only patch,\nso I pushed my version, leaving the description as-is.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 2 Jun 2022 18:14:47 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
},
{
"msg_contents": "On Thu, Jun 2, 2022 at 6:14 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Thu, Jun 2, 2022 at 10:23 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Wed, Jun 1, 2022 at 6:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > > On Tue, May 31, 2022 at 9:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > I would probably just update the synopsis. It's not very hard to\n> > > > figure out what's likely to happen even without clicking through the\n> > > > link, so it seems like it's just being long-winded to duplicate the\n> > > > stuff here. But I don't care much if you feel otherwise.\n> > >\n> > > It looks like there are pros and cons. I think it’s a matter of\n> > > preference, though.\n> > >\n> > > I thought it would be an improvement, but I agree that we can live\n> > > without it, so I changed my mind; I'll go with my version. I think we\n> > > could revisit this later.\n> >\n> > I guess I'm fine with leaving the text as-is, though slightly bothered\n> > by leaving the phrase \"partition of the given parent table with\n> > specified partition bound values\" to also cover the DEFAULT partition\n> > case.\n>\n> I think we should discuss this separately, maybe as a HEAD-only patch,\n> so I pushed my version, leaving the description as-is.\n\nNo problem, thanks for the fix.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Jun 2022 20:10:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: CREATE FOREIGN TABLE .. PARTITION OF .. DEFAULT"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently postgres allows setting any value for max_wal_size or\nmin_wal_size, not enforcing \"at least twice as wal_segment_size\" limit\n[1]. This isn't a problem if the server continues to run, however, the\nserver can't come up after a crash or restart or maintenance or\nupgrade (goes to a continuous restart loop with FATAL errors [1]).\n\nHow about we add GUC check hooks for both max_wal_size and\nmin_wal_size where we can either emit ERROR or WARNING if values are\nnot \"at least twice as wal_segment_size\"?\n\nThoughts?\n\n[1]\nFATAL: \"max_wal_size\" must be at least twice \"wal_segment_size\"\nFATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n\n[2]\n./initdb -D data\n./pg_ctl -D data -l logfile start\n./psql -c \"alter system set max_wal_size='2MB'\" postgres\n./psql -c \"alter system set min_wal_size='2MB'\" postgres\n./psql -c \"select pg_reload_conf()\" postgres\n./pg_ctl -D data -l logfile restart\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 21 May 2022 19:08:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "Hi Bharath,\n\nCould you explain why min wal size must be at least twice but not\nequal to wal_segment_size ?\n\nthanks\nRajesh\n\nOn Sat, May 21, 2022 at 7:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Currently postgres allows setting any value for max_wal_size or\n> min_wal_size, not enforcing \"at least twice as wal_segment_size\" limit\n> [1]. This isn't a problem if the server continues to run, however, the\n> server can't come up after a crash or restart or maintenance or\n> upgrade (goes to a continuous restart loop with FATAL errors [1]).\n>\n> How about we add GUC check hooks for both max_wal_size and\n> min_wal_size where we can either emit ERROR or WARNING if values are\n> not \"at least twice as wal_segment_size\"?\n>\n> Thoughts?\n>\n> [1]\n> FATAL: \"max_wal_size\" must be at least twice \"wal_segment_size\"\n> FATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n>\n> [2]\n> ./initdb -D data\n> ./pg_ctl -D data -l logfile start\n> ./psql -c \"alter system set max_wal_size='2MB'\" postgres\n> ./psql -c \"alter system set min_wal_size='2MB'\" postgres\n> ./psql -c \"select pg_reload_conf()\" postgres\n> ./pg_ctl -D data -l logfile restart\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>\n\n\n",
"msg_date": "Sat, 21 May 2022 23:26:27 +0530",
"msg_from": "rajesh singarapu <rajesh.rs0541@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "Hi Bharath,\n\n+1.\nThis seems to be good idea to have checks on upper bound for the\nmax_wal_size and min_wal_size. We have seen customers play with these\nparameters and ran into issues.\nIt will also be better to consider all the control parameters and have a\nmin/max checks on them as well.\n\nThanks,\nMahendrakar.\n\nOn Sat, 21 May 2022 at 23:26, rajesh singarapu <rajesh.rs0541@gmail.com>\nwrote:\n\n> Hi Bharath,\n>\n> Could you explain why min wal size must be at least twice but not\n> equal to wal_segment_size ?\n>\n> thanks\n> Rajesh\n>\n> On Sat, May 21, 2022 at 7:08 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Currently postgres allows setting any value for max_wal_size or\n> > min_wal_size, not enforcing \"at least twice as wal_segment_size\" limit\n> > [1]. This isn't a problem if the server continues to run, however, the\n> > server can't come up after a crash or restart or maintenance or\n> > upgrade (goes to a continuous restart loop with FATAL errors [1]).\n> >\n> > How about we add GUC check hooks for both max_wal_size and\n> > min_wal_size where we can either emit ERROR or WARNING if values are\n> > not \"at least twice as wal_segment_size\"?\n> >\n> > Thoughts?\n> >\n> > [1]\n> > FATAL: \"max_wal_size\" must be at least twice \"wal_segment_size\"\n> > FATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n> >\n> > [2]\n> > ./initdb -D data\n> > ./pg_ctl -D data -l logfile start\n> > ./psql -c \"alter system set max_wal_size='2MB'\" postgres\n> > ./psql -c \"alter system set min_wal_size='2MB'\" postgres\n> > ./psql -c \"select pg_reload_conf()\" postgres\n> > ./pg_ctl -D data -l logfile restart\n> >\n> > Regards,\n> > Bharath Rupireddy.\n> >\n> >\n>\n>\n>\n\nHi Bharath,+1.This seems to be good idea to have checks on upper bound for the max_wal_size and min_wal_size. 
We have seen customers play with these parameters and ran into issues.It will also be better to consider all the control parameters and have a min/max checks on them as well.Thanks,Mahendrakar.On Sat, 21 May 2022 at 23:26, rajesh singarapu <rajesh.rs0541@gmail.com> wrote:Hi Bharath,\n\nCould you explain why min wal size must be at least twice but not\nequal to wal_segment_size ?\n\nthanks\nRajesh\n\nOn Sat, May 21, 2022 at 7:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Currently postgres allows setting any value for max_wal_size or\n> min_wal_size, not enforcing \"at least twice as wal_segment_size\" limit\n> [1]. This isn't a problem if the server continues to run, however, the\n> server can't come up after a crash or restart or maintenance or\n> upgrade (goes to a continuous restart loop with FATAL errors [1]).\n>\n> How about we add GUC check hooks for both max_wal_size and\n> min_wal_size where we can either emit ERROR or WARNING if values are\n> not \"at least twice as wal_segment_size\"?\n>\n> Thoughts?\n>\n> [1]\n> FATAL: \"max_wal_size\" must be at least twice \"wal_segment_size\"\n> FATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n>\n> [2]\n> ./initdb -D data\n> ./pg_ctl -D data -l logfile start\n> ./psql -c \"alter system set max_wal_size='2MB'\" postgres\n> ./psql -c \"alter system set min_wal_size='2MB'\" postgres\n> ./psql -c \"select pg_reload_conf()\" postgres\n> ./pg_ctl -D data -l logfile restart\n>\n> Regards,\n> Bharath Rupireddy.\n>\n>",
"msg_date": "Sun, 22 May 2022 10:14:17 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "On Sat, May 21, 2022 at 11:26 PM rajesh singarapu\n<rajesh.rs0541@gmail.com> wrote:\n>\n> On Sat, May 21, 2022 at 7:08 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Currently postgres allows setting any value for max_wal_size or\n> > min_wal_size, not enforcing \"at least twice as wal_segment_size\" limit\n> > [1]. This isn't a problem if the server continues to run, however, the\n> > server can't come up after a crash or restart or maintenance or\n> > upgrade (goes to a continuous restart loop with FATAL errors [1]).\n> >\n> > How about we add GUC check hooks for both max_wal_size and\n> > min_wal_size where we can either emit ERROR or WARNING if values are\n> > not \"at least twice as wal_segment_size\"?\n> >\n> > Thoughts?\n> >\n> > [1]\n> > FATAL: \"max_wal_size\" must be at least twice \"wal_segment_size\"\n> > FATAL: \"min_wal_size\" must be at least twice \"wal_segment_size\"\n> Hi Bharath,\n>\n> Could you explain why min wal size must be at least twice but not\n> equal to wal_segment_size ?\n\nIt is because postgres always needs/keeps at least one WAL file and\nthe usage of max_wal_size/min_wal_size is in terms of number of WAL\nsegments/WAL files. It doesn't make sense to set\nmax_wal_size/min_wal_size to, say, 20MB (where wal_segment_size =\n16MB) and expect the server to honor it and do something. Hence the\n'at least twice' requirement.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 23 May 2022 10:20:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "At Sat, 21 May 2022 19:08:06 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> How about we add GUC check hooks for both max_wal_size and\n> min_wal_size where we can either emit ERROR or WARNING if values are\n> not \"at least twice as wal_segment_size\"?\n\nIt should be ERROR.\n\nAs you say, it should have been changed when the unit of them is\nchanged to MB and wal_segment_size became variable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 23 May 2022 14:15:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "On Mon, May 23, 2022 at 10:45 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 21 May 2022 19:08:06 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > How about we add GUC check hooks for both max_wal_size and\n> > min_wal_size where we can either emit ERROR or WARNING if values are\n> > not \"at least twice as wal_segment_size\"?\n>\n> It should be ERROR.\n>\n> As you say, it should have been changed when the unit of them is\n> changed to MB and wal_segment_size became variable.\n\nThanks. Having check hooks for min_wal_size and max_wal_size that\nthrow errors if they aren't at least twice the wal_segment_size has a\n\"BIG\" problem - ./initdb -D data --wal-segsize=1 (or a value < 16)\nfails. This is because during the bootstrap mode the min_wal_size is\ncalculated using the supplied wal-segsize and written to\npostgresql.conf file, but in the \"post-bootstrap initialization\" in\nsingle user mode, the min_wal_size's default value is calculated as\n80MB using default wal_segment_size 16MB\n(PostmasterMain->InitializeGUCOptions->InitializeOneGUCOption->check_hook)\nas wal_segment_size isn't read from control file yet, see [1] and [2]\nfor reference.\n\nMaybe we have a fundamental problem here that in single user mode we\naren't reading control file. I have no further thoughts to offer at\nthis moment\n\n[1]\nfoobaralicebob@foobaralicebob:~/postgres/inst/bin$ ./initdb -D data\n--wal-segsize=1\nThe files belonging to this database system will be owned by user\n\"foobaralicebob\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale \"C.UTF-8\".\nThe default database encoding has accordingly been set to \"UTF8\".\nThe default text search configuration will be set to \"english\".\n\nData page checksums are disabled.\n\ncreating directory data ... ok\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... 
posix\nselecting default max_connections ... 100\nselecting default shared_buffers ... 128MB\nselecting default time zone ... Etc/UTC\ncreating configuration files ... ok\nrunning bootstrap script ... 2022-05-23 11:57:35.999 GMT [3277331]\nLOG: min_wal_size 80, wal_segment_size 16777216\n2022-05-23 11:57:36.000 GMT [3277331] LOG: min_wal_size 5,\nwal_segment_size 1048576\nok\nperforming post-bootstrap initialization ... 2022-05-23 11:57:36.178\nGMT [3277333] LOG: min_wal_size 80, wal_segment_size 16777216\n2022-05-23 11:57:36.179 GMT [3277333] LOG: min_wal_size 5,\nwal_segment_size 16777216\n2022-05-23 11:57:36.179 GMT [3277333] LOG: \"min_wal_size\" must be at\nleast twice \"wal_segment_size\"\n2022-05-23 11:57:36.179 UTC [3277333] FATAL: configuration file\n\"/home/foobaralicebob/postgres/inst/bin/data/postgresql.conf\" contains\nerrors\nchild process exited with exit code 1\ninitdb: removing data directory \"data\"\n\n[2]\n(gdb) bt\n#0 0x00007f879105cfaa in __GI___select (nfds=0, readfds=0x0,\nwritefds=0x0, exceptfds=0x0, timeout=0x7ffd31e040c0) at\n../sysdeps/unix/sysv/linux/select.c:41\n#1 0x0000556cee068326 in pg_usleep (microsec=1000000) at pgsleep.c:56\n#2 0x0000556ced9cc06e in check_min_wal_size (newval=0x7ffd31e04240,\nextra=0x7ffd31e04248, source=PGC_S_FILE) at xlog.c:4327\n#3 0x0000556cee016e2e in call_int_check_hook (conf=0x556cee365c58\n<ConfigureNamesInt+9912>, newval=0x7ffd31e04240, extra=0x7ffd31e04248,\nsource=PGC_S_FILE, elevel=15) at guc.c:11786\n#4 0x0000556cee00eb28 in parse_and_validate_value\n(record=0x556cee365c58 <ConfigureNamesInt+9912>, name=0x556cef5a9778\n\"min_wal_size\", value=0x556cef5a97a0 \"5MB\", source=PGC_S_FILE,\n elevel=15, newval=0x7ffd31e04240, newextra=0x7ffd31e04248) at guc.c:7413\n#5 0x0000556cee00f908 in set_config_option (name=0x556cef5a9778\n\"min_wal_size\", value=0x556cef5a97a0 \"5MB\", context=PGC_POSTMASTER,\nsource=PGC_S_FILE, action=GUC_ACTION_SET,\n changeVal=true, elevel=15, is_reload=false) at guc.c:7922\n#6 
0x0000556cee01b1b2 in ProcessConfigFileInternal\n(context=PGC_POSTMASTER, applySettings=true, elevel=15) at\nguc-file.l:441\n#7 0x0000556cee01ab2d in ProcessConfigFile (context=PGC_POSTMASTER)\nat guc-file.l:155\n#8 0x0000556cee00c859 in SelectConfigFiles (userDoption=0x0,\nprogname=0x556cef584eb0 \"postgres\") at guc.c:6196\n#9 0x0000556cede249c6 in PostgresSingleUserMain (argc=12,\nargv=0x556cef585800, username=0x556cef58cc60 \"foobaralicebob\") at\npostgres.c:3991\n#10 0x0000556cedc34a72 in main (argc=12, argv=0x556cef585800) at main.c:199\n(gdb)\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 23 May 2022 17:43:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Thanks. Having check hooks for min_wal_size and max_wal_size that\n> throw errors if they aren't at least twice the wal_segment_size has a\n> \"BIG\" problem - ./initdb -D data --wal-segsize=1 (or a value < 16)\n> fails.\n\nIn general, you can't do that (i.e. try to enforce constraints between\nGUC values via check hooks). It's been tried in the past and failed\nmiserably, because the hooks can't know whether the other value is\nabout to be changed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 10:08:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "At Mon, 23 May 2022 10:08:54 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > Thanks. Having check hooks for min_wal_size and max_wal_size that\n> > throw errors if they aren't at least twice the wal_segment_size has a\n> > \"BIG\" problem - ./initdb -D data --wal-segsize=1 (or a value < 16)\n> > fails.\n> \n> In general, you can't do that (i.e. try to enforce constraints between\n> GUC values via check hooks). It's been tried in the past and failed\n> miserably, because the hooks can't know whether the other value is\n> about to be changed.\n\nI thought that wal_segment_size is a semi-constant for a server life.\nBut looking at the startup sequence closely, postmaster tries\nchanging max_wal_size before reading control file.\n\nCouldn't we use PGC_S_TEST for this purpose? AlterSystemSetConfigFile\nis calling parse_and_validate_value() with PGC_S_FILE, but it is\nactually a \"value to be used later\"(@guc.h:93). So it can be thought\nthat PG_S_TEST is the right choice there. If it is still not work\nperfectly, we could have a new source value, say PGC_S_ALTER_SYSTEM,\nexactly for this use. (but I don't see a following users comes in\nfuture..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 May 2022 10:19:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "At Tue, 24 May 2022 10:19:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 23 May 2022 10:08:54 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > > Thanks. Having check hooks for min_wal_size and max_wal_size that\n> > > throw errors if they aren't at least twice the wal_segment_size has a\n> > > \"BIG\" problem - ./initdb -D data --wal-segsize=1 (or a value < 16)\n> > > fails.\n> > \n> > In general, you can't do that (i.e. try to enforce constraints between\n> > GUC values via check hooks). It's been tried in the past and failed\n> > miserably, because the hooks can't know whether the other value is\n> > about to be changed.\n> \n> I thought that wal_segment_size is a semi-constant for a server life.\n> But looking at the startup sequence closely, postmaster tries\n> changing max_wal_size before reading control file.\n> \n> Couldn't we use PGC_S_TEST for this purpose? AlterSystemSetConfigFile\n> is calling parse_and_validate_value() with PGC_S_FILE, but it is\n> actually a \"value to be used later\"(@guc.h:93). So it can be thought\n> that PG_S_TEST is the right choice there. If it is still not work\n> perfectly, we could have a new source value, say PGC_S_ALTER_SYSTEM,\n> exactly for this use. (but I don't see a following users comes in\n> future..)\n\nThis duscussion is based on the assumption that \"wal_segment_size can\nbe assumed to be a constant when a check function is called with\nPGC_S_FILE\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 May 2022 10:46:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
},
{
"msg_contents": "On Mon, May 23, 2022 at 10:08:54AM -0400, Tom Lane wrote:\n> In general, you can't do that (i.e. try to enforce constraints between\n> GUC values via check hooks). It's been tried in the past and failed\n> miserably, because the hooks can't know whether the other value is\n> about to be changed.\n\n+1. Aka, cough, 41aadee followed by 414c2fd.\n--\nMichael",
"msg_date": "Tue, 24 May 2022 11:17:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Enforce \"max_wal_size/ min_wal_size must be at least twice\n wal_segment_size\" limit while setting GUCs"
}
] |
[
{
"msg_contents": "Greetings!Please I need to cancel my subscription to this mailing list, could you delete me from it, or tell me how to unsubscribe. Best Regards,Israa Odeh. Sent from Mail for Windows \n",
"msg_date": "Sat, 21 May 2022 21:30:33 +0300",
"msg_from": "Israa Odeh <israa.k.odeh@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unsubscribing from this mailing list."
},
{
"msg_contents": "Greetings (everyone except Israa),\n\n* Israa Odeh (israa.k.odeh@gmail.com) wrote:\n> Please I need to cancel my subscription to this mailing list, could you\n> delete me from it, or tell me how to unsubscribe.\n\nTo hopefully forstall general replies and such to this- they've already\nbeen unsubscribed (and notified of that).\n\nI'll look at updating our unsubscribe-detection logic to pick up on\nvariations like this (\"Unsubscribing\" isn't detected, \"Unsubscribe\" is)\nto avoid these making it to the list in the future.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 22 May 2022 10:12:58 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Unsubscribing from this mailing list."
}
] |
[
{
"msg_contents": "The initialization in PostmasterMain() blindly turns on LoadedSSL,\nirrespective of the outcome of secure_initialize(). I don't think\nthat's how it should behave, primarily because of the pattern followed\nby the other places that call secure_initialize().\n\nThis patch makes PostmasterMain() behave identical to other places\n(SIGHUP handler, and SubPostmasterMain()) where LoadedSSL is turned on\nafter checking success of secure_initialize() call. Upon failure of\nsecure_initialize(), it now emits a log message, instead of setting\nLoadedSSL to true.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sat, 21 May 2022 23:41:18 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "> On 22 May 2022, at 08:41, Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> The initialization in PostmasterMain() blindly turns on LoadedSSL,\n> irrespective of the outcome of secure_initialize().\n\nThis call is invoked with isServerStart set to true so any error in\nsecure_initialize should error out with ereport FATAL (in be_tls_init()). That\ncould be explained in a comment though, which is currently isn't.\n\nDid you manage to get LoadedSSL set to true without SSL having been properly\ninitialized?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 22 May 2022 09:17:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Sun, May 22, 2022 at 09:17:37AM +0200, Daniel Gustafsson wrote:\n> This call is invoked with isServerStart set to true so any error in\n> secure_initialize should error out with ereport FATAL (in be_tls_init()). That\n> could be explained in a comment though, which is currently isn't.\n\nAll the inner routines of be_tls_init() would pull out a FATAL \"goto\nerror\", and it does not look like we have a hole here, so I am a bit\nsurprised by what's proposed, TBH.\n--\nMichael",
"msg_date": "Tue, 24 May 2022 11:33:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 22 May 2022, at 08:41, Gurjeet Singh <gurjeet@singh.im> wrote:\n>> The initialization in PostmasterMain() blindly turns on LoadedSSL,\n>> irrespective of the outcome of secure_initialize().\n\n> This call is invoked with isServerStart set to true so any error in\n> secure_initialize should error out with ereport FATAL (in be_tls_init()). That\n> could be explained in a comment though, which is currently isn't.\n\nThe comments for secure_initialize() and be_tls_init() both explain\nthis already.\n\nIt's not great that be_tls_init() implements two different error\nhandling behaviors, perhaps. One could imagine separating those.\nBut we've pretty much bought into such messes with the very fact\nthat elog/ereport sometimes return and sometimes not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 23:51:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Sun, May 22, 2022 at 12:17 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 22 May 2022, at 08:41, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> > The initialization in PostmasterMain() blindly turns on LoadedSSL,\n> > irrespective of the outcome of secure_initialize().\n>\n> This call is invoked with isServerStart set to true so any error in\n> secure_initialize should error out with ereport FATAL (in be_tls_init()). That\n> could be explained in a comment though, which is currently isn't.\n\nThat makes sense. I have attached a patch that adds a couple of lines\nof comments explaining this at the call-site.\n\n> Did you manage to get LoadedSSL set to true without SSL having been properly\n> initialized?\n\nFortunately, no. I'm trying to add a new network protocol, and caught\nthis inconsistency while reading/adapting the code.\n\nIf a committer sees some value in it, in the attached patch I have\nalso attempted to improve the readability of the code a little bit in\nstartup-packet handling, and in SSL functions. These are purely\ncosmetic changes, but I think they reduce repetition, and improve\nreadability, by quite a bit. For example, every ereport call evaluates\nthe same 'isServerStart ? FATAL : LOG', over and over again; replacing\nthis with a variable 'logLevel' reduces cognitive load for the reader.\nAnd I've replaced one 'goto retry1' with a 'while' loop, like the\nGSSAPI code does below that occurrence.\n\nThere's an symmetry, almost a diametric opposition, between how SSL\ninitialization error is treated when it occurs during server startup,\nversus when the error occurs during a reload/SIGHUP. During startup an\nerror in SSL initialization leads to FATAL, whereas during a SIGHUP\nit's merely a LOG message.\n\nI found this difference in treatment of SSL initialization errors\nquite bothersome, and there is no ready explanation for this. Either a\nproperly initialized SSL stack is important for server operation, or\nit is not. 
What do we gain by letting the server operate normally\nafter a reload that failed to initialize the SSL stack? Conversely, why do\nwe kill the server during startup on an SSL initialization error, when\nit's okay to operate normally after a reload that is unable to\ninitialize SSL?\n\nI have added a comment to be_tls_init(), which I hope explains this\ndifference in treatment of errors. I have also added comments to\nbe_tls_init(), explaining why we don't destroy/free the global\nSSL_context variable in case of an error in re-initialization of SSL;\nit's not just an optimization, it's essential to normal server\noperation.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Wed, 25 May 2022 22:05:10 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Mon, May 23, 2022 at 8:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 22 May 2022, at 08:41, Gurjeet Singh <gurjeet@singh.im> wrote:\n> >> The initialization in PostmasterMain() blindly turns on LoadedSSL,\n> >> irrespective of the outcome of secure_initialize().\n>\n> > This call is invoked with isServerStart set to true so any error in\n> > secure_initialize should error out with ereport FATAL (in be_tls_init()). That\n> > could be explained in a comment though, which is currently isn't.\n>\n> The comments for secure_initialize() and be_tls_init() both explain\n> this already.\n\nThe comments above secure_initialize() do, but there are no comments\nabove be_tls_init(), and nothing in there attempts to explain the\nFATAL vs. LOG difference.\n\nI think it doesn't hurt, and may actively help, if we add a comment at\nthe call-site just alluding to the fact that the function call will\nnot return in case of an error, and that it's safe to assume certain\nstate has been initialized if the function returns.\n\nThe comments *inside* be_tls_init() attempt to explain allocation of a\nnew SSL_context, but end up making it sound like it's an optimization\nto prevent memory leak. In the patch proposed a few minutes ago in\nthis thread, I have tried to explain above be_tls_init() the error\nhandling behavior, as well as the reason to retain the active\nSSL_context, if any.\n\n> It's not great that be_tls_init() implements two different error\n> handling behaviors, perhaps. 
One could imagine separating those.\n> But we've pretty much bought into such messes with the very fact\n> that elog/ereport sometimes return and sometimes not.\n\nI don't find the dual mode handling startling; I feel it's common in\nPostgres code, but it's been a while since I've touched it.\n\nWhat I would love to see improved around ereport() calls in SSL\nfunctions would be to shrink the 'ereport(); goto error;' pattern into\none statement, so that we don't introduce an accidental \"goto fail\"\nbug [1].\n\n[1]: https://nakedsecurity.sophos.com/2014/02/24/anatomy-of-a-goto-fail-apples-ssl-bug-explained-plus-an-unofficial-patch/\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Wed, 25 May 2022 22:24:08 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Wed, May 25, 2022 at 10:05 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n> I have added a comment to be_tls_init(), which I hope explains this\n> difference in treatment of errors. I have also added comments to\n> be_tls_init(), explaining why we don't destroy/free the global\n> SSL_context variable in case of an error in re-initialization of SSL;\n> it's not just an optimization, it's essential to normal server\n> operation.\n\nPlease see attached patch that reverts the unintentional removal of a\ncomment in be_tls_init(). Forgot to put that comment back in after my\nedits.\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Wed, 25 May 2022 22:53:15 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> On Mon, May 23, 2022 at 8:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The comments for secure_initialize() and be_tls_init() both explain\n>> this already.\n\n> The comments above secure_initialize() do, but there are no comments\n> above be_tls_init(), and nothing in there attempts to explain the\n> FATAL vs. LOG difference.\n\nI was looking at the comments in libpq-be.h:\n\n/*\n * Initialize global SSL context.\n *\n * If isServerStart is true, report any errors as FATAL (so we don't return).\n * Otherwise, log errors at LOG level and return -1 to indicate trouble,\n * preserving the old SSL state if any. Returns 0 if OK.\n */\nextern int\tbe_tls_init(bool isServerStart);\n\nIt isn't our usual practice to put such API comments with the extern\nrather than the function definition, so maybe those comments in libpq-be.h\nshould be moved to their respective functions? In any case, I'm not\nexcited about having three separate comments covering the same point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 15:16:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Thu, May 26, 2022 at 1:05 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> There's an symmetry, almost a diametric opposition, between how SSL\n> initialization error is treated when it occurs during server startup,\n> versus when the error occurs during a reload/SIGHUP. During startup an\n> error in SSL initialization leads to FATAL, whereas during a SIGHUP\n> it's merely a LOG message.\n>\n> I found this difference in treatment of SSL initialization errors\n> quite bothersome, and there is no ready explanation for this. Either a\n> properly initialized SSL stack is important for server operation, or\n> it is not. What do we gain by letting the server operate normally\n> after a reload that failed to initialize SSL stack. Conversely, why do\n> we kill the server during startup on SSL initialization error, when\n> it's okay to operate normally after a reload that is unable to\n> initialize SSL.\n\nI think you're overreacting to a behavior that isn't really very surprising.\n\nIf we don't initialize SSL the first time, we don't have a working SSL\nstack. If we didn't choose to die at that point, we'd be starting up a\nserver that could not accept any SSL connections. I don't think users\nwould like that.\n\nIf we don't *reinitialize* it, we *do* have a working SSL stack. We\nhaven't been able to load the updated configuration, but we still have\nthe old one. We could fall over and die anyway, but I don't think\nusers would like that either. People don't expect 'pg_ctl reload' to\nkill off a working server, even if the new configuration is bad.\n\nSo I don't really know what behavior, other than what is actually\nimplemented, would be reasonable.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 May 2022 16:00:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Thu, May 26, 2022 at 12:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Gurjeet Singh <gurjeet@singh.im> writes:\n> > On Mon, May 23, 2022 at 8:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The comments for secure_initialize() and be_tls_init() both explain\n> >> this already.\n>\n> > The comments above secure_initialize() do, but there are no comments\n> > above be_tls_init(), and nothing in there attempts to explain the\n> > FATAL vs. LOG difference.\n>\n> I was looking at the comments in libpq-be.h:\n>\n> /*\n> * Initialize global SSL context.\n> *\n> * If isServerStart is true, report any errors as FATAL (so we don't return).\n> * Otherwise, log errors at LOG level and return -1 to indicate trouble,\n> * preserving the old SSL state if any. Returns 0 if OK.\n> */\n> extern int be_tls_init(bool isServerStart);\n>\n> It isn't our usual practice to put such API comments with the extern\n> rather than the function definition,\n\nYep, and I didn't notice these comments, or even bother to look at\nthe extern declaration, precisely because my knowledge of Postgres\ncoding convention told me the comments are supposed to be on the\nfunction definition.\n\n> so maybe those comments in libpq-be.h\n> should be moved to their respective functions? In any case, I'm not\n> excited about having three separate comments covering the same point.\n\nBy 3 locations, I suppose you're referring to the definition of\nsecure_initialize(), extern declaration of be_tls_init(), and the\ndefinition of be_tls_init().\n\nThe comment on the extern declaration does not belong there, so that\none definitely needs to go. The other two locations need descriptive\ncomments, even if they might sound duplicative. Because the one on\nsecure_initialize() describes the abstraction's expectations, and the\none on be_tls_init() should refer to it, and additionally mention any\nimplementation details.\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Thu, 26 May 2022 13:01:00 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Thu, May 26, 2022 at 1:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 26, 2022 at 1:05 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> > There's an symmetry, almost a diametric opposition, between how SSL\n\nI meant \"an asymmetry\".\n\n> > initialization error is treated when it occurs during server startup,\n> > versus when the error occurs during a reload/SIGHUP. During startup an\n> > error in SSL initialization leads to FATAL, whereas during a SIGHUP\n> > it's merely a LOG message.\n> >\n> > I found this difference in treatment of SSL initialization errors\n> > quite bothersome, and there is no ready explanation for this. Either a\n> > properly initialized SSL stack is important for server operation, or\n> > it is not. What do we gain by letting the server operate normally\n> > after a reload that failed to initialize SSL stack. Conversely, why do\n> > we kill the server during startup on SSL initialization error, when\n> > it's okay to operate normally after a reload that is unable to\n> > initialize SSL.\n>\n> I think you're overreacting to a behavior that isn't really very surprising.\n\nThe behaviour is not surprising. I developed those opposing views as I\nwas reading the code. And I understood the behaviour after I was done\nreading the code. But I was irked that it wasn't clearly explained\nsomewhere nearby in code. Hence my proposal:\n\n> > I have added a comment to be_tls_init(), which I hope explains this\n> > difference in treatment of errors.\n\n> So I don't really know what behavior, other than what is actually\n> implemented, would be reasonable.\n\nI just wasn't happy about the fact that I had wasted time trying to\nfind holes (security holes!) in the behaviour. So my proposal is to\nimprove the docs/comments about this behaviour, and not the behaviour\nitself.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 26 May 2022 13:21:15 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think you're overreacting to a behavior that isn't really very surprising.\n\n> If we don't initialize SSL the first time, we don't have a working SSL\n> stack. If we didn't choose to die at that point, we'd be starting up a\n> server that could not accept any SSL connections. I don't think users\n> would like that.\n\n> If we don't *reinitialize* it, we *do* have a working SSL stack. We\n> haven't been able to load the updated configuration, but we still have\n> the old one. We could fall over and die anyway, but I don't think\n> users would like that either. People don't expect 'pg_ctl reload' to\n> kill off a working server, even if the new configuration is bad.\n\nThe larger context here is that this is (or at least is supposed to be)\nexactly the same as our reaction to any other misconfiguration: die if\nit's detected at server start, but if it's detected during a later SIGHUP\nreload, soldier on with the known-good previous settings. I believe\nthat's fairly well documented already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 17:40:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> On Thu, May 26, 2022 at 12:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> so maybe those comments in libpq-be.h\n>> should be moved to their respective functions? In any case, I'm not\n>> excited about having three separate comments covering the same point.\n\n> By 3 locations, I suppose you're referring to the definition of\n> secure_initialize(), extern declaration of be_tls_init(), and the\n> definition of be_tls_init().\n\nNo, I was counting the third comment as the one you proposed to add to\nsecure_initialize's caller. I think it's not a great idea to add such\ncomments to call sites, as they're very likely to not get maintained\nwhen somebody adjusts the API of the function. (We have a hard enough\ntime getting people to update the comments directly next to the\nfunction :-(.)\n\nI think what we ought to do here is just move the oddly-placed comments\nin libpq-be.h to be adjacent to the function definitions, as attached.\n(I just deleted the .h comments for the GSSAPI functions, as they seem\nto have adequate comments in their .c file already.)\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 May 2022 19:13:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Thu, May 26, 2022 at 2:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I think you're overreacting to a behavior that isn't really very surprising.\n>\n> > If we don't initialize SSL the first time, we don't have a working SSL\n> > stack. If we didn't choose to die at that point, we'd be starting up a\n> > server that could not accept any SSL connections. I don't think users\n> > would like that.\n>\n> > If we don't *reinitialize* it, we *do* have a working SSL stack. We\n> > haven't been able to load the updated configuration, but we still have\n> > the old one. We could fall over and die anyway, but I don't think\n> > users would like that either. People don't expect 'pg_ctl reload' to\n> > kill off a working server, even if the new configuration is bad.\n>\n> The larger context here is that this is (or at least is supposed to be)\n> exactly the same as our reaction to any other misconfiguration: die if\n> it's detected at server start, but if it's detected during a later SIGHUP\n> reload, soldier on with the known-good previous settings. I believe\n> that's fairly well documented already.\n\nThis distinction (of server startup vs. reload) is precisely what I\nthink should be conveyed and addressed in the comments of functions\nresponsible for (re)initialization of resources. Such a comment,\nspecifically calling out processing/logging/error-handling differences\nbetween startup and reload, would've definitely helped when I was\ntrying to understand the code, and trying to figure out the different\ncontexts these functions may be executed in. 
The fact that the\nProcessStartupPacket() function calls these functions, and then also\ncalls itself recursively, made understanding the code and intent much\nharder.\n\nAnd since variable/parameter/function names also convey intent, their\nnaming should also be as explicit as possible, rather than being\nvague.\n\nCalling variables/parameters 'isServerStart' leaves so much to\ninterpretation; how many and what other cases is the code called in\nwhen isServerStart == false? I think a better scheme would've been to\nname the parameter as 'reason', and use an enum/constant to convey\nthat there are exactly 2 higher-level cases that the code is called in\ncontext of: enum InitializationReason { ServerStartup, ServerReload }.\n\nIn these functions, it's not important to know the distinction of\nwhether the server is starting-up vs. already running (or whatever\nother states the server may be in); instead it's important to know\nthe distinction of whether the server is starting-up or being\nreloaded; other states of the server operation, if any, do not matter\nhere.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 26 May 2022 17:07:34 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
},
{
"msg_contents": "On Thu, May 26, 2022 at 4:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Gurjeet Singh <gurjeet@singh.im> writes:\n> > On Thu, May 26, 2022 at 12:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> so maybe those comments in libpq-be.h\n> >> should be moved to their respective functions? In any case, I'm not\n> >> excited about having three separate comments covering the same point.\n>\n> > By 3 locations, I suppose you're referring to the definition of\n> > secure_initialize(), extern declaration of be_tls_init(), and the\n> > definition of be_tls_init().\n>\n> No, I was counting the third comment as the one you proposed to add to\n> secure_initialize's caller.\n\nI think a comment at that call-site is definitely warranted. Consider\nthe code as it is right now...\n\n    if (EnableSSL)\n    {\n        (void) secure_initialize(true);\n        LoadedSSL = true;\n    }\n\n... as a first-time reader.\n\nReader> This is an important piece of code, not just because it is\ncalled from PostmasterMain(), early in the startup process, but also\nbecause it deals with SSL; it has 'SSL' and 'secure' plastered all\nover it. But wait, they don't care what the outcome of this important\nfunction call is??!! I might not have paid much attention to it, but\nthe call is adorned with '(void)', which (i) attracts my attention\nmore, and (ii) why are they choosing to throw away the result of such\nan important function call?! And then, they declare SSL has been\n\"Loaded\"... somebody committed half-written code! Perhaps they were in a\nhurry. Perhaps this is a result of an automatic git-merge gone wrong.\nLet me dig through the code and see if I can find a vulnerability.\n<Many hours later, after learning about Postgres' weird\nereport/error-handling, startup vs. reload, getting bashed on IRC or\nelsewhere> Duh, there's nothing wrong here. 
<Moves on>.\n\nNow, consider the same code, and the ensuing thought-process of the reader:\n\n if (EnableSSL)\n {\n /* Any failure during SSL initialization here will result in\nFATAL error. */\n (void) secure_initialize(true);\n /* ... so here we know for sure that SSL was successfully\ninitialized. */\n LoadedSSL = true;\n }\n\nReader> This is an important piece of code, not just because it is\ncalled from PostmasterMain(), early in the startup process, but also\nbecause it deals with SSL; it has 'SSL' and 'secure' plastered all\nover it. But wait, they don't care what the outcome of this important\nfunction call is??!! That's okay, because the explanation in the\ncomment makes sense. <Learns about special-case handling of FATAL and\nabove> There's nothing wrong here. <Moves on>.\n\n> I think it's not a great idea to add such\n> comments to call sites, as they're very likely to not get maintained\n> when somebody adjusts the API of the function. (We have a hard enough\n> time getting people to update the comments directly next to the\n> function :-(.)\n\nThat's unfortunate. But I think we should continue to strive for more\nmaintainable, readable, extensible code.\n\n> I think what we ought to do here is just move the oddly-placed comments\n> in libpq-be.h to be adjacent to the function definitions, as attached.\n> (I just deleted the .h comments for the GSSAPI functions, as they seem\n> to have adequate comments in their .c file already.)\n\nPlease see if anything from my patches is usable. I did not get\nclarity around startup vs. reload handling until my previous email, so\nthere may not be much of use in my patches. But I think a few words\nmentioning the difference in resource (re)initialization during\nstartup vs reload would add a lot of value.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 26 May 2022 17:44:54 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "Re: Patch: Don't set LoadedSSL unless secure_initialize succeeds"
}
] |
[
{
"msg_contents": "Hi David,\n\n>Over the past few days I've been gathering some benchmark results\n>together to show the sort performance improvements in PG15 [1].\n\n>One of the test cases I did was to demonstrate Heikki's change to use\n>a k-way merge (65014000b).\n\n>The test I did to try this out was along the lines of:\n\n>set max_parallel_workers_per_gather = 0;\n>create table t (a bigint not null, b bigint not null, c bigint not\n>null, d bigint not null, e bigint not null, f bigint not null);\n\n>insert into t select x,x,x,x,x,x from generate_Series(1,140247142) x; --\n10GB!\n>vacuum freeze t;\n\n>The query I ran was:\n\n>select * from t order by a offset 140247142;\nI redid this test here:\nWindows 10 64 bits\nmsvc 2019 64 bits\nRAM 8GB\nSSD 256 GB\n\nHEAD (default configuration)\nTime: 229396,551 ms (03:49,397)\nPATCHED:\nTime: 220887,346 ms (03:40,887)\n\n>I tested various sizes of work_mem starting at 4MB and doubled that\n>all the way to 16GB. For many of the smaller values of work_mem the\n>performance is vastly improved by Heikki's change, however for\n>work_mem = 64MB I detected quite a large slowdown. PG14 took 20.9\n>seconds and PG15 beta 1 took 29 seconds!\n\n>I've been trying to get to the bottom of this today and finally have\n>discovered this is due to the tuple size allocations in the sort being\n>exactly 64 bytes. Prior to 40af10b57 (Use Generation memory contexts\n>to store tuples in sorts) the tuple for the sort would be stored in an\n>aset context. After 40af10b57 we'll use a generation context. 
The\n>idea with that change is that the generation context does no\n>power-of-2 round ups for allocations, so we save memory in most cases.\n>However, due to this particular test having a tuple size of 64-bytes,\n>there was no power-of-2 wastage with aset.\n\n>The problem is that generation chunks have a larger chunk header than\n>aset do due to having to store the block pointer that the chunk\n>belongs to so that GenerationFree() can increment the nfree chunks in\n>the block. aset.c does not require this as freed chunks just go onto a\n>freelist that's global to the entire context.\n\n>Basically, for my test query, the slowdown is because instead of being\n>able to store 620702 tuples per tape over 226 tapes with an aset\n>context, we can now only store 576845 tuples per tape resulting in\n>requiring 244 tapes when using the generation context.\n\n>If I had added column \"g\" to make the tuple size 72 bytes causing\n>aset's code to round allocations up to 128 bytes and generation.c to\n>maintain the 72 bytes then the sort would have stored 385805 tuples\n>over 364 batches for aset and 538761 tuples over 261 batches using the\n>generation context. That would have been a huge win.\n\n>So it basically looks like I discovered a very bad case that causes a\n>significant slowdown. Yet other cases that are not an exact power of\n>2 stand to gain significantly from this change.\n\n>One thing 40af10b57 does is stops those terrible performance jumps\n>when the tuple size crosses a power-of-2 boundary. 
The performance\n>should be more aligned to the size of the data being sorted now...\n>Unfortunately, that seems to mean regressions for large sorts with\n>power-of-2 sized tuples.\n\nIt seems to me that the solution would be to use aset allocations\nwhen the size of the tuples is a power of 2?\n\nif (state->sortopt & TUPLESORT_ALLOWBOUNDED ||\n    (state->memtupsize & (state->memtupsize - 1)) == 0)\n    state->tuplecontext = AllocSetContextCreate(state->sortcontext,\n        \"Caller tuples\", ALLOCSET_DEFAULT_SIZES);\nelse\n    state->tuplecontext = GenerationContextCreate(state->sortcontext,\n        \"Caller tuples\", ALLOCSET_DEFAULT_SIZES);\n\nI took a look and tried some improvements to see if I had a better result.\n\nWould you mind taking a look and testing?\n\nregards,\n\nRanier Vilela",
"msg_date": "Sun, 22 May 2022 16:11:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "FYI not sure why, but your responses seem to break threading quite\noften, due to missing headers identifying the message you're responding\nto (In-Reply-To, References). Not sure why or how to fix it, but this\nmakes it much harder to follow the discussion.\n\nOn 5/22/22 21:11, Ranier Vilela wrote:\n> Hi David,\n> \n>>Over the past few days I've been gathering some benchmark results\n>>together to show the sort performance improvements in PG15 [1].\n> \n>>One of the test cases I did was to demonstrate Heikki's change to use\n>>a k-way merge (65014000b).\n> \n>>The test I did to try this out was along the lines of:\n> \n>>set max_parallel_workers_per_gather = 0;\n>>create table t (a bigint not null, b bigint not null, c bigint not\n>>null, d bigint not null, e bigint not null, f bigint not null);\n> \n>>insert into t select x,x,x,x,x,x from generate_Series(1,140247142) x;\n> -- 10GB!\n>>vacuum freeze t;\n> \n>>The query I ran was:\n> \n>>select * from t order by a offset 140247142;\n> \n> I redid this test here:\n> Windows 10 64 bits\n> msvc 2019 64 bits\n> RAM 8GB\n> SSD 256 GB\n> \n> HEAD (default configuration)\n> Time: 229396,551 ms (03:49,397)\n> PATCHED:\n> Time: 220887,346 ms (03:40,887)\n> \n\nThis is 10x longer than reported by David. Presumably David used a\nmachine with a lot of RAM, while your system has 8GB and so is I/O bound.\n\nAlso, what exactly does \"patched\" mean? The patch you attached?\n\n\n>>I tested various sizes of work_mem starting at 4MB and doubled that\n>>all the way to 16GB. For many of the smaller values of work_mem the\n>>performance is vastly improved by Heikki's change, however for\n>>work_mem = 64MB I detected quite a large slowdown. PG14 took 20.9\n>>seconds and PG15 beta 1 took 29 seconds!\n> \n>>I've been trying to get to the bottom of this today and finally have\n>>discovered this is due to the tuple size allocations in the sort being\n>>exactly 64 bytes. 
Prior to 40af10b57 (Use Generation memory contexts\n>>to store tuples in sorts) the tuple for the sort would be stored in an\n>>aset context. After 40af10b57 we'll use a generation context. The\n>>idea with that change is that the generation context does no\n>>power-of-2 round ups for allocations, so we save memory in most cases.\n>>However, due to this particular test having a tuple size of 64-bytes,\n>>there was no power-of-2 wastage with aset.\n> \n>>The problem is that generation chunks have a larger chunk header than\n>>aset do due to having to store the block pointer that the chunk\n>>belongs to so that GenerationFree() can increment the nfree chunks in\n>>the block. aset.c does not require this as freed chunks just go onto a\n>>freelist that's global to the entire context.\n> \n>>Basically, for my test query, the slowdown is because instead of being\n>>able to store 620702 tuples per tape over 226 tapes with an aset\n>>context, we can now only store 576845 tuples per tape resulting in\n>>requiring 244 tapes when using the generation context.\n> \n>>If I had added column \"g\" to make the tuple size 72 bytes causing\n>>aset's code to round allocations up to 128 bytes and generation.c to\n>>maintain the 72 bytes then the sort would have stored 385805 tuples\n>>over 364 batches for aset and 538761 tuples over 261 batches using the\n>>generation context. That would have been a huge win.\n> \n>>So it basically looks like I discovered a very bad case that causes a\n>>significant slowdown. Yet other cases that are not an exact power of\n>>2 stand to gain significantly from this change.\n> \n>>One thing 40af10b57 does is stops those terrible performance jumps\n>>when the tuple size crosses a power-of-2 boundary. 
The performance\n>>should be more aligned to the size of the data being sorted now...\n>>Unfortunately, that seems to mean regressions for large sorts with\n>>power-of-2 sized tuples.\n> \n> It seems to me that the solution would be to use aset allocations\n> \n> when the size of the tuples is power-of-2?\n> \n> if (state->sortopt & TUPLESORT_ALLOWBOUNDED ||\n> (state->memtupsize & (state->memtupsize - 1)) == 0)\n> state->tuplecontext = AllocSetContextCreate(state->sortcontext,\n> \"Caller tuples\", ALLOCSET_DEFAULT_SIZES);\n> else\n> state->tuplecontext = GenerationContextCreate(state->sortcontext,\n> \"Caller tuples\", ALLOCSET_DEFAULT_SIZES);\n> \n\nI'm pretty sure this is pointless, because memtupsize is the size of the\nmemtuples array. But the issue is about size of the tuples. After all,\nDavid was talking about 64B chunks, but the array is always at least\n1024 elements, so it obviously can't be the same thing.\n\nHow would we even know how large the tuples will be at this point,\nbefore we even see the first of them?\n\n> I took a look and tried some improvements to see if I had a better result.\n> \n\nIMHO special-casing should be the last resort, because it makes the\nbehavior much harder to follow. Also, we're talking about sort, but\ndon't other places using Generation context have the same issue?\nTreating prefferrable to find a fix addressing all those places,\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 May 2022 23:01:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
},
{
"msg_contents": "Em seg., 23 de mai. de 2022 às 18:01, Tomas Vondra <\ntomas.vondra@enterprisedb.com> escreveu:\n\n> FYI not sure why, but your responses seem to break threading quite\n> often, due to missing headers identifying the message you're responding\n> to (In-Reply-To, References). Not sure why or how to fix it, but this\n> makes it much harder to follow the discussion.\n>\n> On 5/22/22 21:11, Ranier Vilela wrote:\n> > Hi David,\n> >\n> >>Over the past few days I've been gathering some benchmark results\n> >>together to show the sort performance improvements in PG15 [1].\n> >\n> >>One of the test cases I did was to demonstrate Heikki's change to use\n> >>a k-way merge (65014000b).\n> >\n> >>The test I did to try this out was along the lines of:\n> >\n> >>set max_parallel_workers_per_gather = 0;\n> >>create table t (a bigint not null, b bigint not null, c bigint not\n> >>null, d bigint not null, e bigint not null, f bigint not null);\n> >\n> >>insert into t select x,x,x,x,x,x from generate_Series(1,140247142) x;\n> > -- 10GB!\n> >>vacuum freeze t;\n> >\n> >>The query I ran was:\n> >\n> >>select * from t order by a offset 140247142;\n> >\n> > I redid this test here:\n> > Windows 10 64 bits\n> > msvc 2019 64 bits\n> > RAM 8GB\n> > SSD 256 GB\n> >\n> > HEAD (default configuration)\n> > Time: 229396,551 ms (03:49,397)\n> > PATCHED:\n> > Time: 220887,346 ms (03:40,887)\n> >\n>\n> This is 10x longer than reported by David. Presumably David used a\n> machine a lot of RAM, while your system has 8GB and so is I/O bound.\n>\nProbably, but Windows is slower than Linux, certainly.\n\n>\n> Also, what exactly does \"patched\" mean? The patch you attached?\n>\nIt means the results of the benchmark with the patch applied.\n\n\n>\n> >>I tested various sizes of work_mem starting at 4MB and doubled that\n> >>all the way to 16GB. 
For many of the smaller values of work_mem the\n> >>performance is vastly improved by Heikki's change, however for\n> >>work_mem = 64MB I detected quite a large slowdown. PG14 took 20.9\n> >>seconds and PG15 beta 1 took 29 seconds!\n> >\n> >>I've been trying to get to the bottom of this today and finally have\n> >>discovered this is due to the tuple size allocations in the sort being\n> >>exactly 64 bytes. Prior to 40af10b57 (Use Generation memory contexts\n> >>to store tuples in sorts) the tuple for the sort would be stored in an\n> >>aset context. After 40af10b57 we'll use a generation context. The\n> >>idea with that change is that the generation context does no\n> >>power-of-2 round ups for allocations, so we save memory in most cases.\n> >>However, due to this particular test having a tuple size of 64-bytes,\n> >>there was no power-of-2 wastage with aset.\n> >\n> >>The problem is that generation chunks have a larger chunk header than\n> >>aset do due to having to store the block pointer that the chunk\n> >>belongs to so that GenerationFree() can increment the nfree chunks in\n> >>the block. aset.c does not require this as freed chunks just go onto a\n> >>freelist that's global to the entire context.\n> >\n> >>Basically, for my test query, the slowdown is because instead of being\n> >>able to store 620702 tuples per tape over 226 tapes with an aset\n> >>context, we can now only store 576845 tuples per tape resulting in\n> >>requiring 244 tapes when using the generation context.\n> >\n> >>If I had added column \"g\" to make the tuple size 72 bytes causing\n> >>aset's code to round allocations up to 128 bytes and generation.c to\n> >>maintain the 72 bytes then the sort would have stored 385805 tuples\n> >>over 364 batches for aset and 538761 tuples over 261 batches using the\n> >>generation context. That would have been a huge win.\n> >\n> >>So it basically looks like I discovered a very bad case that causes a\n> >>significant slowdown. 
Yet other cases that are not an exact power of\n> >>2 stand to gain significantly from this change.\n> >\n> >>One thing 40af10b57 does is stops those terrible performance jumps\n> >>when the tuple size crosses a power-of-2 boundary. The performance\n> >>should be more aligned to the size of the data being sorted now...\n> >>Unfortunately, that seems to mean regressions for large sorts with\n> >>power-of-2 sized tuples.\n> >\n> > It seems to me that the solution would be to use aset allocations\n> >\n> > when the size of the tuples is power-of-2?\n> >\n> > if (state->sortopt & TUPLESORT_ALLOWBOUNDED ||\n> > (state->memtupsize & (state->memtupsize - 1)) == 0)\n> > state->tuplecontext = AllocSetContextCreate(state->sortcontext,\n> > \"Caller tuples\", ALLOCSET_DEFAULT_SIZES);\n> > else\n> > state->tuplecontext = GenerationContextCreate(state->sortcontext,\n> > \"Caller tuples\", ALLOCSET_DEFAULT_SIZES);\n> >\n>\n> I'm pretty sure this is pointless, because memtupsize is the size of the\n> memtuples array. But the issue is about size of the tuples. After all,\n> David was talking about 64B chunks, but the array is always at least\n> 1024 elements, so it obviously can't be the same thing.\n>\nIt was more of a guessing attempt.\n\n\n> How would we even know how large the tuples will be at this point,\n> before we even see the first of them?\n>\nI don't know how.\n\n\n> > I took a look and tried some improvements to see if I had a better\n> result.\n> >\n>\n> IMHO special-casing should be the last resort, because it makes the\n> behavior much harder to follow.\n\nProbably, but in my tests, there has been some gain.\n\nAlso, we're talking about sort, but\n> don't other places using Generation context have the same issue?\n>\nWell, for PG15 is what was addressed, just sort.\n\n\n> Treating prefferrable to find a fix addressing all those places,\n>\nFor now, I'm just focused on sorts.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Mon, 23 May 2022 19:02:00 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PG15 beta1 sort performance regression due to Generation context\n change"
}
] |
[
{
"msg_contents": "forking: <20220307191054.n5enrlf6kdn7zc42@alap3.anarazel.de>\n\nAn update.\n\nccache 4.6.1 was released which allows compiling postgres\nI submitted a request to update the package in chocolatey.\n\nBut with the existing build system, it's no faster anyway, I guess due to poor\nuse of parallelism.\nhttps://cirrus-ci.com/task/5972008205811712\n\nCurrently, meson doesn't (automatically) use ccache with MSVC - see\nmesonbuild/environment.py.\n\nAnd CC=ccache gives an error - I suppose it should not try to pop ccache off the\ncompiler list if the list has only one element.\n\n|[21:44:49.791] File \"C:\\python\\lib\\site-packages\\mesonbuild\\compilers\\detect.py\", line 375, in _detect_c_or_cpp_compiler\n|[21:44:49.791] compiler_name = os.path.basename(compiler[0])\n|[21:44:49.791] IndexError: list index out of range\n|...\n|[21:44:49.791] meson.build:1:0: ERROR: Unhandled python exception\n|[21:44:49.791] \n|[21:44:49.791] This is a Meson bug and should be reported!\n\nBut it can be convinced to use ccache by renaming the executable to \"pgccache\".\nWhich builds in 46sec: https://cirrus-ci.com/task/4862234995195904\nThis requires ccache 4.6, released in Feburary and already in choco.\nNote that ccache supports neither /Zi debugging nor precompiled headers.\nI'm not sure, but -Dc_args=/Z7 may do what's wanted here.\n\n\n",
"msg_date": "Sun, 22 May 2022 18:26:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "ccache, MSVC, and meson"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-22 18:26:06 -0500, Justin Pryzby wrote:\n> forking: <20220307191054.n5enrlf6kdn7zc42@alap3.anarazel.de>\n> \n> An update.\n> \n> ccache 4.6.1 was released which allows compiling postgres\n> I submitted a request to update the package in chocolatey.\n> \n> But with the existing build system, it's no faster anyway, I guess due to poor\n> use of parallelism.\n> https://cirrus-ci.com/task/5972008205811712\n\nNo, because it never uses caching, because the way we set the output director\ncauses ccache to never cache.\n\n\n> Currently, meson doesn't (automatically) use ccache with MSVC - see\n> mesonbuild/environment.py.\n> \n> And CC=ccache gives an error - I suppose it should not try to pop ccache off the\n> compiler list if the list has only one element.\n> [...]\n\n> But it can be convinced to use ccache by renaming the executable to \"pgccache\".\n> Which builds in 46sec: https://cirrus-ci.com/task/4862234995195904\n> This requires ccache 4.6, released in Feburary and already in choco.\n> Note that ccache supports neither /Zi debugging nor precompiled headers.\n> I'm not sure, but -Dc_args=/Z7 may do what's wanted here.\n\nThe spurious message should be fixed, of course. I suspect you dont need a\nwrapper, you can just set CC='ccache cl.exe' or similar? Afaics it's not\nmeaningful to do 'CC=ccache.exe' alone, because then it'll interpret arguments\nas ccache options, rather than compiler options.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 12:30:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "On Tue, May 24, 2022 at 12:30:59PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-05-22 18:26:06 -0500, Justin Pryzby wrote:\n> > forking: <20220307191054.n5enrlf6kdn7zc42@alap3.anarazel.de>\n> > \n> > An update.\n> > \n> > ccache 4.6.1 was released which allows compiling postgres\n> > I submitted a request to update the package in chocolatey.\n> > \n> > But with the existing build system, it's no faster anyway, I guess due to poor\n> > use of parallelism.\n> > https://cirrus-ci.com/task/5972008205811712\n> \n> No, because it never uses caching, because the way we set the output director\n> causes ccache to never cache.\n\nI think you're referring to the trailing backslash in the MSVC project file,\nmeaning \"write to a filename under this directory\":\n\nsrc/tools/msvc/MSBuildProject.pm:\n<ObjectFileName>.\\\\$cfgname\\\\$self->{name}\\\\</ObjectFileName>\n\nccache was fixed to handle that in 4.6, and could be worked around before that\nby adding \"%(Filename).obj\".\n\nhttps://github.com/ccache/ccache/issues/1018\n\nIn any case, it really is caching, but without any positive effect:\n\n[17:02:01.555] Hits: 1398 / 1398 (100.0 %)\n[17:02:01.555] Direct: 1398 / 1398 (100.0 %)\n[17:02:01.555] Preprocessed: 0 / 0\n[17:02:01.555] Misses: 0\n[17:02:01.555] Direct: 0\n[17:02:01.555] Preprocessed: 0\n[17:02:01.555] Primary storage:\n[17:02:01.555] Hits: 2796 / 2796 (100.0 %)\n[17:02:01.555] Misses: 0\n\n> > Currently, meson doesn't (automatically) use ccache with MSVC - see\n> > mesonbuild/environment.py.\n> > \n> > And CC=ccache gives an error - I suppose it should not try to pop ccache off the\n> > compiler list if the list has only one element.\n> > [...]\n> \n> > But it can be convinced to use ccache by renaming the executable to \"pgccache\".\n> > Which builds in 46sec: https://cirrus-ci.com/task/4862234995195904\n> > This requires ccache 4.6, released in Feburary and already in choco.\n> > Note that ccache supports neither /Zi debugging nor 
precompiled headers.\n> > I'm not sure, but -Dc_args=/Z7 may do what's wanted here.\n> \n> The spurious message should be fixed, of course. I suspect you dont need a\n> wrapper, you can just set CC='ccache cl.exe' or similar? Afaics it's not\n> meaningful to do 'CC=ccache.exe' alone, because then it'll interpret arguments\n> as ccache options, rather than compiler options.\n\nif meson didn't crash CC=ccache.exe might have worked, because I had set\nCCACHE_COMPILER.\n\nAs I recall, CC='ccache cl.exe' didn't work because it didn't attempt to do any\nargument splitting.\n\nThe copy of ccache.exe is necessary because otherwise ccache \"skips\" over any\nleading \"ccache[.exe]\" components while searching for the real compiler.\n\nThis is the only way I've gotten it to work (but feel free to comment at:\nhttps://github.com/ccache/ccache/issues/1039)\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 24 May 2022 14:52:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-24 14:52:02 -0500, Justin Pryzby wrote:\n> > The spurious message should be fixed, of course. I suspect you dont need a\n> > wrapper, you can just set CC='ccache cl.exe' or similar? Afaics it's not\n> > meaningful to do 'CC=ccache.exe' alone, because then it'll interpret arguments\n> > as ccache options, rather than compiler options.\n> \n> if meson didn't crash CC=ccache.exe might have worked, because I had set\n> CCACHE_COMPILER.\n\nDid you report the issue? Should be simple enough to fix.\n\nI seriously doubt it's a good idea to use CCACHE_COMPILER - there's no way\nmeson (or autoconf or ..) can rely on the results of compiler tests that way,\nsince CCACHE_COMPILER can change at any time.\n\n\n> As I recall, CC='ccache cl.exe' didn't work because it didn't attempt to do any\n> argument splitting.\n\nI tried it, and it works for me when building with ninja (compiling with\ncl.exe). I assume you are using msbuild?\n\nA cached build takes 21s on my VM, fwiw, vs 199s uncached.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 13:30:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "On Tue, May 24, 2022 at 01:30:39PM -0700, Andres Freund wrote:\n> > As I recall, CC='ccache cl.exe' didn't work because it didn't attempt to do any\n> > argument splitting.\n> \n> I tried it, and it works for me when building with ninja (compiling with\n> cl.exe). I assume you are using msbuild?\n\nApparently it works to write \"ccache.exe\" but not just \"ccache\", which is what\nI used before.\n\nIt seems to work by fooling meson, which intends to strip off the leading\n\"ccache\" but fails due to the \"exe\", but then happens to do what's desired.\n\nIf I'm not wrong, pgccache.exe + CCACHE_COMPILER=cl runs 30sec faster on\ncirrus. I suppose it's because windows is running cmd.exe on the\nCC=\"ccache.exe cl\".\n\nAlso, /O2 cuts ~3 minutes off the test time on cirrus, which seems worth it,\nexcept that it omits frame pointers, which probably breaks debuggability. And\nwhen I pass /Oy- to \"avoid omitting\" frame pointers, several tests crash...\n\n\n",
"msg_date": "Tue, 24 May 2022 17:17:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-24 17:17:47 -0500, Justin Pryzby wrote:\n> Also, /O2 cuts ~3 minutes off the test time on cirrus, which seems worth it,\n> except that it omits frame pointers, which probably breaks debuggability.\n\nIt likely also causes us to use the non-debug C runtime? Which IMO would be\nbad, it does detect quite a few problems that are otherwise hard to find.\n\n\n> And when I pass /Oy- to \"avoid omitting\" frame pointers, several tests crash...\n\nHuh. Do you get a backtrace containing anything remotely meaningful?\n\nThere's this helpful statement in the docs:\nhttps://docs.microsoft.com/en-us/cpp/build/reference/oy-frame-pointer-omission?view=msvc-170\n> If you specify a debug compiler option (/Z7, /Zi, /ZI), we recommend that you specify the /Oy- option after any other optimization compiler options.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 21:48:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "On Tue, May 24, 2022 at 09:48:17PM -0700, Andres Freund wrote:\n> > And when I pass /Oy- to \"avoid omitting\" frame pointers, several tests crash...\n> \n> Huh. Do you get a backtrace containing anything remotely meaningful?\n\nThey are interesting, but maybe not meaningful.\n\nI tested a bunch of different optimization cases, this is just one. But the\nother failures are similar.\n\nhttps://cirrus-ci.com/task/5710595927310336\ncompiled with /Z7 /O1, test world fails in 15min (compared to ~18min without\n/O1)\n\nChild-SP RetAddr Call Site\n000000b6`e27fed80 00007ffa`93b5b6ba pg_stat_statements!pg_stat_statements_reset_1_7+0x8a98\n000000b6`e27fedb0 00007ffa`93b5a731 pg_stat_statements!pg_stat_statements_reset_1_7+0x8a5e\n000000b6`e27fedf0 00007ffa`93b5a48c pg_stat_statements!pg_stat_statements_reset_1_7+0x7ad5\n000000b6`e27feeb0 00007ffa`93b517b5 pg_stat_statements!pg_stat_statements_reset_1_7+0x7830\n000000b6`e27feee0 00007ffa`93b52c6f pg_stat_statements!PG_init+0x7ad\n000000b6`e27fef80 00007ff6`5544ebc3 pg_stat_statements!pg_stat_statements_reset_1_7+0x13\n000000b6`e27fefb0 00007ff6`5546fa07 postgres!ExecEvalXmlExpr+0x9b3\n000000b6`e27ff040 00007ff6`5530639c postgres!ExecReScanResult+0x1ef\n000000b6`e27ff080 00007ff6`55306e91 postgres!ExecWithCheckOptions+0x4d0\n000000b6`e27ff0c0 00007ffa`93b52f0c postgres!standard_ExecutorRun+0x13d\n000000b6`e27ff140 00007ff6`550e636d pg_stat_statements!pg_stat_statements_reset_1_7+0x2b0\n000000b6`e27ff2c0 00007ff6`550e5ba0 postgres!PortalRunFetch+0x645\n000000b6`e27ff300 00007ff6`55013d31 postgres!PortalRun+0x24c\n000000b6`e27ff4e0 00007ff6`55010db9 postgres!die+0x1cad\n000000b6`e27ff5b0 00007ff6`5500e004 postgres!PostgresMain+0xb31\n000000b6`e27ff7a0 00007ff6`55001592 postgres!SubPostmasterMain+0x230\n000000b6`e27ff9d0 00007ff6`554c6ef0 postgres!main+0x2a2\n000000b6`e27ffbc0 00007ffa`91217974 postgres!pg_check_dir+0x68c\n000000b6`e27ffc00 00007ffa`9c84a2f1 KERNEL32!BaseThreadInitThunk+0x14\n000000b6`e27ffc30 
00000000`00000000 ntdll!RtlUserThreadStart+0x21\n\n2022-05-23 09:05:28.893 GMT postmaster[6644] LOG: server process (PID 6388) was terminated by exception 0xC0000354\n2022-05-23 09:05:28.893 GMT postmaster[6644] DETAIL: Failed process was running: SELECT pg_stat_statements_reset();\n\nChild-SP RetAddr Call Site\n0000004e`65ffbd40 00007ffa`93b66d3a basebackup_to_shell!PG_init+0x5c8c\n0000004e`65ffbd70 00007ffa`93b67898 basebackup_to_shell!PG_init+0x5c52\n0000004e`65ffbdb0 00007ffa`93b6817e basebackup_to_shell!PG_init+0x67b0\n0000004e`65ffbe10 00007ffa`93b656bb basebackup_to_shell!PG_init+0x7096\n0000004e`65ffbe40 00007ffa`93b65441 basebackup_to_shell!PG_init+0x45d3\n0000004e`65ffbea0 00007ffa`93b653d2 basebackup_to_shell!PG_init+0x4359\n0000004e`65ffbee0 00007ffa`93b6556e basebackup_to_shell!PG_init+0x42ea\n0000004e`65ffbf10 00007ffa`93b657be basebackup_to_shell!PG_init+0x4486\n0000004e`65ffbf90 00007ffa`93b616ce basebackup_to_shell!PG_init+0x46d6\n0000004e`65ffc010 00007ffa`93b61041 basebackup_to_shell!PG_init+0x5e6\n0000004e`65ffc040 00007ff6`5545be58 basebackup_to_shell+0x1041\n0000004e`65ffc070 00007ff6`55289171 postgres!bbsink_zstd_new+0x100\n0000004e`65ffc0b0 00007ff6`55288bd1 postgres!SendBaseBackup+0x2c8d\n0000004e`65ffc160 00007ff6`55288b68 postgres!SendBaseBackup+0x26ed\n0000004e`65ffce90 00007ff6`55288b68 postgres!SendBaseBackup+0x2684\n0000004e`65ffdbc0 00007ff6`552878d1 postgres!SendBaseBackup+0x2684\n0000004e`65ffe8f0 00007ff6`5528661f postgres!SendBaseBackup+0x13ed\n0000004e`65fff110 00007ff6`550ad66c postgres!SendBaseBackup+0x13b\n0000004e`65fff300 00007ff6`55010da9 postgres!exec_replication_command+0x3c0\n0000004e`65fff340 00007ff6`5500e004 postgres!PostgresMain+0xb21\n0000004e`65fff530 00007ff6`55001592 postgres!SubPostmasterMain+0x230\n0000004e`65fff760 00007ff6`554c6ef0 postgres!main+0x2a2\n0000004e`65fff950 00007ffa`91217974 postgres!pg_check_dir+0x68c\n0000004e`65fff990 00007ffa`9c84a2f1 KERNEL32!BaseThreadInitThunk+0x14\n0000004e`65fff9c0 
00000000`00000000 ntdll!RtlUserThreadStart+0x21\n\n2022-05-23 09:10:16.466 GMT [5404][postmaster] LOG: server process (PID 1132) was terminated by exception 0xC0000354\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 25 May 2022 22:26:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "On Sun, May 22, 2022 at 06:26:06PM -0500, Justin Pryzby wrote:\n> ccache 4.6.1 was released which allows compiling postgres\n\n4.6.1 is now in choco. This is required to use with the current msbuild\nprocess.\n\n> But with the existing build system, it's no faster anyway, I guess due to poor\n> use of parallelism.\n> https://cirrus-ci.com/task/5972008205811712\n\nActually, it cuts the msbuild compilation time in half [0] (which is not\nimpressive, but also not nothing). cache misses are slower, though.\nhttps://cirrus-ci.com/task/5926287305867264\n\n[0] My compiled ccache may have been a non-optimized build...\n\nI don't know how to do make this work for msbuild without naming the binary\nsomething other than ccache. Maybe that's fine, but better done on the OS\nimage, rather than in the CI config.\n\n\n",
"msg_date": "Wed, 25 May 2022 22:40:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "On Tue, May 24, 2022 at 09:48:17PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-05-24 17:17:47 -0500, Justin Pryzby wrote:\n> > Also, /O2 cuts ~3 minutes off the test time on cirrus, which seems worth it,\n> > except that it omits frame pointers, which probably breaks debuggability.\n> \n> It likely also causes us to use the non-debug C runtime? Which IMO would be\n> bad, it does detect quite a few problems that are otherwise hard to find.\n> \n> \n> > And when I pass /Oy- to \"avoid omitting\" frame pointers, several tests crash...\n> \n> Huh. Do you get a backtrace containing anything remotely meaningful?\n\nI looked at this again. The issue isn't caused by optimizations, but\n(apparently) by an absence of options in \"msbuild --buildtype plain\".\nAdding /MD or /MDd fixes the issue.\n\nThat's done in the existing build system here:\n\nsrc/tools/msvc/MSBuildProject.pm: runtime => 'MultiThreadedDebugDLL'\nsrc/tools/msvc/MSBuildProject.pm: runtime => 'MultiThreadedDLL'\n\nhttps://cirrus-ci.com/task/4895613501308928\n\nBTW: vcvarsall isn't needed in the \"check_world\" scripts.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 14 Jun 2022 16:08:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "On Tue, May 24, 2022 at 01:30:39PM -0700, Andres Freund wrote:\n> On 2022-05-24 14:52:02 -0500, Justin Pryzby wrote:\n> > > The spurious message should be fixed, of course. I suspect you dont need a\n> > > wrapper, you can just set CC='ccache cl.exe' or similar? Afaics it's not\n> > > meaningful to do 'CC=ccache.exe' alone, because then it'll interpret arguments\n> > > as ccache options, rather than compiler options.\n> > \n> > if meson didn't crash CC=ccache.exe might have worked, because I had set\n> > CCACHE_COMPILER.\n> \n> Did you report the issue? Should be simple enough to fix.\n> \n> I seriously doubt it's a good idea to use CCACHE_COMPILER - there's no way\n> meson (or autoconf or ..) can rely on the results of compiler tests that way,\n> since CCACHE_COMPILER can change at any time.\n\nThis updated patch doesn't use CCACHE_COMPILER.\n\ncache misses are several times slower (12 minute build time vs 2:30 with ninja,\nwithout ccache), so it's possible that can be *slower* if the hit ratio is\ninadequate. ninja on cirrus builds 3x faster with ccache, but msbuild is only\n~2x faster, so I recommend using it only with ninja.\n\nThere's still warts requires using \"plain\" with /Z7 /MDd.",
"msg_date": "Fri, 1 Jul 2022 14:18:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
},
{
"msg_contents": "Note that ccache 4.7 was released (and also uploaded to chocolatey).\nThat supports depend mode with MSVC.\n\nPCH made cache misses a lot less slow. However, I still haven't found\nanyt way to improve compilation speed much on cirrusci...\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 20 Oct 2022 09:17:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: ccache, MSVC, and meson"
}
] |
[
{
"msg_contents": "PostgreSQL version:PostgreSQL 14.0 on x86_64-pc-linux-gnu, compiled by gcc\r\n(GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\r\nPlatform information:Linux version 3.10.0-1127.el7.x86_64\r\n(mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat\r\n4.8.5-39) (GCC) ) #1 SMP Tue Mar 31 23:36:51 UTC 2020\r\n\r\nI created two tables for testing. One is remote table in database A and the\r\nother is foreign table in database B.\r\nThen i use INSERT statements with lo_import function to add data to remote\r\ntable.\r\n\r\nThe output i have got.\r\nThe result is remote table,pg_largeobject in database\r\nA,pg_largeobject_metadata in database A have correct data.\r\nBut,i don't find correct data in pg_largeobject and pg_largeobject_metadata\r\nin database B.\r\n\r\nMy operation steps are as follows:\r\n Both database A and database B:\r\n create extension postgres_fdw;\r\nselect * from pg_largeobject_metadata ;--check if exists any rows\r\nselect * from pg_largeobject;\r\n database A:\r\n CREATE TABLE oid_table (id INT NOT NULL, oid_1 oid, oid_2 oid);\r\n insert into oid_table values\r\n(1,lo_import('/home/highgo/pictures/bird.jpg'),lo_import('/home/highgo/pictures/pig.jpg'));--Two\r\nordinary files on the machine\r\nselect * from oid_table;\r\n database B:\r\n CREATE server srv_postgres_cn_0 FOREIGN data wrapper postgres_fdw\r\noptions(host '127.0.0.1', port '9000', dbname 'postgres');\r\n CREATE USER mapping FOR highgo server srv_postgres_cn_0 options(user\r\n'highgo', password '123456');\r\n CREATE FOREIGN TABLE oid_table_ft (id INT NOT NULL, oid_1 oid, oid_2\r\noid) server srv_postgres_cn_0 options(schema_name 'public', table_name\r\n'oid_table');\r\nselect * from oid_table_ft;\r\nselect lo_export(oid_1,'/usr/local/pgsql/out.jpg') from oid_table_ft where\r\nid=1;--the result is \"ERROR: large object xxx does not exist\"\r\n\r\ncomments :\r\nmy default databse is \"postgres\" and default user is \"highgo\" and I don't\r\nthink these 
will have an impact on this problem.\r\n\r\nThe output i expected:\r\npg_largeobject_metadata and pg_largeobject in both database A and database\r\nB should have rows.Shouldn't only in database A.So, i can use large object\r\nfunctions\r\nto operate large_objectin remote table or foreign table.\r\n\r\nPlease forgive me, English is not my mother tongue. If you have any doubts\r\nabout my description, please contact me, and I will reply to you at the\r\nfirst time. Thank you sincerely and look forward to your reply.",
"msg_date": "Mon, 23 May 2022 09:45:40 +0800",
"msg_from": "\"=?gb18030?B?U2FsYWRpbg==?=\" <jiaoshuntian@highgo.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw has insufficient support for large object"
},
{
"msg_contents": "On Mon, May 23, 2022 at 7:16 AM Saladin <jiaoshuntian@highgo.com> wrote:\n>\n> PostgreSQL version:PostgreSQL 14.0 on x86_64-pc-linux-gnu, compiled by gcc\n> (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit\n> Platform information:Linux version 3.10.0-1127.el7.x86_64\n> (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat\n> 4.8.5-39) (GCC) ) #1 SMP Tue Mar 31 23:36:51 UTC 2020\n>\n> I created two tables for testing. One is remote table in database A and the\n> other is foreign table in database B.\n> Then i use INSERT statements with lo_import function to add data to remote\n> table.\n>\n> The output i have got.\n> The result is remote table,pg_largeobject in database\n> A,pg_largeobject_metadata in database A have correct data.\n> But,i don't find correct data in pg_largeobject and pg_largeobject_metadata\n> in database B.\n>\n> My operation steps are as follows:\n> Both database A and database B:\n> create extension postgres_fdw;\n> select * from pg_largeobject_metadata ;--check if exists any rows\n> select * from pg_largeobject;\n> database A:\n> CREATE TABLE oid_table (id INT NOT NULL, oid_1 oid, oid_2 oid);\n> insert into oid_table values\n> (1,lo_import('/home/highgo/pictures/bird.jpg'),lo_import('/home/highgo/pictures/pig.jpg'));--Two\n> ordinary files on the machine\n> select * from oid_table;\n> database B:\n> CREATE server srv_postgres_cn_0 FOREIGN data wrapper postgres_fdw\n> options(host '127.0.0.1', port '9000', dbname 'postgres');\n> CREATE USER mapping FOR highgo server srv_postgres_cn_0 options(user\n> 'highgo', password '123456');\n> CREATE FOREIGN TABLE oid_table_ft (id INT NOT NULL, oid_1 oid, oid_2\n> oid) server srv_postgres_cn_0 options(schema_name 'public', table_name\n> 'oid_table');\n> select * from oid_table_ft;\n> select lo_export(oid_1,'/usr/local/pgsql/out.jpg') from oid_table_ft where\n> id=1;--the result is \"ERROR: large object xxx does not exist\"\n>\n> comments :\n> my default databse is 
\"postgres\" and default user is \"highgo\" and I don't\n> think these will have an impact on this problem.\n>\n> The output i expected:\n> pg_largeobject_metadata and pg_largeobject in both database A and database\n> B should have rows.Shouldn't only in database A.So, i can use large object\n> functions\n> to operate large_objectin remote table or foreign table.\n\nI don't think that the local pg_largeobject should maintain the\nforeign server's data, instead that the export should fetch the data\nfrom the remote's pg_largeobject table. Then I just checked inserting\ninto the foriegn from your test as shown below[1] and I noticed that\nthe insert is also importing the large object into the local\npg_largeobject instead of the remote server's pg_large object, which\nclearly seems broken to me. Basically, the actual row is inserted on\nthe remote server and the large object w.r.t. the same row is imported\nin local pg_largeobject.\n\ninsert into oid_table_ft values(1,lo_import('/home/highgo/pictures/bird.jpg'));\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 10:45:06 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
},
{
"msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> I don't think that the local pg_largeobject should maintain the\n> foreign server's data, instead that the export should fetch the data\n> from the remote's pg_largeobject table. Then I just checked inserting\n> into the foriegn from your test as shown below[1] and I noticed that\n> the insert is also importing the large object into the local\n> pg_largeobject instead of the remote server's pg_large object, which\n> clearly seems broken to me. Basically, the actual row is inserted on\n> the remote server and the large object w.r.t. the same row is imported\n> in local pg_largeobject.\n\n> insert into oid_table_ft values(1,lo_import('/home/highgo/pictures/bird.jpg'));\n\nFor this example to \"work\", lo_import() would have to somehow know\nthat its result would get inserted into some foreign table and\nthen go create the large object on that table's server instead\nof locally.\n\nThis is unlikely to happen, for about ten different reasons that\nyou should have no trouble understanding if you stop to think\nabout it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 01:24:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
},
{
"msg_contents": "On Mon, May 23, 2022 at 10:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > I don't think that the local pg_largeobject should maintain the\n> > foreign server's data, instead that the export should fetch the data\n> > from the remote's pg_largeobject table. Then I just checked inserting\n> > into the foriegn from your test as shown below[1] and I noticed that\n> > the insert is also importing the large object into the local\n> > pg_largeobject instead of the remote server's pg_large object, which\n> > clearly seems broken to me. Basically, the actual row is inserted on\n> > the remote server and the large object w.r.t. the same row is imported\n> > in local pg_largeobject.\n>\n> > insert into oid_table_ft values(1,lo_import('/home/highgo/pictures/bird.jpg'));\n>\n> For this example to \"work\", lo_import() would have to somehow know\n> that its result would get inserted into some foreign table and\n> then go create the large object on that table's server instead\n> of locally.\n\nYeah that makes sense. The lo_import() is just running as an\nindependent function to import the object into pg_largeobject and\nreturn the Oid so definitely it has no business to know where that Oid\nwill be stored :)\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 11:05:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
},
{
    "msg_contents": "On Sunday, May 22, 2022, Saladin <jiaoshuntian@highgo.com> wrote:\n\n>\n> The output i expected:\n> pg_largeobject_metadata and pg_largeobject in both database A and database\n> B should have rows.Shouldn't only in database A.So, i can use large object\n> functions\n> to operate large_objectin remote table or foreign table.\n>\n\nThis is an off-topic email for the -hackers mailing list. -general is the\nappropriate list.\n\nYour expectation is simply unsupported by anything in the documentation.\n\nIf you want to do what you say you will need to use dblink (and the file\nneeds to be accessible to the remote server directly) and directly execute\nentire queries on the remote server, the FDW infrastructure simply does not\nwork in the way you are expecting.\n\nOr just use bytea.\n\nDavid J.",
"msg_date": "Sun, 22 May 2022 22:41:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
},
{
"msg_contents": "\"=?gb18030?B?U2FsYWRpbg==?=\" <jiaoshuntian@highgo.com> writes:\n> The output i expected:\n> pg_largeobject_metadata and pg_largeobject in both database A and database\n> B should have rows.Shouldn't only in database A.So, i can use large object\n> functions\n> to operate large_objectin remote table or foreign table.\n\nThe big picture here is that Postgres is a hodgepodge of features\nthat were developed at different times and with different quality\nstandards, over a period that's now approaching forty years.\nSome of these features interoperate better than others. Large\nobjects, in particular, are largely a mess with a lot of issues\nsuch as not having a well-defined garbage collection mechanism.\nThey do not interoperate well with foreign tables, or several\nother things, and you will not find anybody excited about putting\neffort into fixing that. We're unlikely to remove large objects\naltogether, because some people use them successfully and we're not\nabout breaking cases that work today. But they're fundamentally\nincompatible with use in foreign tables in the way you expect,\nand that is not likely to get fixed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 May 2022 02:21:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
},
{
"msg_contents": "On Mon, May 23, 2022 at 1:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The big picture here is that Postgres is a hodgepodge of features\n> that were developed at different times and with different quality\n> standards, over a period that's now approaching forty years.\n> Some of these features interoperate better than others. Large\n> objects, in particular, are largely a mess with a lot of issues\n> such as not having a well-defined garbage collection mechanism.\n> They do not interoperate well with foreign tables, or several\n> other things, and you will not find anybody excited about putting\n> effort into fixing that. We're unlikely to remove large objects\n> altogether, because some people use them successfully and we're not\n> about breaking cases that work today.\n\nWe could possibly have a category of such features and label them\n\"obsolete\", where we don't threaten to remove them someday (i.e.\n\"deprecated\"), but we are not going to improve them in any meaningful\nway, and users would be warned about using them in new projects if\nbetter alternatives are available.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 May 2022 13:30:47 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
},
{
"msg_contents": "On Mon, May 23, 2022 at 2:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The big picture here is that Postgres is a hodgepodge of features\n> that were developed at different times and with different quality\n> standards, over a period that's now approaching forty years.\n> Some of these features interoperate better than others. Large\n> objects, in particular, are largely a mess with a lot of issues\n> such as not having a well-defined garbage collection mechanism.\n\nWell, in one sense, the garbage mechanism is pretty well-defined:\nobjects get removed when you explicitly remove them. Given that\nPostgreSQL has no idea that the value you store in your OID column has\nany relationship with the large object that is identified by that OID,\nI don't see how it could work any other way. The problem isn't really\nthat the behavior is unreasonable or even badly-designed. The real\nissue is that it's not what people want.\n\nI used to think that what people wanted was something like TOAST.\nAfter all, large objects can be a lot bigger than toasted values, and\nthat size limitation might be a problem for some people. But then I\nrealized that there's a pretty important behavioral difference: when\nyou fetch a row that contains an OID that happens to identify a large\nobject, you can look at the rest of the row and then decide whether or\nnot you want to fetch the large object. If you just use a regular\ncolumn, with a data type of text or bytea, and store really big values\nin there, you don't have that option: the server sends you all the\ndata whether you want it or not. Similarly, on the storage side, you\ncan't send the value to the server a chunk at a time, which means you\nhave to buffer the whole value in memory on the client side first,\nwhich might be inconvenient.\n\nI don't think that allowing larger toasted values would actually be\nthat hard. 
We couldn't do it with varlena, but we could introduce a\nnew negative typlen that corresponds to some new representation that\npermits larger values. That would require sorting out various places\nwhere we randomly limit things to 1GB, but I think that's pretty\ndoable. However, I'm not sure that would really solve any problem,\nbecause who wants to malloc(1TB) in your application, and then\nprobably again in libpq, to schlep that value to the server -- and\nthen do the same thing in reverse when you get the value back? Without\nsome notion of certain values that are accessed via streaming rather\nthan monolithically, I can't really imagine getting to a satisfying\nplace.\n\nI realize I've drifted away from the original topic a bit. I just\nthink it's interesting to think about what a better mechanism might\nlook like.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 May 2022 18:16:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw has insufficient support for large object"
}
] |
[
{
"msg_contents": "Hi hackers,\nThanks to all the developers. The attached patch updates the manual for the pg_database catalog.\nThe current pg_database view definition is missing the datlocprovider column. The attached patch adds this column info.\nhttps://www.postgresql.org/docs/15/catalog-pg-database.html\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Mon, 23 May 2022 02:00:18 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "PG15 beta1 fix pg_database view document"
},
{
"msg_contents": "On Mon, May 23, 2022 at 02:00:18AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> Thanks to all the developers. The attached patch updates the manual\n> for the pg_database catalog.\n> The current pg_database view definition is missing the\n> datlocprovider column. The attached patch adds this column info. \n> https://www.postgresql.org/docs/15/catalog-pg-database.html\n\nIndeed. I have checked the rest of the catalog headers for any\ninconsistencies with the docs and what you have noticed here is the\nonly one.\n\n+ <structfield>datlocprovider</structfield> <type>char</type>\n+ </para>\n+ <para>\n+ Provider of the collation for this database:\n<literal>c</literal> = libc, <literal>i</literal> = icu\n+ </para></entry>\n\nICU needs to be upper-case if you are referring to the library name.\nHere, my guess is that you are referring to the value that can be\npassed down to the command? You could use lower-case terms like on\nthe CREATE COLLATION page, but I would put these within a <literal>\nmarkup. Anyway, this formulation is incorrect.\n--\nMichael",
"msg_date": "Mon, 23 May 2022 13:25:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_database view document"
},
{
"msg_contents": "On 23.05.22 06:25, Michael Paquier wrote:\n> Indeed. I have checked the rest of the catalog headers for any\n> inconsistencies with the docs and what you have noticed here is the\n> only one.\n\nCommitted. Thanks for checking.\n\n> + <structfield>datlocprovider</structfield> <type>char</type>\n> + </para>\n> + <para>\n> + Provider of the collation for this database:\n> <literal>c</literal> = libc, <literal>i</literal> = icu\n> + </para></entry>\n> \n> ICU needs to be upper-case if you are referring to the library name.\n> Here, my guess is that you are referring to the value that can be\n> passed down to the command? You could use lower-case terms like on\n> the CREATE COLLATION page, but I would put these within a <literal>\n> markup. Anyway, this formulation is incorrect.\n\nThis is presumably copied from pg_collation.collprovider, which uses the \nsame markup. If we want to improve it, they should be changed together.\n\n\n",
"msg_date": "Mon, 23 May 2022 10:37:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PG15 beta1 fix pg_database view document"
}
] |
[
{
    "msg_contents": "Hi Team,\n\nAppreciate your time to look into this.\n\nTo select from another database I try to use dblink or fdw extension of\nPostgres, like this:\n\nmesods =>CREATE EXTENSION dblink;\n\nCREATE EXTENSION\n\nmesods => CREATE EXTENSION postgres_fdw;\n\nCREATE EXTENSION\n\nmesods=> select dblink_connect('conn_db_link','foreign_server') ;\n\n dblink_connect\n\n----------------\n\n OK\n\n(1 row)\n\n\n\nmesods=> select * from dblink('foreign_server','select * from ods_sch.emp')\nAS x(a int,b text);\n\n a | b\n\n---+---------\n\n 1 | Gohan\n\n 1 | Piccolo\n\n 1 | Tien\n\n(3 rows)\n\n\nThis works fine when I specify which columns I want to select.\n\n\nIs there something that postgres has without specifying the column names we\ncan fetch the data from dblink.\n\n\nAwaiting your reply.\n\n\nThank you.\n\n\nRegards,\n\nChirag Karkera",
"msg_date": "Mon, 23 May 2022 13:46:29 +0530",
"msg_from": "Chirag Karkera <chiragkrkr102@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use Dblink without column defination"
},
{
    "msg_contents": "Hi Team,\n\nAny update on this?\n\nThank You.\n\nRegards,\nChirag Karkera\n\nOn Mon, 23 May, 2022, 1:46 pm Chirag Karkera, <chiragkrkr102@gmail.com>\nwrote:\n\n> Hi Team,\n>\n> Appreciate your time to look into this.\n>\n> To select from another database I try to use dblink or fdw extension of\n> Postgres, like this:\n>\n> mesods =>CREATE EXTENSION dblink;\n>\n> CREATE EXTENSION\n>\n> mesods => CREATE EXTENSION postgres_fdw;\n>\n> CREATE EXTENSION\n>\n> mesods=> select dblink_connect('conn_db_link','foreign_server') ;\n>\n> dblink_connect\n>\n> ----------------\n>\n> OK\n>\n> (1 row)\n>\n>\n>\n> mesods=> select * from dblink('foreign_server','select * from\n> ods_sch.emp') AS x(a int,b text);\n>\n> a | b\n>\n> ---+---------\n>\n> 1 | Gohan\n>\n> 1 | Piccolo\n>\n> 1 | Tien\n>\n> (3 rows)\n>\n>\n> This works fine when I specify which columns I want to select.\n>\n>\n> Is there something that postgres has without specifying the column names\n> we can fetch the data from dblink.\n>\n>\n> Awaiting your reply.\n>\n>\n> Thank you.\n>\n>\n> Regards,\n>\n> Chirag Karkera\n>",
"msg_date": "Tue, 24 May 2022 18:37:49 +0530",
"msg_from": "Chirag Karkera <chiragkrkr102@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use Dblink without column defination"
},
{
"msg_contents": "On 23.05.22 10:16, Chirag Karkera wrote:\n> mesods=> select * from dblink('foreign_server','select * from \n> ods_sch.emp') AS x(a int,b text);\n> \n> a | b\n> \n> ---+---------\n> \n> 1 | Gohan\n> \n> 1 | Piccolo\n> \n> 1 | Tien\n> \n> (3 rows)\n> \n> This works fine when I specify which columns I want to select.\n> \n> Is there something that postgres has without specifying the column names \n> we can fetch the data from dblink.\n\nNot in dblink. You could use foreign-data wrappers, which have a \ndifferent interface, which you might like better.\n\n\n",
"msg_date": "Tue, 24 May 2022 16:10:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Use Dblink without column defination"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems like there are some duplications of 'the' in pgstat.c and \npgstat_internal.h.\nAttaching a tiny patch to fix them.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Mon, 23 May 2022 22:46:22 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "fix typos in storing statistics in shared memory"
},
{
"msg_contents": "On Mon, May 23, 2022 at 10:46:22PM +0900, torikoshia wrote:\n> It seems like there are some duplications of 'the' in pgstat.c and\n> pgstat_internal.h.\n> Attaching a tiny patch to fix them.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 23 May 2022 10:22:36 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typos in storing statistics in shared memory"
},
{
"msg_contents": "On Mon, May 23, 2022 at 10:22:36AM -0700, Nathan Bossart wrote:\n> On Mon, May 23, 2022 at 10:46:22PM +0900, torikoshia wrote:\n>> It seems like there are some duplications of 'the' in pgstat.c and\n>> pgstat_internal.h.\n>> Attaching a tiny patch to fix them.\n> \n> LGTM\n\nThanks Torikoshi-san, fixed.\n--\nMichael",
"msg_date": "Tue, 24 May 2022 11:02:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fix typos in storing statistics in shared memory"
}
] |
[
{
"msg_contents": "Normal aggregate and partition wise aggregate have a big difference rows cost:\n\nbegin;\ncreate table t1(id integer, name text) partition by hash(id);\ncreate table t1_0 partition of t1 for values with(modulus 3, remainder 0);\ncreate table t1_1 partition of t1 for values with(modulus 3, remainder 1);\ncreate table t1_2 partition of t1 for values with(modulus 3, remainder 2);\ncommit;\n\nnormal aggregate rows cost is 200.\nexplain (verbose)\nselect count(1) from t1 group by id;\nHashAggregate (cost=106.20..108.20 rows=200 width=12) --here rows is 200\n Output: count(1), t1.id\n Group Key: t1.id\n -> Append (cost=0.00..87.15 rows=3810 width=4)\n -> Seq Scan on public.t1_0 t1_1 (cost=0.00..22.70 rows=1270 width=4)\n Output: t1_1.id\n -> Seq Scan on public.t1_1 t1_2 (cost=0.00..22.70 rows=1270 width=4)\n Output: t1_2.id\n -> Seq Scan on public.t1_2 t1_3 (cost=0.00..22.70 rows=1270 width=4)\n Output: t1_3.id\n\nAnd partition wise aggregate rows cost is 600\nset enable_partitionwise_aggregate = on;\nexplain (verbose)\nselect count(1) from t1 group by id;\nAppend (cost=29.05..96.15 rows=600 width=12) --here rows is 600\n -> HashAggregate (cost=29.05..31.05 rows=200 width=12) --this rows looks like same as normal aggregate\n Output: count(1), t1.id\n Group Key: t1.id\n -> Seq Scan on public.t1_0 t1 (cost=0.00..22.70 rows=1270 width=4)\n Output: t1.id\n -> HashAggregate (cost=29.05..31.05 rows=200 width=12)\n Output: count(1), t1_1.id\n Group Key: t1_1.id\n -> Seq Scan on public.t1_1 (cost=0.00..22.70 rows=1270 width=4)\n Output: t1_1.id\n -> HashAggregate (cost=29.05..31.05 rows=200 width=12)\n Output: count(1), t1_2.id\n Group Key: t1_2.id\n -> Seq Scan on public.t1_2 (cost=0.00..22.70 rows=1270 width=4)\n Output: t1_2.id\n\nSource code is 15beta1(7fdbdf204920ac279f280d0a8e96946fdaf41aef)\n\n\n\n",
"msg_date": "Tue, 24 May 2022 11:38:14 +0800",
"msg_from": "\"bucoo\" <bucoo@sohu.com>",
"msg_from_op": true,
"msg_subject": "partition wise aggregate wrong rows cost"
},
{
"msg_contents": "On Tue, 24 May 2022 at 15:38, bucoo <bucoo@sohu.com> wrote:\n>\n> Normal aggregate and partition wise aggregate have a big difference rows cost:\n\n> explain (verbose)\n> select count(1) from t1 group by id;\n> HashAggregate (cost=106.20..108.20 rows=200 width=12) --here rows is 200\n\n> set enable_partitionwise_aggregate = on;\n> explain (verbose)\n> select count(1) from t1 group by id;\n> Append (cost=29.05..96.15 rows=600 width=12) --here rows is 600\n\nI wouldn't say this is a bug. Could you not say that they're both\nwrong given that your tables are empty?\n\nWhat's going on here is that estimate_num_groups() is just returning\n200, which is what it returns when there are no statistics to give any\nindication of a better value. 200 is returned no matter if the\nestimate is for a single partition or the partitioned table. For the\npartition-wise aggregate case, the 3 individual 200 estimates are just\nsummed up by the Append costing code to give 600.\n\nThe only way we could really do anything different here would be to\nhave estimate_num_groups() return a default value based on the number\nof input rows. However, that 200 default is pretty long standing. We'd\nneed to consider quite a bit more than this case before we could\nrealistically consider changing it.\n\nFor tables that are being created and queried quickly after, we\nnormally tell people to run ANALYZE on the given tables to prevent\nthis sort of thing.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 May 2022 15:58:12 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: partition wise aggregate wrong rows cost"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 24 May 2022 at 15:38, bucoo <bucoo@sohu.com> wrote:\n>> Normal aggregate and partition wise aggregate have a big difference rows cost:\n\n> I wouldn't say this is a bug. Could you not say that they're both\n> wrong given that your tables are empty?\n\nWe try fairly hard to ensure that the rowcount estimate for a given\nrelation does not vary across paths, so I concur with the OP that\nthis is a bug. Having said that, I'm not sure that the consequences\nare significant. As you say, the estimates seem to get a lot closer\nas soon as the table has some statistics. (But nonetheless, they\nare not identical, so it's still a bug.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 00:16:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: partition wise aggregate wrong rows cost"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nThe DefineCustomStringVariable function(or any\nother DefineCustomXXXVariable) has a short_desc parameter that can be\nNULL and it's not apparent that this will lead to a segfault when SHOW ALL\nis used.\nThis happens because the ShowAllGUCConfig function expects a non-NULL\nshort_desc.\n\nThis happened for the Supabase supautils extension(\nhttps://github.com/supabase/supautils/issues/24) and any other extension\nthat uses the DefineCustomXXXVariable has the same bug risk.\n\nThis patch does an Assert on the short_desc(also on the name as an extra\nmeasure), so a postgres built with --enable-cassert can prevent the above\nissue.\n\n---\nSteve Chavez\nEngineering at https://supabase.com/",
"msg_date": "Mon, 23 May 2022 23:39:16 -0500",
"msg_from": "Steve Chavez <steve@supabase.io>",
"msg_from_op": true,
"msg_subject": "Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Mon, May 23, 2022 at 11:39:16PM -0500, Steve Chavez wrote:\n> The DefineCustomStringVariable function(or any\n> other DefineCustomXXXVariable) has a short_desc parameter that can be\n> NULL and it's not apparent that this will lead to a segfault when SHOW ALL\n> is used.\n> This happens because the ShowAllGUCConfig function expects a non-NULL\n> short_desc.\n> \n> This happened for the Supabase supautils extension(\n> https://github.com/supabase/supautils/issues/24) and any other extension\n> that uses the DefineCustomXXXVariable has the same bug risk.\n> \n> This patch does an Assert on the short_desc(also on the name as an extra\n> measure), so a postgres built with --enable-cassert can prevent the above\n> issue.\n\nI would actually ERROR on this so that we aren't relying on\n--enable-cassert builds to catch it. That being said, if there's no strong\nreason to enforce that a short description be provided, then why not adjust\nShowAllGUCConfig() to set that column to NULL when short_desc is missing?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 May 2022 11:41:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Tue, May 24, 2022 at 11:41:49AM -0700, Nathan Bossart wrote:\n> I would actually ERROR on this so that we aren't relying on\n> --enable-cassert builds to catch it. That being said, if there's no strong\n> reason to enforce that a short description be provided, then why not adjust\n> ShowAllGUCConfig() to set that column to NULL when short_desc is missing?\n\nWell, issuing an ERROR on the stable branches would be troublesome for\nextension developers when reloading after a minor update if they did\nnot set their short_desc in a custom GUC. So, while I'd like to\nencourage the use of short_desc, using your suggestion to make the\ncode more flexible with NULL is fine by me. GetConfigOptionByNum()\ndoes that for long_desc by the way, meaning that we also have a\nproblem there on a build with --enable-nls for short_desc, no?\n--\nMichael",
"msg_date": "Wed, 25 May 2022 10:20:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "Thank you for the reviews Nathan, Michael.\n\nI agree with handling NULL in ShowAllGUCConfig() instead.\n\nI've attached the updated patch.\n\n--\nSteve Chavez\nEngineering at https://supabase.com/\n\nOn Tue, 24 May 2022 at 20:21, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, May 24, 2022 at 11:41:49AM -0700, Nathan Bossart wrote:\n> > I would actually ERROR on this so that we aren't relying on\n> > --enable-cassert builds to catch it. That being said, if there's no\n> strong\n> > reason to enforce that a short description be provided, then why not\n> adjust\n> > ShowAllGUCConfig() to set that column to NULL when short_desc is missing?\n>\n> Well, issuing an ERROR on the stable branches would be troublesome for\n> extension developers when reloading after a minor update if they did\n> not set their short_desc in a custom GUC. So, while I'd like to\n> encourage the use of short_desc, using your suggestion to make the\n> code more flexible with NULL is fine by me. GetConfigOptionByNum()\n> does that for long_desc by the way, meaning that we also have a\n> problem there on a build with --enable-nls for short_desc, no?\n> --\n> Michael\n>",
"msg_date": "Wed, 25 May 2022 00:05:55 -0500",
"msg_from": "Steve Chavez <steve@supabase.io>",
"msg_from_op": true,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Wed, May 25, 2022 at 12:05:55AM -0500, Steve Chavez wrote:\n> Thank you for the reviews Nathan, Michael.\n> \n> I agree with handling NULL in ShowAllGUCConfig() instead.\n> \n> I've attached the updated patch.\n\nShouldn't the same check as extra_desc be done in\nGetConfigOptionByNum() for short_desc (point of upthread)? See\n3ac7d024, though -fsanitize=undefined does not apply here, I guess.\n--\nMichael",
"msg_date": "Wed, 25 May 2022 14:31:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "Hi,\n\nOn 2022-05-24 11:41:49 -0700, Nathan Bossart wrote:\n> On Mon, May 23, 2022 at 11:39:16PM -0500, Steve Chavez wrote:\n> > The DefineCustomStringVariable function(or any\n> > other DefineCustomXXXVariable) has a short_desc parameter that can be\n> > NULL and it's not apparent that this will lead to a segfault when SHOW ALL\n> > is used.\n> > This happens because the ShowAllGUCConfig function expects a non-NULL\n> > short_desc.\n> > \n> > This happened for the Supabase supautils extension(\n> > https://github.com/supabase/supautils/issues/24) and any other extension\n> > that uses the DefineCustomXXXVariable has the same bug risk.\n> > \n> > This patch does an Assert on the short_desc(also on the name as an extra\n> > measure), so a postgres built with --enable-cassert can prevent the above\n> > issue.\n> \n> I would actually ERROR on this so that we aren't relying on\n> --enable-cassert builds to catch it.\n\nHow about adding pg_nonnull(...) (ending up as __attribute__((nonnull(...))?\nThen code passing NULLs would get compiler warnings? It'd be useful in quite a\nfew more places.\n\n\n> That being said, if there's no strong reason to enforce that a short\n> description be provided, then why not adjust ShowAllGUCConfig() to set that\n> column to NULL when short_desc is missing?\n\nThere's a bunch more places that'd need to be adjusted, if we go that way. I\ndon't really have an opinion on it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 May 2022 23:17:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Tue, May 24, 2022 at 11:17:39PM -0700, Andres Freund wrote:\n> On 2022-05-24 11:41:49 -0700, Nathan Bossart wrote:\n>> I would actually ERROR on this so that we aren't relying on\n>> --enable-cassert builds to catch it.\n> \n> How about adding pg_nonnull(...) (ending up as __attribute__((nonnull(...))?\n> Then code passing NULLs would get compiler warnings? It'd be useful in quite a\n> few more places.\n\nI attached an attempt at this for the \"name\" and \"valueAddr\" arguments for\nthe DefineCustomXXXVariable functions. It looked like nonnull was\nsupported by GCC and Clang, but I haven't looked too closely to see whether\nwe need version checks as well.\n\n>> That being said, if there's no strong reason to enforce that a short\n>> description be provided, then why not adjust ShowAllGUCConfig() to set that\n>> column to NULL when short_desc is missing?\n> \n> There's a bunch more places that'd need to be adjusted, if we go that way. I\n> don't really have an opinion on it.\n\nI looked around and didn't see anywhere else obvious that needed adjustment\nbesides what Michael pointed out (3ac7d024). Am I missing something?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 26 May 2022 14:45:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Thu, May 26, 2022 at 02:45:50PM -0700, Nathan Bossart wrote:\n> On Tue, May 24, 2022 at 11:17:39PM -0700, Andres Freund wrote:\n>> On 2022-05-24 11:41:49 -0700, Nathan Bossart wrote:\n>>> I would actually ERROR on this so that we aren't relying on\n>>> --enable-cassert builds to catch it.\n>> \n>> How about adding pg_nonnull(...) (ending up as __attribute__((nonnull(...))?\n>> Then code passing NULLs would get compiler warnings? It'd be useful in quite a\n>> few more places.\n> \n> I attached an attempt at this for the \"name\" and \"valueAddr\" arguments for\n> the DefineCustomXXXVariable functions. It looked like nonnull was\n> supported by GCC and Clang, but I haven't looked too closely to see whether\n> we need version checks as well.\n\nAdding pg_attribute_nonnull() is a neat idea. It looks like nonnull\nwas added in GCC 4.0, and I can see it first appearing in clang 3.7.\nThe only buildfarm member claiming to use a version of clang older\nthan that is dangomushi, aka 3.6.0. That's my machine, and the\ninformation on the buildfarm website is outdated as the version of\nclang available there is 13.0 as of today. I think that we are going\nto run into problems with buildfarm member protosciurus, running\nSolaris 10 under sparc? It claims to use gcc 3.4.3. I would worry\nalso about prairiedog, we've hard our share of compatibility issues\nwith this one in the past. It claims to use gcc 4.0.1 but Apple has\nits own idea of versioning, and that's our oldest macos member.\n\n>>> That being said, if there's no strong reason to enforce that a short\n>>> description be provided, then why not adjust ShowAllGUCConfig() to set that\n>>> column to NULL when short_desc is missing?\n>> \n>> There's a bunch more places that'd need to be adjusted, if we go that way. I\n>> don't really have an opinion on it.\n> \n> I looked around and didn't see anywhere else obvious that needed adjustment\n> besides what Michael pointed out (3ac7d024). 
Am I missing\n> something?\n\nI don't think so. help_config.c is able to handle NULL for the short\ndescription already. \n\nFWIW, I would be fine to backpatch the NULL handling for short_desc,\nwhile treating the addition of nonnull as a HEAD-only change.\n--\nMichael",
"msg_date": "Fri, 27 May 2022 11:54:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> FWIW, I would be fine to backpatch the NULL handling for short_desc,\n> while treating the addition of nonnull as a HEAD-only change.\n\nYeah, sounds about right to me. My guess is that we will need\na configure check for nonnull, but perhaps I'm wrong.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 May 2022 23:01:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Thu, May 26, 2022 at 11:01:44PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> FWIW, I would be fine to backpatch the NULL handling for short_desc,\n>> while treating the addition of nonnull as a HEAD-only change.\n> \n> Yeah, sounds about right to me. My guess is that we will need\n> a configure check for nonnull, but perhaps I'm wrong.\n\nMakes sense. Here's a new patch set. 0001 is the part intended for\nback-patching, and 0002 is the rest (i.e., adding pg_attribute_nonnull()).\nI switched to using __has_attribute to discover whether nonnull was\nsupported, as that seemed cleaner. I didn't see any need for a new\nconfigure check, but maybe I am missing something.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 27 May 2022 10:43:17 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Fri, May 27, 2022 at 10:43:17AM -0700, Nathan Bossart wrote:\n> Makes sense. Here's a new patch set. 0001 is the part intended for\n> back-patching, and 0002 is the rest (i.e., adding pg_attribute_nonnull()).\n> I switched to using __has_attribute to discover whether nonnull was\n\nOkay, I have looked at 0001 this morning and applied it down to 12.\nThe change in GetConfigOptionByNum() is not required in 10 and 11, as\nthe strings of pg_show\\all_settings() have begun to be translated in\n12~.\n\n> supported, as that seemed cleaner. I didn't see any need for a new\n> configure check, but maybe I am missing something.\n\nAnd I've learnt today that we enforce a definition of __has_attribute\nat the top of c.h, and that we already rely on that. So I agree that\nwhat you are doing in 0002 should be enough. Should we wait until 16~\nopens for business though? I don't see a strong argument to push\nforward with that now that we are in beta mode on HEAD.\n--\nMichael",
"msg_date": "Sat, 28 May 2022 12:26:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> And I've learnt today that we enforce a definition of __has_attribute\n> at the top of c.h, and that we already rely on that. So I agree that\n> what you are doing in 0002 should be enough. Should we wait until 16~\n> opens for business though? I don't see a strong argument to push\n> forward with that now that we are in beta mode on HEAD.\n\nAgreed. This part isn't a bug fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 May 2022 23:28:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Sat, May 28, 2022 at 12:26:34PM +0900, Michael Paquier wrote:\n> On Fri, May 27, 2022 at 10:43:17AM -0700, Nathan Bossart wrote:\n>> Makes sense. Here's a new patch set. 0001 is the part intended for\n>> back-patching, and 0002 is the rest (i.e., adding pg_attribute_nonnull()).\n>> I switched to using __has_attribute to discover whether nonnull was\n> \n> Okay, I have looked at 0001 this morning and applied it down to 12.\n> The change in GetConfigOptionByNum() is not required in 10 and 11, as\n> the strings of pg_show\\all_settings() have begun to be translated in\n> 12~.\n\nThanks!\n\n>> supported, as that seemed cleaner. I didn't see any need for a new\n>> configure check, but maybe I am missing something.\n> \n> And I've learnt today that we enforce a definition of __has_attribute\n> at the top of c.h, and that we already rely on that. So I agree that\n> what you are doing in 0002 should be enough. Should we wait until 16~\n> opens for business though? I don't see a strong argument to push\n> forward with that now that we are in beta mode on HEAD.\n\nYeah, I see no reason that this should go into v15. I created a new\ncommitfest entry so that this isn't forgotten:\n\n\thttps://commitfest.postgresql.org/38/3655/\n\nAnd I've reposted 0002 here so that we get some cfbot coverage in the\nmeantime.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 28 May 2022 05:50:18 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Sat, May 28, 2022 at 05:50:18AM -0700, Nathan Bossart wrote:\n> Yeah, I see no reason that this should go into v15. I created a new\n> commitfest entry so that this isn't forgotten:\n> \n> \thttps://commitfest.postgresql.org/38/3655/\n> \n> And I've reposted 0002 here so that we get some cfbot coverage in the\n> meantime.\n\nNow that v16 is open for business, I have been able to look at it\nagain, and applied the patch on HEAD. My apologies for the wait.\n--\nMichael",
"msg_date": "Sat, 2 Jul 2022 12:33:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
},
{
"msg_contents": "On Sat, Jul 02, 2022 at 12:33:28PM +0900, Michael Paquier wrote:\n> Now that v16 is open for business, I have been able to look at it\n> again, and applied the patch on HEAD. My apologies for the wait.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 20:40:20 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert name/short_desc to prevent SHOW ALL segfault"
}
] |
[
{
"msg_contents": "Hi all,\n(Adding Andrew Dunstan in CC.)\n\nI have been toying with $subject, trying to improve the ways to test\npg_upgrade across different major versions as perl makes that easier.\nThe buildfarm does three things to allow such tests to work (see\nTestUpgradeXversion.pm):\n- Apply a filter to the dumps generated to make them perfectly equal\nas the set of regexps on the SQL dumps (removal of empty lines and\ncomments in the SQL dumps as arguments of a diff command).\n- Apply diffs dumps to remove or modify objects, to avoid\ninconsistencies, which is what upgrade_adapt.sql does in the tree.\n- Add tweaks to the dump commands used, like --extra-float-digits=0\nwhen testing with a version <= 11 as origin (aka c6f9464b in the\nbuildfarm client).\n\nAttached is a basic patch to show what can be used to improve the TAP\ntests of pg_upgrade in this area, with contents coming mostly from the\nbuildfarm client. The end picture would be to allow all those tests\nto use the core code, rather than duplicating that in the buildfarm\nclient. This reduces a lot the amount of noise that can be seen when\ncomparing the dumps taken (the tests pass with v14) while remaining\nsimple, down to v11, so that could be a first step. There are a\ncouple of things where I am not sure how the buildfarm handles things,\nbut perhaps the dumps of installcheck have been tweaked to ignore such\ncases? Here is an exhaustive list:\n- multirange_type_name when using PG <= v13 as origin, for CREATE\nTYPE.\n- CREATE/ALTER PROCEDURE append IN to the list of parameters dumped,\nwhen using PG <= v13 as origin.\n- CREATE OPERATOR CLASS and ALTER OPERATOR FAMILY, where FUNCTION 2 is\nmoved from one command to the other.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 24 May 2022 15:03:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Tue, May 24, 2022 at 03:03:28PM +0900, Michael Paquier wrote:\n> (Adding Andrew Dunstan in CC.)\n> \n> I have been toying with $subject, trying to improve the ways to test\n> pg_upgrade across different major versions as perl makes that easier.\n> The buildfarm does three things to allow such tests to work (see\n> TestUpgradeXversion.pm):\n\nAnd with Andrew in CC, that's better ;p\n--\nMichael",
"msg_date": "Wed, 25 May 2022 10:35:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Wed, May 25, 2022 at 10:35:57AM +0900, Michael Paquier wrote:\n> On Tue, May 24, 2022 at 03:03:28PM +0900, Michael Paquier wrote:\n>> (Adding Andrew Dunstan in CC.)\n>> \n>> I have been toying with $subject, trying to improve the ways to test\n>> pg_upgrade across different major versions as perl makes that easier.\n>> The buildfarm does three things to allow such tests to work (see\n>> TestUpgradeXversion.pm):\n\nRebased to cope with the recent changes in this area.\n--\nMichael",
"msg_date": "Thu, 9 Jun 2022 16:49:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "\nOn 2022-06-09 Th 03:49, Michael Paquier wrote:\n> On Wed, May 25, 2022 at 10:35:57AM +0900, Michael Paquier wrote:\n>> On Tue, May 24, 2022 at 03:03:28PM +0900, Michael Paquier wrote:\n>>> (Adding Andrew Dunstan in CC.)\n>>>\n>>> I have been toying with $subject, trying to improve the ways to test\n>>> pg_upgrade across different major versions as perl makes that easier.\n>>> The buildfarm does three things to allow such tests to work (see\n>>> TestUpgradeXversion.pm):\n> Rebased to cope with the recent changes in this area.\n\n\nI tried in fb16d2c658 to avoid littering the mainline code with\nversion-specific tests, and put that in the methods in the subclasses\nthat override the mainline functions.\n\nI'll try to rework this along those lines.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 12 Jun 2022 10:14:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "\nOn 2022-06-12 Su 10:14, Andrew Dunstan wrote:\n> On 2022-06-09 Th 03:49, Michael Paquier wrote:\n>> On Wed, May 25, 2022 at 10:35:57AM +0900, Michael Paquier wrote:\n>>> On Tue, May 24, 2022 at 03:03:28PM +0900, Michael Paquier wrote:\n>>>> (Adding Andrew Dunstan in CC.)\n>>>>\n>>>> I have been toying with $subject, trying to improve the ways to test\n>>>> pg_upgrade across different major versions as perl makes that easier.\n>>>> The buildfarm does three things to allow such tests to work (see\n>>>> TestUpgradeXversion.pm):\n>> Rebased to cope with the recent changes in this area.\n>\n> I tried in fb16d2c658 to avoid littering the mainline code with\n> version-specific tests, and put that in the methods in the subclasses\n> that override the mainline functions.\n>\n> I'll try to rework this along those lines.\n>\n>\n\nI think I must have been insufficiently caffeinated when I wrote this,\nbecause clearly I was not reading correctly.\n\n\nI have had another look and the patch looks fine, although I haven't\ntested it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 12 Jun 2022 17:58:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Sun, Jun 12, 2022 at 05:58:54PM -0400, Andrew Dunstan wrote:\n> On 2022-06-12 Su 10:14, Andrew Dunstan wrote:\n>> I tried in fb16d2c658 to avoid littering the mainline code with\n>> version-specific tests, and put that in the methods in the subclasses\n>> that override the mainline functions.\n\nExcept that manipulating the diffs of pg_upgrade is not something that\nneeds to be internal to the subclasses where we set up the nodes :)\n\n> I think I must have been insufficiently caffeinated when I wrote this,\n> because clearly I was not reading correctly.\n> \n> I have had another look and the patch looks fine, although I haven't\n> tested it.\n\nI have a question about the tests done in the buildfarm though. Do\nthe dumps from the older versions drop some of the objects that cause\nthe diffs in the dumps? At which extent is that a dump from\ninstallcheck?\n--\nMichael",
"msg_date": "Mon, 13 Jun 2022 16:51:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "\nOn 2022-06-13 Mo 03:51, Michael Paquier wrote:\n> On Sun, Jun 12, 2022 at 05:58:54PM -0400, Andrew Dunstan wrote:\n>> On 2022-06-12 Su 10:14, Andrew Dunstan wrote:\n>>> I tried in fb16d2c658 to avoid littering the mainline code with\n>>> version-specific tests, and put that in the methods in the subclasses\n>>> that override the mainline functions.\n> Except that manipulating the diffs of pg_upgrade is not something that\n> needs to be internal to the subclasses where we set up the nodes :)\n>\n>> I think I must have been insufficiently caffeinated when I wrote this,\n>> because clearly I was not reading correctly.\n>>\n>> I have had another look and the patch looks fine, although I haven't\n>> tested it.\n> I have a question about the tests done in the buildfarm though. Do\n> the dumps from the older versions drop some of the objects that cause\n> the diffs in the dumps? At which extent is that a dump from\n> installcheck?\n\n\nSee lines 324..347 (save stage) and 426..586 (upgrade stage) of\n<https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/TestUpgradeXversion.pm>\n\nWe save the cluster to be upgraded after all the installcheck stages\nhave run, so on crake here's the list of databases upgraded for 
HEAD:\n\n\npostgres\ntemplate1\ntemplate0\ncontrib_regression_pgxml\ncontrib_regression_bool_plperl\ncontrib_regression_hstore_plperl\ncontrib_regression_jsonb_plperl\nregression\ncontrib_regression_redis_fdw\ncontrib_regression_file_textarray_fdw\nisolation_regression\npl_regression_plpgsql\ncontrib_regression_hstore_plpython3\npl_regression_plperl\npl_regression_plpython3\npl_regression_pltcl\ncontrib_regression_adminpack\ncontrib_regression_amcheck\ncontrib_regression_bloom\ncontrib_regression_btree_gin\ncontrib_regression_btree_gist\ncontrib_regression_citext\ncontrib_regression_cube\ncontrib_regression_dblink\ncontrib_regression_dict_int\ncontrib_regression_dict_xsyn\ncontrib_regression_earthdistance\ncontrib_regression_file_fdw\ncontrib_regression_fuzzystrmatch\ncontrib_regression_hstore\ncontrib_regression__int\ncontrib_regression_isn\ncontrib_regression_lo\ncontrib_regression_ltree\ncontrib_regression_pageinspect\ncontrib_regression_passwordcheck\ncontrib_regression_pg_surgery\ncontrib_regression_pg_trgm\ncontrib_regression_pgstattuple\ncontrib_regression_pg_visibility\ncontrib_regression_postgres_fdw\ncontrib_regression_seg\ncontrib_regression_tablefunc\ncontrib_regression_tsm_system_rows\ncontrib_regression_tsm_system_time\ncontrib_regression_unaccent\ncontrib_regression_pgcrypto\ncontrib_regression_uuid-ossp\ncontrib_regression_jsonb_plpython3\ncontrib_regression_ltree_plpython3\nisolation_regression_summarization-and-inprogress-insertion\nisolation_regression_delay_execution\ncontrib_regression_dummy_index_am\ncontrib_regression_dummy_seclabel\ncontrib_regression_plsample\ncontrib_regression_spgist_name_ops\ncontrib_regression_test_bloomfilter\ncontrib_regression_test_extensions\ncontrib_regression_test_ginpostinglist\ncontrib_regression_test_integerset\ncontrib_regression_test_parser\ncontrib_regression_test_pg_dump\ncontrib_regression_test_predtest\ncontrib_regression_test_rbtree\ncontrib_regression_test_regex\ncontrib_regression_test_rls_hooks\ncontri
b_regression_test_shm_mq\ncontrib_regression_rolenames\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 13 Jun 2022 09:59:01 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Thu, Jun 09, 2022 at 04:49:01PM +0900, Michael Paquier wrote:\n> Rebased to cope with the recent changes in this area.\n\nPlease find attached an updated version of this patch, where I have\nextended the support of the upgrade script down to 9.5 as origin\nversion, as ~9.4 now fail because of cluster_name, so Cluster.pm does\nnot support that. FWIW, I have created an extra set of dumps down to\n9.4.\n\nThis adds proper handling for initdb --wal-segsize and\n--allow-group-access when it comes to v11~, as these options are\nmissing in ~10.\n--\nMichael",
"msg_date": "Wed, 6 Jul 2022 15:27:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Wed, Jul 06, 2022 at 03:27:28PM +0900, Michael Paquier wrote:\n> On Thu, Jun 09, 2022 at 04:49:01PM +0900, Michael Paquier wrote:\n> > Rebased to cope with the recent changes in this area.\n> \n> Please find attached an updated version of this patch, where I have\n> extended the support of the upgrade script down to 9.5 as origin\n> version, as ~9.4 now fail because of cluster_name, so Cluster.pm does\n> not support that. FWIW, I have created an extra set of dumps down to\n> 9.4.\n\nThis was using the old psql rather than the new one.\nBefore v10, psql didn't have \\if.\n\nI think Cluster.pm should be updated to support the upgrades that upgrade.sh\nsupported. I guess it ought to be fixed in v15.\n\n-- \nJustin",
"msg_date": "Fri, 29 Jul 2022 16:15:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 04:15:26PM -0500, Justin Pryzby wrote:\n> This was using the old psql rather than the new one.\n> Before v10, psql didn't have \\if.\n\n # Note that upgrade_adapt.sql from the new version is used, to\n\t# cope with an upgrade to this version.\n- $oldnode->command_ok(\n+ $newnode->command_ok(\n [\n- 'psql', '-X',\n+ \"$newbindir/psql\", '-X',\n\nYeah, you are right here that this psql command should use the one\nfrom the new cluster and connect to the old cluster. There is no\npoint in adding $newbindir though as Cluster::_get_env would enforce\nPATH to find the correct binary. I'll look at that in details later.\n--\nMichael",
"msg_date": "Sat, 30 Jul 2022 16:29:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "Hello!\n\nOn 30.07.2022 10:29, Michael Paquier wrote:\n> [\n> - 'psql', '-X',\n> + \"$newbindir/psql\", '-X',\n> \n\nFound that adding $newbindir to psql gives an error when upgrading from \nversions 14 and below to master when the test tries to run \nupgrade_adapt.sql script:\n\nt/002_pg_upgrade.pl .. 1/?\n# Failed test 'ran adapt script'\n# at t/002_pg_upgrade.pl line 141.\n\nin regress_log_002_pg_upgrade:\n# Running: <$newbindir>/psql -X -f \n<$srcdir>/src/bin/pg_upgrade/upgrade_adapt.sql regression\n<$newbindir>/psql: symbol lookup error: <$newbindir>/psql: undefined \nsymbol: PQmblenBounded\n\nTests from 15-stable and from itself work as expected.\n\nQuestion about similar error was here: \nhttps://www.postgresql.org/message-id/flat/BN0PR20MB3912AA107FA6E90FB6B0A034FD9F9%40BN0PR20MB3912.namprd20.prod.outlook.com \n\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Sun, 31 Jul 2022 18:22:33 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": false,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-29 16:15:26 -0500, Justin Pryzby wrote:\n> This was using the old psql rather than the new one.\n> Before v10, psql didn't have \\if.\n> \n> I think Cluster.pm should be updated to support the upgrades that upgrade.sh\n> supported. I guess it ought to be fixed in v15.\n\nThis fails tests widely, and has so for a while:\nhttps://cirrus-ci.com/build/4862820121575424\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3649\n\nNote that it causes timeouts, which end up chewing up a cfbot \"slot\" for an\nhour...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 10:02:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
},
{
"msg_contents": "On Sun, Oct 02, 2022 at 10:02:37AM -0700, Andres Freund wrote:\n> This fails tests widely, and has so for a while:\n> https://cirrus-ci.com/build/4862820121575424\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3649\n> \n> Note that it causes timeouts, which end up chewing up a cfbot \"slot\" for an\n> hour...\n\nSorry for kicking the can down the road for a too-long time. Attached\nis an updated patch that I have strimmed down to the minimum that\naddresses all the issues I want fixed as of the scope of this thread:\n- The filtering of the dumps is reduced to a minimum, removing only\ncomments and empty lines.\n- The configuration of the old and new nodes is tweaked so as it is\nabvle to handle upgrade from nodes older than 10.\n- pg_dumpall is tweaked to use --extra-float-digits=0 for the old\nnodes older than 11 to minimize the amount of diffs generated.\n\nThat's quite nice in itself, as it becomes possible to use much more\ndump patterns loaded as part of the tests. More filtering rules could\nbe used, like the part about procedures and functions that I have sent\nin the previous versions, but what I have here is enough to make the\ntest complete with all the versions supported by Cluster.pm. I am\nthinking to get this stuff applied soon before moving on to the PATH\nissue.\n\nThere is still one issue reported upthread by Justin about the fact\nthat we can use unexpected commands depending on the node involved.\nFor example, something like that would build a PATH so as psql from\nthe old node is used, not from the new node:\n$newnode->command_ok(['psql', '-X', '-f', 'blah.sql']);\n\nSo with the current Cluster.pm, we could fail upgrade_adapt.sql if the\nversion upgraded from does not support psql's \\if, and I don't think\nthat we should use a full path to the binary either. 
I am not\ncompletely done analyzing that and this deserves a separate thread, as\nit impacts all the commands used in TAP tests manipulating nodes from\nmultiple versions.\n--\nMichael",
"msg_date": "Mon, 3 Oct 2022 11:23:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Improve TAP tests of pg_upgrade for cross-version tests"
}
] |
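The dump-filtering approach described in the first message of the thread above (comparing pre- and post-upgrade dumps after stripping only comments and empty lines) can be sketched in isolation. This is an illustrative assumption of how such a filter behaves, not the actual Perl code from the pg_upgrade TAP suite:

```python
def filter_dump(sql_text: str) -> str:
    """Minimal dump filtering: drop SQL comments and empty lines so that
    two dumps can be diffed on substance rather than formatting.
    (Illustrative sketch only; the real suite filters in Perl.)"""
    kept = []
    for line in sql_text.splitlines():
        if not line.strip():
            continue  # skip empty line
        if line.lstrip().startswith("--"):
            continue  # skip SQL comment
        kept.append(line)
    return "\n".join(kept)


# Dumps from different major versions differ only in comments/blank lines here,
# so after filtering they compare equal.
old_dump = "-- Dumped from database version 9.6\n\nCREATE TABLE t (a int);\n"
new_dump = "-- Dumped from database version 16\nCREATE TABLE t (a int);\n"
assert filter_dump(old_dump) == filter_dump(new_dump)
```

The real filtering also has to cope with version-dependent output such as float formatting, which is why the patch pins `--extra-float-digits=0` for old nodes instead of filtering those differences textually.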
[
{
"msg_contents": "> We try fairly hard to ensure that the row count estimate for a given relation\n> does not vary across paths, so I concur with the OP that this is a bug. Having\n> said that, I'm not sure that the consequences are significant. As you say, the\n> estimates seem to get a lot closer as soon as the table has some statistics.\n> (But nonetheless, they are not identical, so it's still a bug.)\n\nYes, the estimates seem to get a lot closer as soon as the table has some statistics.\n\n> I'm not sure that the consequences are significant.\nAt least it doesn't make any difference to me for now.\nI noticed this problem while testing aggregation.\nIt looks a little weird, so I emailed.\n\nThanks every one.\n\n\n\n",
"msg_date": "Tue, 24 May 2022 14:06:28 +0800",
"msg_from": "\"bucoo\" <bucoo@sohu.com>",
"msg_from_op": true,
"msg_subject": "re: partition wise aggregate wrong rows cost"
}
] |
[
{
"msg_contents": "Hi,\n\nSimon reported $subject off-list.\n\nFor triggers on partitioned tables, various enable/disable trigger\nvariants recurse to also process partitions' triggers by way of\nATSimpleRecursion() done in the \"prep\" phase. While that way of\nrecursing is fine for row-level triggers which are cloned to\npartitions, it isn't for statement-level triggers which are not\ncloned, so you get an unexpected error as follows:\n\ncreate table p (a int primary key) partition by list (a);\ncreate table p1 partition of p for values in (1);\ncreate function trigfun () returns trigger language plpgsql as $$\nbegin raise notice 'insert on p'; end; $$;\ncreate trigger trig before insert on p for statement execute function trigfun();\nalter table p disable trigger trig;\nERROR: trigger \"trig\" for table \"p1\" does not exist\n\nThe problem is that ATPrepCmd() is too soon to perform the recursion\nin this case as it's not known at that stage if the trigger being\nenabled/disabled is row-level or statement level, so it's better to\nperform it during ATExecCmd(). Actually, that is how it used to be\ndone before bbb927b4db9b changed things to use ATSimpleRecursion() to\nfix a related problem, which was that the ONLY specification was\nignored by the earlier implementation. The information of whether\nONLY is specified in a given command is only directly available in the\n\"prep\" phase and must be remembered somehow if the recursion must be\nhandled in the \"exec\" phase. The way that's typically done that I see\nin tablecmds.c is to have ATPrepCmd() change the AlterTableCmd.subtype\nto a recursive variant of a given sub-command. For example,\nAT_ValidateConstraint by AT_ValidateConstraintRecurse if ONLY is not\nspecified.\n\nSo, I think we should do something like the attached. A lot of\nboilerplate is needed given that the various enable/disable trigger\nvariants are represented as separate sub-commands (AlterTableCmd\nsubtypes), which can perhaps be avoided by inventing a\nEnableDisableTrigStmt sub-command node that stores (only?) the recurse\nflag.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 May 2022 15:11:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "enable/disable broken for statement triggers on partitioned tables"
},
{
"msg_contents": "On Tue, May 24, 2022 at 3:11 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Simon reported $subject off-list.\n>\n> For triggers on partitioned tables, various enable/disable trigger\n> variants recurse to also process partitions' triggers by way of\n> ATSimpleRecursion() done in the \"prep\" phase. While that way of\n> recursing is fine for row-level triggers which are cloned to\n> partitions, it isn't for statement-level triggers which are not\n> cloned, so you get an unexpected error as follows:\n>\n> create table p (a int primary key) partition by list (a);\n> create table p1 partition of p for values in (1);\n> create function trigfun () returns trigger language plpgsql as $$\n> begin raise notice 'insert on p'; end; $$;\n> create trigger trig before insert on p for statement execute function trigfun();\n> alter table p disable trigger trig;\n> ERROR: trigger \"trig\" for table \"p1\" does not exist\n>\n> The problem is that ATPrepCmd() is too soon to perform the recursion\n> in this case as it's not known at that stage if the trigger being\n> enabled/disabled is row-level or statement level, so it's better to\n> perform it during ATExecCmd(). Actually, that is how it used to be\n> done before bbb927b4db9b changed things to use ATSimpleRecursion() to\n> fix a related problem, which was that the ONLY specification was\n> ignored by the earlier implementation. The information of whether\n> ONLY is specified in a given command is only directly available in the\n> \"prep\" phase and must be remembered somehow if the recursion must be\n> handled in the \"exec\" phase. The way that's typically done that I see\n> in tablecmds.c is to have ATPrepCmd() change the AlterTableCmd.subtype\n> to a recursive variant of a given sub-command. For example,\n> AT_ValidateConstraint by AT_ValidateConstraintRecurse if ONLY is not\n> specified.\n>\n> So, I think we should do something like the attached. A lot of\n> boilerplate is needed given that the various enable/disable trigger\n> variants are represented as separate sub-commands (AlterTableCmd\n> subtypes), which can perhaps be avoided by inventing a\n> EnableDisableTrigStmt sub-command node that stores (only?) the recurse\n> flag.\n\nAdded to the next CF: https://commitfest.postgresql.org/38/3728/\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jun 2022 10:23:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "Hi!\n\nI've looked through the code and everything looks good.\nBut there is one thing I doubt.\nPatch changes result of test:\n----\n\ncreate function trig_nothing() returns trigger language plpgsql\n as $$ begin return null; end $$;\ncreate table parent (a int) partition by list (a);\ncreate table child1 partition of parent for values in (1);\n\ncreate trigger tg after insert on parent\n for each row execute procedure trig_nothing();\nselect tgrelid::regclass, tgname, tgenabled from pg_trigger\n where tgrelid in ('parent'::regclass, 'child1'::regclass)\n order by tgrelid::regclass::text;\nalter table only parent enable always trigger tg; -- no recursion\nselect tgrelid::regclass, tgname, tgenabled from pg_trigger\n where tgrelid in ('parent'::regclass, 'child1'::regclass)\n order by tgrelid::regclass::text;\nalter table parent enable always trigger tg; -- recursion\nselect tgrelid::regclass, tgname, tgenabled from pg_trigger\n where tgrelid in ('parent'::regclass, 'child1'::regclass)\n order by tgrelid::regclass::text;\n\ndrop table parent, child1;\ndrop function trig_nothing();\n\n----\nResults of vanilla + patch:\n----\nCREATE FUNCTION\nCREATE TABLE\nCREATE TABLE\nCREATE TRIGGER\n tgrelid | tgname | tgenabled\n---------+--------+-----------\n child1 | tg | O\n parent | tg | O\n(2 rows)\n\nALTER TABLE\n tgrelid | tgname | tgenabled\n---------+--------+-----------\n child1 | tg | O\n parent | tg | A\n(2 rows)\n\nALTER TABLE\n tgrelid | tgname | tgenabled\n---------+--------+-----------\n child1 | tg | O\n parent | tg | A\n(2 rows)\n\nDROP TABLE\nDROP FUNCTION\n\n----\nResults of vanilla:\n----\nCREATE FUNCTION\nCREATE TABLE\nCREATE TABLE\nCREATE TRIGGER\n tgrelid | tgname | tgenabled\n---------+--------+-----------\n child1 | tg | O\n parent | tg | O\n(2 rows)\n\nALTER TABLE\n tgrelid | tgname | tgenabled\n---------+--------+-----------\n child1 | tg | O\n parent | tg | A\n(2 rows)\n\nALTER TABLE\n tgrelid | tgname | tgenabled\n---------+--------+-----------\n child1 | tg | A\n parent | tg | A\n(2 rows)\n\nDROP TABLE\nDROP FUNCTION\n----\nThe patch doesn't start recursion in case the 'tgenabled' flag of the parent\ntable is not changed.\nProbably vanilla result is more correct.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 21:44:35 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "Hi,\n\nOn Fri, Jul 8, 2022 at 3:44 AM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> I've looked through the code and everything looks good.\n> But there is one thing I doubt.\n> Patch changes result of test:\n> ----\n>\n> create function trig_nothing() returns trigger language plpgsql\n> as $$ begin return null; end $$;\n> create table parent (a int) partition by list (a);\n> create table child1 partition of parent for values in (1);\n>\n> create trigger tg after insert on parent\n> for each row execute procedure trig_nothing();\n> select tgrelid::regclass, tgname, tgenabled from pg_trigger\n> where tgrelid in ('parent'::regclass, 'child1'::regclass)\n> order by tgrelid::regclass::text;\n> alter table only parent enable always trigger tg; -- no recursion\n> select tgrelid::regclass, tgname, tgenabled from pg_trigger\n> where tgrelid in ('parent'::regclass, 'child1'::regclass)\n> order by tgrelid::regclass::text;\n> alter table parent enable always trigger tg; -- recursion\n> select tgrelid::regclass, tgname, tgenabled from pg_trigger\n> where tgrelid in ('parent'::regclass, 'child1'::regclass)\n> order by tgrelid::regclass::text;\n>\n> drop table parent, child1;\n> drop function trig_nothing();\n>\n> ----\n> Results of vanilla + patch:\n> ----\n> CREATE FUNCTION\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TRIGGER\n> tgrelid | tgname | tgenabled\n> ---------+--------+-----------\n> child1 | tg | O\n> parent | tg | O\n> (2 rows)\n>\n> ALTER TABLE\n> tgrelid | tgname | tgenabled\n> ---------+--------+-----------\n> child1 | tg | O\n> parent | tg | A\n> (2 rows)\n>\n> ALTER TABLE\n> tgrelid | tgname | tgenabled\n> ---------+--------+-----------\n> child1 | tg | O\n> parent | tg | A\n> (2 rows)\n>\n> DROP TABLE\n> DROP FUNCTION\n>\n> ----\n> Results of vanilla:\n> ----\n> CREATE FUNCTION\n> CREATE TABLE\n> CREATE TABLE\n> CREATE TRIGGER\n> tgrelid | tgname | tgenabled\n> ---------+--------+-----------\n> child1 | tg | O\n> parent | tg | O\n> (2 rows)\n>\n> ALTER TABLE\n> tgrelid | tgname | tgenabled\n> ---------+--------+-----------\n> child1 | tg | O\n> parent | tg | A\n> (2 rows)\n>\n> ALTER TABLE\n> tgrelid | tgname | tgenabled\n> ---------+--------+-----------\n> child1 | tg | A\n> parent | tg | A\n> (2 rows)\n>\n> DROP TABLE\n> DROP FUNCTION\n> ----\n> The patch doesn't start recursion in case 'tgenabled' flag of parent\n> table is not changes.\n> Probably vanilla result is more correct.\n\nThanks for the review and this test case.\n\nI agree that the patch shouldn't have changed that behavior, so I've\nfixed the patch so that EnableDisableTrigger() recurses even if the\nparent trigger is unchanged.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 13 Jul 2022 12:08:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "> I agree that the patch shouldn't have changed that behavior, so I've\n> fixed the patch so that EnableDisableTrigger() recurses even if the\n> parent trigger is unchanged.\n\nThanks, I think this patch is ready for committer.\n\n-- \nWith best regards,\nDmitry Koval\n\nPostgres Professional: http://postgrespro.com\n\n\n",
"msg_date": "Thu, 14 Jul 2022 14:20:16 +0300",
"msg_from": "Dmitry Koval <d.koval@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 8:20 PM Dmitry Koval <d.koval@postgrespro.ru> wrote:\n> > I agree that the patch shouldn't have changed that behavior, so I've\n> > fixed the patch so that EnableDisableTrigger() recurses even if the\n> > parent trigger is unchanged.\n>\n> Thanks, I think this patch is ready for committer.\n\nGreat, thanks.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Jul 2022 20:51:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On 2022-May-24, Amit Langote wrote:\n\n> So, I think we should do something like the attached. A lot of\n> boilerplate is needed given that the various enable/disable trigger\n> variants are represented as separate sub-commands (AlterTableCmd\n> subtypes), which can perhaps be avoided by inventing a\n> EnableDisableTrigStmt sub-command node that stores (only?) the recurse\n> flag.\n\nYeah, I don't know about adding tons of values to that enum just so that\nwe can use that to hide a boolean inside. Why not add a boolean to the\ncontaining struct? Something like the attached.\n\nWe can later use the same thing to undo what happens in AddColumn,\nDropColumn, etc. It all looks pretty strange and confusing to me.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Investigación es lo que hago cuando no sé lo que estoy haciendo\"\n(Wernher von Braun)",
"msg_date": "Fri, 29 Jul 2022 20:44:52 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Yeah, I don't know about adding tons of values to that enum just so that\n> we can use that to hide a boolean inside. Why not add a boolean to the\n> containing struct? Something like the attached.\n\nI do not think it's a great idea to have ALTER TABLE scribbling on\nthe source parsetree. That tree could be in plancache and subject\nto reuse later.\n\nMind you, I don't say that we're perfectly clean about this elsewhere.\nBut there is a pretty hard expectation that the executor doesn't\nmodify plan trees, and I think the same rule should apply to utility\nstatements.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jul 2022 16:25:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 5:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > Yeah, I don't know about adding tons of values to that enum just so that\n> > we can use that to hide a boolean inside. Why not add a boolean to the\n> > containing struct? Something like the attached.\n>\n> I do not think it's a great idea to have ALTER TABLE scribbling on\n> the source parsetree.\n\nHmm, I think we already do scribble on the source parse tree even\nbefore this patch, for example, as ATPrepCmd() does for DROP\nCONSTRAINT:\n\n if (recurse)\n cmd->subtype = AT_DropConstraintRecurse;\n\n> That tree could be in plancache and subject\n> to reuse later.\n\nI see that 7c337b6b527b added 'readOnlyTree' to\nstandard_ProcessUtility()'s API, I guess, to make any changes that\nAlterTable() and underlings make to the input AlterTableStmt be local\nto a given execution. Though, maybe that's not really a permission to\nadd more code that makes such changes?\n\nIn this case of needing to remember the inh/recurse flag mentioned in\nthe original AT command, we could avoid scribbling over the input\nAlterTableStmt by setting a new flag in AlteredTableInfo, instead of\nAlterTableCmd. AlteredTableInfo has other runtime info about the\nrelation being altered and perhaps it wouldn't be too bad if it also\nstores the inh/recurse flag.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 12:51:05 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On 2022-Aug-01, Amit Langote wrote:\n\n> On Sat, Jul 30, 2022 at 5:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > I do not think it's a great idea to have ALTER TABLE scribbling on\n> > the source parsetree.\n> \n> Hmm, I think we already do scribble on the source parse tree even\n> before this patch, for example, as ATPrepCmd() does for DROP\n> CONSTRAINT:\n> \n> if (recurse)\n> cmd->subtype = AT_DropConstraintRecurse;\n\nNo, actually nothing scribbles on the parsetree, because ATPrepCmd is\nworking on a copy of the node, so there's no harm done to the original.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n",
"msg_date": "Mon, 1 Aug 2022 20:58:08 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> No, actually nothing scribbles on the parsetree, because ATPrepCmd is\n> working on a copy of the node, so there's no harm done to the original.\n\nOh, okay then. Maybe this needs to be noted somewhere?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Aug 2022 15:13:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 3:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Aug-01, Amit Langote wrote:\n>\n> > On Sat, Jul 30, 2022 at 5:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > > I do not think it's a great idea to have ALTER TABLE scribbling on\n> > > the source parsetree.\n> >\n> > Hmm, I think we already do scribble on the source parse tree even\n> > before this patch, for example, as ATPrepCmd() does for DROP\n> > CONSTRAINT:\n> >\n> > if (recurse)\n> > cmd->subtype = AT_DropConstraintRecurse;\n>\n> No, actually nothing scribbles on the parsetree, because ATPrepCmd is\n> working on a copy of the node, so there's no harm done to the original.\n\nOh, I missed this bit in ATPrepCmd():\n\n /*\n * Copy the original subcommand for each table. This avoids conflicts\n * when different child tables need to make different parse\n * transformations (for example, the same column may have different column\n * numbers in different children).\n */\n cmd = copyObject(cmd);\n\nThat's copying for a different purpose than what Tom mentioned, but\ncopying nonetheless. Maybe we should modify this comment a bit to\nclarify about Tom's concern?\n\nRegarding the patch, I agree that storing the recurse flag rather than\noverwriting subtype might be better.\n\n+ bool execTimeRecursion; /* set by ATPrepCmd if ATExecCmd must\n+ * recurse to children */\n\nMight it be better to call this field simply 'recurse'? I think it's\nclear from the context and the comment above the flag is to be used\nduring execution.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Aug 2022 10:43:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On 2022-Aug-02, Amit Langote wrote:\n\n> Regarding the patch, I agree that storing the recurse flag rather than\n> overwriting subtype might be better.\n> \n> + bool execTimeRecursion; /* set by ATPrepCmd if ATExecCmd must\n> + * recurse to children */\n> \n> Might it be better to call this field simply 'recurse'? I think it's\n> clear from the context and the comment above the flag is to be used\n> during execution.\n\nYeah, I guess we can do that and also reword the overall ALTER TABLE\ncomment about recursion. That's in the attached first patch, which is\nintended as backpatchable.\n\nThe second patch is just to show how we'd rewrite AT_AddColumn to no\nlonger use the Recurse separate enum value but instead use the ->recurse\nflag. This is pretty straightforward and it's a clear net reduction of\ncode. We can't backpatch this kind of thing of course, both because of\nthe ABI break (easily fixed) and because potential destabilization\n(scary). We can do similar things for the other AT enum values for\nrecursion. This isn't complete since there are a few other values in\nthat enum that we should process in this way too; I don't intend to\npush it just yet.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/",
"msg_date": "Wed, 3 Aug 2022 20:01:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 3:01 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Aug-02, Amit Langote wrote:\n> > Regarding the patch, I agree that storing the recurse flag rather than\n> > overwriting subtype might be better.\n> >\n> > + bool execTimeRecursion; /* set by ATPrepCmd if ATExecCmd must\n> > + * recurse to children */\n> >\n> > Might it be better to call this field simply 'recurse'? I think it's\n> > clear from the context and the comment above the flag is to be used\n> > during execution.\n>\n> Yeah, I guess we can do that and also reword the overall ALTER TABLE\n> comment about recursion. That's in the attached first patch, which is\n> intended as backpatchable.\n\nThanks. This one looks good to me.\n\n> The second patch is just to show how we'd rewrite AT_AddColumn to no\n> longer use the Recurse separate enum value but instead use the ->recurse\n> flag. This is pretty straightforward and it's a clear net reduction of\n> code. We can't backpatch this kind of thing of course, both because of\n> the ABI break (easily fixed) and because potential destabilization\n> (scary). We can do similar tihngs for the other AT enum values for\n> recursion. This isn't complete since there are a few other values in\n> that enum that we should process in this way too; I don't intend it to\n> push it just yet.\n\nI like the idea of removing all AT_*Recurse subtypes in HEAD.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Aug 2022 09:46:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "Another point for backpatch: EnableDisableTrigger() changes API, which\nis potentially not good. In backbranches I'll keep the function\nunchanged and add another function with the added argument,\nEnableDisableTriggerNew().\n\nSo extensions that want to be compatible with both old and current\nversions (assuming any users of that function exist out of core; I\ndidn't find any) could do something like\n\n#if PG_VERSION_NUM <= 160000\n\tEnableDisableTriggerNew( all args )\n#else\n\tEnableDisableTrigger( all args )\n#endif\n\nand otherwise they're compatible as compiled today.\n\nSince there are no known users of this interface, it doesn't seem to\nwarrant any more convenient treatment.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Those who use electric razors are infidels destined to burn in hell while\nwe drink from rivers of beer, download free vids and mingle with naked\nwell shaved babes.\" (http://slashdot.org/comments.pl?sid=44793&cid=4647152)\n\n\n",
"msg_date": "Thu, 4 Aug 2022 14:56:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 9:56 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Another point for backpatch: EnableDisableTrigger() changes API, which\n> is potentially not good. In backbranches I'll keep the function\n> unchanged and add another function with the added argument,\n> EnableDisableTriggerNew().\n\n+1\n\n> So extensions that want to be compatible with both old and current\n> versions (assuming any users of that function exist out of core; I\n> didn't find any) could do something like\n>\n> #if PG_VERSION_NUM <= 160000\n> EnableDisableTriggerNew( all args )\n> #else\n> EnableDisableTrigger( all args )\n> #endif\n>\n> and otherwise they're compatible as compiled today.\n>\n> Since there are no known users of this interface, it doesn't seem to\n> warrant any more convenient treatment.\n\nMakes sense.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Aug 2022 22:21:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "OK, pushed. This soon caused buildfarm to show a failure due to\nunderspecified ORDER BY, so I just pushed a fix for that too.\n\nThanks Simon for reporting the problem, and thanks Amit for the patch.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Si quieres ser creativo, aprende el arte de perder el tiempo\"\n\n\n",
"msg_date": "Fri, 5 Aug 2022 11:58:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
},
{
"msg_contents": "On Fri, Aug 5, 2022 at 6:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> OK, pushed. This soon caused buildfarm to show a failure due to\n> underspecified ORDER BY, so I just pushed a fix for that too.\n\nThank you.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Aug 2022 19:44:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: enable/disable broken for statement triggers on partitioned\n tables"
}
] |
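The fix discussed in the thread above boils down to making recursion a runtime decision based on the trigger's level, rather than a prep-time one. A toy model of the corrected decision logic (the dict-based data layout and the function name here are hypothetical illustrations, not PostgreSQL internals):

```python
def targets_for_enable_disable(table, trigger_name, only=False):
    """Return the tables whose trigger of the given name should be
    enabled/disabled. Mirrors the fixed behavior: recurse to partitions
    only for row-level triggers (which are cloned to partitions), never
    for statement-level ones, and never when ONLY is specified."""
    trig = table["triggers"][trigger_name]
    targets = [table["name"]]
    if not only and trig["row_level"]:
        targets += [part["name"] for part in table["partitions"]]
    return targets


p1 = {"name": "p1", "partitions": [], "triggers": {}}
parent = {
    "name": "parent",
    "partitions": [p1],
    "triggers": {
        "stmt_trig": {"row_level": False},  # not cloned to p1
        "row_trig": {"row_level": True},    # cloned to p1
    },
}

# Statement-level: no recursion, so no bogus lookup of "stmt_trig" on p1.
assert targets_for_enable_disable(parent, "stmt_trig") == ["parent"]
# Row-level: the clones on partitions must be toggled too.
assert targets_for_enable_disable(parent, "row_trig") == ["parent", "p1"]
# ONLY suppresses recursion in every case.
assert targets_for_enable_disable(parent, "row_trig", only=True) == ["parent"]
```

The point of the sketch is that the `row_level` test can only be made once the trigger's catalog entry has been looked up, which in ALTER TABLE terms means at ATExecCmd() time, not ATPrepCmd() time.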
[
{
"msg_contents": "Hi,\r\n Some databases (like Teradata) support the following syntax:\r\n\r\n select col1, col2*20 as col2_1, col2_1*200 as col3_1 from your_table;\r\n\r\n The last element in the target list can refer the second one using its alias.\r\n\r\n This feature is similar to some programming languages (like Lisp)'s let*.\r\n\r\n For Postgres, it seems the only way is to write a subquery and then a new target list.\r\n\r\n Will Postgres plan to support this feature?\r\n\r\n Thanks a lot!",
"msg_date": "Tue, 24 May 2022 15:12:21 +0000",
"msg_from": "Wood May <asdf_pg@outlook.com>",
"msg_from_op": true,
"msg_subject": "Reference column alias for common expressions"
},
{
"msg_contents": "On Tue, May 24, 2022 at 4:12 PM Wood May <asdf_pg@outlook.com> wrote:\n>\n> Hi,\n> Some databases (like Teradata) support the following syntax:\n>\n> select col1, col2*20 as col2_1, col2_1*200 as col3_1 from your_table;\n>\n> The last element in the target list can refer the second one using its alias.\n>\n> This feature is similar to some programming languages (like Lisp)'s let*.\n\nI think this is incompatible with SQL semantics.\n\n>\n> For Postgres, it seems the only way is to write a subquery and then a new target list.\n\nAnother option is to use LATERAL subqueries, eg\n\nselect t.col1, level1.col2_1, level2.col3_1\nfrom your_table as t\n left join lateral\n (select t.col2*20 as col2_1) as level1 on true\n left join lateral\n (select level1.col2_1*200 as col3_1) as level2 on true ;\n\n>\n> Will Postgres plan to support this feature?\n>\n> Thanks a lot!\n\nRegards\n Pantelis Theodosiou\n\n\n",
"msg_date": "Tue, 24 May 2022 16:47:05 +0100",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reference column alias for common expressions"
},
{
"msg_contents": "Wood May <asdf_pg@outlook.com> writes:\n> Some databases (like Teradata) support the following syntax:\n> select col1, col2*20 as col2_1, col2_1*200 as col3_1 from your_table;\n> The last element in the target list can refer the second one using its alias.\n> This feature is similar to some programming languages (like Lisp)'s let*.\n> For Postgres, it seems the only way is to write a subquery and then a new target list.\n\n> Will Postgres plan to support this feature?\n\nNo. It's flat out contrary to the SQL standard.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 May 2022 12:03:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reference column alias for common expressions"
}
] |
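The let*-style sequential alias scoping requested in the thread above (and rejected as contrary to standard SQL) is easy to illustrate mechanically: evaluate the target list left to right, adding each alias to the environment as you go. A small sketch with hypothetical helper names, mirroring the Teradata query from the thread:

```python
def eval_target_list(row, targets):
    """Evaluate a SELECT-style target list left to right, letting each
    expression see the aliases defined before it (Lisp let* semantics,
    which Teradata permits and standard SQL does not)."""
    env = dict(row)  # start from the table's columns
    out = {}
    for alias, expr in targets:
        val = expr(env)
        env[alias] = val  # later expressions may reference this alias
        out[alias] = val
    return out


row = {"col1": 1, "col2": 3}
result = eval_target_list(row, [
    ("col1",   lambda e: e["col1"]),
    ("col2_1", lambda e: e["col2"] * 20),
    ("col3_1", lambda e: e["col2_1"] * 200),  # refers to the prior alias
])
assert result == {"col1": 1, "col2_1": 60, "col3_1": 12000}
```

The LATERAL rewrite suggested in the thread achieves the same left-to-right visibility within standard SQL: each lateral subquery plays the role of one binding step in this loop.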