[
{
"msg_contents": "Hi,\n\nWhen renaming a column that is part of a primary key,\nthe primary key index's pg_attribute.attname value\nisn't updated accordingly, the old value remains.\n\nThis causes problems when trying to measure if the\neffects of a migration script caused the same end result\nas if installing the same version of the schema from scratch.\n\nThe schema diffing tool reports a diff, and there is one,\nbut not actually diff that causes any problems,\nsince the primary key index's attname doesn't appear\nto be used for anything, since the attnum is probably\nused instead, which is correct.\n\nBelow in an example to illustrate the problem:\n\nCREATE TABLE foo (\n foo_id integer NOT NULL,\n CONSTRAINT foo_pk PRIMARY KEY (foo_id)\n);\n\n\\d foo\n\n Table \"public.foo\"\nColumn | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\nfoo_id | integer | | not null |\nIndexes:\n \"foo_pk\" PRIMARY KEY, btree (foo_id)\n\nSELECT c.relname, a.attnum, c.relkind, a.attname\nFROM pg_class AS c\nJOIN pg_attribute AS a ON a.attrelid = c.oid\nJOIN pg_namespace AS n ON n.oid = c.relnamespace\nWHERE n.nspname = 'public'\nORDER BY 1,2;\n\nrelname | attnum | relkind | attname\n---------+--------+---------+----------\nfoo | -6 | r | tableoid\nfoo | -5 | r | cmax\nfoo | -4 | r | xmax\nfoo | -3 | r | cmin\nfoo | -2 | r | xmin\nfoo | -1 | r | ctid\nfoo | 1 | r | foo_id\nfoo_pk | 1 | i | foo_id\n(8 rows)\n\nALTER TABLE foo RENAME COLUMN foo_id TO bar_id;\nALTER TABLE foo RENAME CONSTRAINT \"foo_pk\" TO \"bar_pk\";\nALTER TABLE foo RENAME TO bar;\n\n\\d bar\n\n Table \"public.bar\"\nColumn | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\nbar_id | integer | | not null |\nIndexes:\n \"bar_pk\" PRIMARY KEY, btree (bar_id)\n\nLooks good! 
But...\n\nSELECT c.relname, a.attnum, c.relkind, a.attname\nFROM pg_class AS c\nJOIN pg_attribute AS a ON a.attrelid = c.oid\nJOIN pg_namespace AS n ON n.oid = c.relnamespace\nWHERE n.nspname = 'public'\nORDER BY 1,2;\n\nrelname | attnum | relkind | attname\n---------+--------+---------+----------\nbar | -6 | r | tableoid\nbar | -5 | r | cmax\nbar | -4 | r | xmax\nbar | -3 | r | cmin\nbar | -2 | r | xmin\nbar | -1 | r | ctid\nbar | 1 | r | bar_id\nbar_pk | 1 | i | foo_id\n(8 rows)\n\nOn the last row, we can see that the\nattname for the PRIMARY KEY index\nstill says \"foo_id\".\n\nWhile I could ignore PRIMARY KEY index\nattname values, it is ugly and I hope there\nis a way to avoid it.\n\n/Joel",
"msg_date": "Mon, 22 Feb 2021 18:21:23 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "pg_attribute.attname inconsistency when renaming primary key columns"
},
{
"msg_contents": "I solved my problem by using attnum::text instead of attname for pg_class.relkind = ‘i’ as a work-around to avoid a diff.\n\nOn Mon, Feb 22, 2021, at 18:21, Joel Jacobson wrote:\n> Hi,\n> \n> When renaming a column that is part of a primary key,\n> the primary key index's pg_attribute.attname value\n> isn't updated accordingly, the old value remains.\n> \n> This causes problems when trying to measure if the\n> effects of a migration script caused the same end result\n> as if installing the same version of the schema from scratch.\n> \n> The schema diffing tool reports a diff, and there is one,\n> but not actually diff that causes any problems,\n> since the primary key index's attname doesn't appear\n> to be used for anything, since the attnum is probably\n> used instead, which is correct.\n> \n> Below in an example to illustrate the problem:\n> \n> CREATE TABLE foo (\n> foo_id integer NOT NULL,\n> CONSTRAINT foo_pk PRIMARY KEY (foo_id)\n> );\n> \n> \\d foo\n> \n> Table \"public.foo\"\n> Column | Type | Collation | Nullable | Default\n> --------+---------+-----------+----------+---------\n> foo_id | integer | | not null |\n> Indexes:\n> \"foo_pk\" PRIMARY KEY, btree (foo_id)\n> \n> SELECT c.relname, a.attnum, c.relkind, a.attname\n> FROM pg_class AS c\n> JOIN pg_attribute AS a ON a.attrelid = c.oid\n> JOIN pg_namespace AS n ON n.oid = c.relnamespace\n> WHERE n.nspname = 'public'\n> ORDER BY 1,2;\n> \n> relname | attnum | relkind | attname\n> ---------+--------+---------+----------\n> foo | -6 | r | tableoid\n> foo | -5 | r | cmax\n> foo | -4 | r | xmax\n> foo | -3 | r | cmin\n> foo | -2 | r | xmin\n> foo | -1 | r | ctid\n> foo | 1 | r | foo_id\n> foo_pk | 1 | i | foo_id\n> (8 rows)\n> \n> ALTER TABLE foo RENAME COLUMN foo_id TO bar_id;\n> ALTER TABLE foo RENAME CONSTRAINT \"foo_pk\" TO \"bar_pk\";\n> ALTER TABLE foo RENAME TO bar;\n> \n> \\d bar\n> \n> Table \"public.bar\"\n> Column | Type | Collation | Nullable | Default\n> 
--------+---------+-----------+----------+---------\n> bar_id | integer | | not null |\n> Indexes:\n> \"bar_pk\" PRIMARY KEY, btree (bar_id)\n> \n> Looks good! But...\n> \n> SELECT c.relname, a.attnum, c.relkind, a.attname\n> FROM pg_class AS c\n> JOIN pg_attribute AS a ON a.attrelid = c.oid\n> JOIN pg_namespace AS n ON n.oid = c.relnamespace\n> WHERE n.nspname = 'public'\n> ORDER BY 1,2;\n> \n> relname | attnum | relkind | attname\n> ---------+--------+---------+----------\n> bar | -6 | r | tableoid\n> bar | -5 | r | cmax\n> bar | -4 | r | xmax\n> bar | -3 | r | cmin\n> bar | -2 | r | xmin\n> bar | -1 | r | ctid\n> bar | 1 | r | bar_id\n> bar_pk | 1 | i | foo_id\n> (8 rows)\n> \n> On the last row, we can see that the\n> attname for the PRIMARY KEY index\n> still says \"foo_id\".\n> \n> While I could ignore PRIMARY KEY index\n> attname values, it is ugly and I hope there\n> is a way to avoid it.\n> \n> /Joel\n\nKind regards,\n\nJoel",
"msg_date": "Mon, 22 Feb 2021 21:42:44 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_attribute.attname inconsistency when renaming primary key columns"
},
{
"msg_contents": "At Mon, 22 Feb 2021 21:42:44 +0100, \"Joel Jacobson\" <joel@compiler.org> wrote in \r\n> I solved my problem by using attnum::text instead of attname for pg_class.relkind = ‘i’ as a work-around to avoid a diff.\r\n\r\nFor your information, note that the attname of an index relation is\r\nnot the name of the target column in the base table. If you created\r\nan index with expression columns, the attributes would be named as\r\n\"expr[x]\". And the names are freely changeable irrelevantly from the\r\ncolumn names of the base table.\r\n\r\nSo to know the referred column name of an index column, do something\r\nlike the following instead.\r\n\r\nSELECT ci.relname as indexname, ai.attname as indcolname,\r\n cr.relname as relname, ar.attname as relattname, ar.attnum\r\nFROM pg_index i\r\nJOIN pg_class cr ON (cr.oid = i.indrelid)\r\nJOIN pg_class ci ON (ci.oid = i.indexrelid)\r\nJOIN pg_attribute ai ON (ai.attrelid = ci.oid)\r\nJOIN pg_attribute ar ON (ar.attrelid = cr.oid AND ar.attnum = ANY(i.indkey))\r\nWHERE ci.relnamespace = 'public'::regnamespace;\r\n\r\nindexname | indcolname | relname | relattname | attnum \r\n-----------+------------+---------+------------+--------\r\n bar_pk | foo_id | bar | bar_id | 1\r\n(1 row)\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 24 Feb 2021 16:55:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_attribute.attname inconsistency when renaming primary key\n columns"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 04:55:11PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 22 Feb 2021 21:42:44 +0100, \"Joel Jacobson\" <joel@compiler.org> wrote in \n> > I solved my problem by using attnum::text instead of attname for pg_class.relkind = ‘i’ as a work-around to avoid a diff.\n> \n> For your information, note that the attname of an index relation is\n> not the name of the target column in the base table. If you created\n> an index with expression columns, the attributes would be named as\n> \"expr[x]\". And the names are freely changeable irrelevantly from the\n> column names of the base table.\n\nYes, the attname associated to the index expressions makes that\nweird, so you should not rely on that. This reminds me of the\ndiscussion that introduced ALTER INDEX SET STATISTICS, which uses\ncolumn numbers:\nhttps://www.postgresql.org/message-id/CAPpHfdsSYo6xpt0F=ngAdqMPFJJhC7zApde9h1qwkdpHpwFisA@mail.gmail.com\n\n> So to know the referred column name of an index column, do something\n> like the following instead.\n\nFWIW, for any schema diff tool, I would recommend to completely ignore\nattname, and instead extract the index attributes using\npg_get_indexdef() that can work on attribute numbers. You can find a\nlot of inspiration from psql -E to see the queries used internally for\nthings like \\d or \\di. 
For example:\n=# create table aa (a int);\n=# create index aai on aa((a + a), (a - a));\n=# SELECT attnum,\n pg_catalog.pg_get_indexdef(a.attrelid, a.attnum, TRUE) AS indexdef\n FROM pg_catalog.pg_attribute a\n WHERE a.attrelid = 'aai' ::regclass AND a.attnum > 0 AND NOT a.attisdropped\n ORDER BY a.attnum;\n attnum | indexdef\n--------+----------\n 1 | (a + a)\n 2 | (a - a)\n(2 rows)\n=# ALTER TABLE aa RENAME COLUMN a to b;\n=# SELECT attnum,\n pg_catalog.pg_get_indexdef(a.attrelid, a.attnum, TRUE) AS indexdef\n FROM pg_catalog.pg_attribute a\n WHERE a.attrelid = 'aai' ::regclass AND a.attnum > 0 AND NOT a.attisdropped\n ORDER BY a.attnum;\n attnum | indexdef\n--------+----------\n 1 | (b + b)\n 2 | (b - b)\n(2 rows)\n--\nMichael",
"msg_date": "Fri, 26 Feb 2021 16:15:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_attribute.attname inconsistency when renaming primary key\n columns"
}
]
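Not part of the thread: a minimal Python sketch of the workaround Joel describes (substituting `attnum::text` for `attname` on index relations, `relkind = 'i'`) as a schema-diff tool might apply it before comparing catalog snapshots. The tuple layout and names here are illustrative assumptions, not from any actual diff tool.

```python
# Sketch of Joel's workaround: when diffing catalog snapshots, replace the
# attname of index-relation attributes (relkind = 'i') with the attnum,
# since an index's pg_attribute.attname can go stale after a column rename
# while attnum remains correct.

def normalize(rows):
    """rows: iterable of (relname, attnum, relkind, attname) tuples."""
    out = []
    for relname, attnum, relkind, attname in rows:
        if relkind == 'i':
            # attnum::text stands in for the possibly-stale index attname
            attname = str(attnum)
        out.append((relname, attnum, relkind, attname))
    return sorted(out)

# Snapshot after the rename migration vs. a from-scratch install:
migrated = [('bar', 1, 'r', 'bar_id'), ('bar_pk', 1, 'i', 'foo_id')]
scratch  = [('bar', 1, 'r', 'bar_id'), ('bar_pk', 1, 'i', 'bar_id')]

assert migrated != scratch                        # raw snapshots differ
assert normalize(migrated) == normalize(scratch)  # harmless diff suppressed
```

As Michael Paquier notes later in the thread, resolving index columns through `pg_get_indexdef()` on attribute numbers avoids relying on the stale name in the first place.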
[
{
"msg_contents": "Hi,\n\nThe 2pc decoding added in\n\ncommit a271a1b50e9bec07e2ef3a05e38e7285113e4ce6\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: 2021-01-04 08:34:50 +0530\n\n Allow decoding at prepare time in ReorderBuffer.\n\nhas a deadlock danger when used in a way that takes advantage of\nseparate decoding of the 2PC PREPARE.\n\n\nI assume the goal of decoding the 2PC PREPARE is so one can wait for the\nPREPARE to have logically replicated, before doing the COMMIT PREPARED.\n\n\nHowever, currently it's pretty easy to get into a state where logical\ndecoding cannot progress until the 2PC transaction has\ncommitted/aborted. Which essentially would lead to undetected deadlocks.\n\nThe problem is that catalog tables accessed during logical decoding need\nto get locked (otherwise e.g. a table rewrite could happen\nconcurrently). But if the prepared transaction itself holds a lock on a\ncatalog table, logical decoding will block on that lock - which won't be\nreleased until replication progresses. A deadlock.\n\nA trivial example:\n\nSELECT pg_create_logical_replication_slot('test', 'test_decoding');\nCREATE TABLE foo(id serial primary key);\nBEGIN;\nLOCK pg_class;\nINSERT INTO foo DEFAULT VALUES;\nPREPARE TRANSACTION 'foo';\n\n-- hangs waiting for pg_class to be unlocked\nSELECT pg_logical_slot_get_changes('test', NULL, NULL, 'two-phase-commit', '1');\n\n\nNow, more realistic versions of this scenario would probably lock a\n'user catalog table' containing replication metadata instead of\npg_class, but ...\n\n\nAt first this seems to be a significant issue. But on the other hand, if\nyou were to shut the cluster down in this situation (or disconnect all\nsessions), you have broken cluster on your hand - without logical\ndecoding being involved. As it turns out, we need to read pg_class to\nlog in... 
And I can't remember this being reported to be a problem?\n\n\nPerhaps all that we need to do is to disallow 2PC prepare if [user]\ncatalog tables have been locked exclusively? Similar to how we're\ndisallowing preparing tables with temp table access.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Feb 2021 14:28:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 3:58 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> At first this seems to be a significant issue. But on the other hand, if\n> you were to shut the cluster down in this situation (or disconnect all\n> sessions), you have broken cluster on your hand - without logical\n> decoding being involved. As it turns out, we need to read pg_class to\n> log in... And I can't remember this being reported to be a problem?\n>\n\nI don't remember seeing such a report but I think that is a reason\nenough (leaving aside logical decoding of 2PC) to either disallow\nlocking catalog tables or at least document it in some way.\n\n>\n> Perhaps all that we need to do is to disallow 2PC prepare if [user]\n> catalog tables have been locked exclusively?\n>\n\nRight, and we have discussed this during development [1][2]. We\nthought either we disallow this operation or will document it. I\nthought of doing this along with a core-implementation of Prepare\nwaiting to get it logically replicated. But at this stage, I think if\nthe user wants he can do a similar thing in his application where\nafter prepare it can wait for the transaction to get logically\nreplicated (if they have their own replication solution based on\nlogical decoding) and then decide whether to rollback or commit. So,\nmaybe we should either disallow this operation or at least document\nit. What do you think?\n\n> Similar to how we're\n> disallowing preparing tables with temp table access.\n>\n\nYeah, we disallow other things like pg_export_snapshot as well in a\nPrepared transaction, so we can probably disallow this operation as\nwell.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JeeXOwD6rYnhSOYk5YN-fUTmxe1GkTpN2-BvgnKN6gZg%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAMGcDxf83P5SGnGH52=_0wRP9pO6uRWCMRwAA0nxKtZvir2_vQ@mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 Feb 2021 08:56:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "Hi,\n\nOn 2021-02-23 08:56:39 +0530, Amit Kapila wrote:\n> On Tue, Feb 23, 2021 at 3:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps all that we need to do is to disallow 2PC prepare if [user]\n> > catalog tables have been locked exclusively?\n\n> Right, and we have discussed this during development [1][2].\n\nI remember bringing it up before as well... Issues like this really need\nto be mentioned as explicit caveats at least somewhere in the code and\ncommit message. You can't expect people to look at 3+ year old threads.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Feb 2021 19:39:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 9:09 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-02-23 08:56:39 +0530, Amit Kapila wrote:\n> > On Tue, Feb 23, 2021 at 3:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Perhaps all that we need to do is to disallow 2PC prepare if [user]\n> > > catalog tables have been locked exclusively?\n>\n> > Right, and we have discussed this during development [1][2].\n>\n> I remember bringing it up before as well... Issues like this really need\n> to be mentioned as explicit caveats at least somewhere in the code and\n> commit message.\n>\n\nOkay, so is it sufficient to add comments in code, or do we want to\nadd something in docs? I am not completely sure if we need to add in\ndocs till we have core-implementation of prepare waiting to get\nlogically replicated.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 Feb 2021 09:24:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "Hi\n\nOn 2021-02-23 09:24:18 +0530, Amit Kapila wrote:\n> Okay, so is it sufficient to add comments in code, or do we want to\n> add something in docs? I am not completely sure if we need to add in\n> docs till we have core-implementation of prepare waiting to get\n> logically replicated.\n\nThere's plenty users of logical decoding that aren't going through the\nnormal replication mechanism - so they can hit this. So I think it needs\nto be documented somewhere.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Feb 2021 20:03:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 9:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-02-23 09:24:18 +0530, Amit Kapila wrote:\n> > Okay, so is it sufficient to add comments in code, or do we want to\n> > add something in docs? I am not completely sure if we need to add in\n> > docs till we have core-implementation of prepare waiting to get\n> > logically replicated.\n>\n> There's plenty users of logical decoding that aren't going through the\n> normal replication mechanism - so they can hit this. So I think it needs\n> to be documented somewhere.\n>\n\nAs per discussion, the attached patch updates both docs and comments\nin the code.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 23 Feb 2021 12:00:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 12:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 23, 2021 at 9:33 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2021-02-23 09:24:18 +0530, Amit Kapila wrote:\n> > > Okay, so is it sufficient to add comments in code, or do we want to\n> > > add something in docs? I am not completely sure if we need to add in\n> > > docs till we have core-implementation of prepare waiting to get\n> > > logically replicated.\n> >\n> > There's plenty users of logical decoding that aren't going through the\n> > normal replication mechanism - so they can hit this. So I think it needs\n> > to be documented somewhere.\n> >\n>\n> As per discussion, the attached patch updates both docs and comments\n> in the code.\n>\n\nI have pushed this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Mar 2021 08:51:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 3:59 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> The 2pc decoding added in\n>\n> commit a271a1b50e9bec07e2ef3a05e38e7285113e4ce6\n> Author: Amit Kapila <akapila@postgresql.org>\n> Date: 2021-01-04 08:34:50 +0530\n>\n> Allow decoding at prepare time in ReorderBuffer.\n>\n> has a deadlock danger when used in a way that takes advantage of\n> separate decoding of the 2PC PREPARE.\n>\n>\n> I assume the goal of decoding the 2PC PREPARE is so one can wait for the\n> PREPARE to have logically replicated, before doing the COMMIT PREPARED.\n>\n>\n> However, currently it's pretty easy to get into a state where logical\n> decoding cannot progress until the 2PC transaction has\n> committed/aborted. Which essentially would lead to undetected deadlocks.\n>\n> The problem is that catalog tables accessed during logical decoding need\n> to get locked (otherwise e.g. a table rewrite could happen\n> concurrently). But if the prepared transaction itself holds a lock on a\n> catalog table, logical decoding will block on that lock - which won't be\n> released until replication progresses. A deadlock.\n>\n> A trivial example:\n>\n> SELECT pg_create_logical_replication_slot('test', 'test_decoding');\n> CREATE TABLE foo(id serial primary key);\n> BEGIN;\n> LOCK pg_class;\n> INSERT INTO foo DEFAULT VALUES;\n> PREPARE TRANSACTION 'foo';\n>\n> -- hangs waiting for pg_class to be unlocked\n> SELECT pg_logical_slot_get_changes('test', NULL, NULL, 'two-phase-commit', '1');\n>\n>\n> Now, more realistic versions of this scenario would probably lock a\n> 'user catalog table' containing replication metadata instead of\n> pg_class, but ...\n>\n>\n> At first this seems to be a significant issue. But on the other hand, if\n> you were to shut the cluster down in this situation (or disconnect all\n> sessions), you have broken cluster on your hand - without logical\n> decoding being involved. As it turns out, we need to read pg_class to\n> log in... 
And I can't remember this being reported to be a problem?\n>\n>\n> Perhaps all that we need to do is to disallow 2PC prepare if [user]\n> catalog tables have been locked exclusively? Similar to how we're\n> disallowing preparing tables with temp table access.\n>\n\nEven I felt we should not allow prepare a transaction that has locked\nsystem tables, as it does not allow creating a new session after\nrestart and also causes the deadlock while logical decoding of\nprepared transaction.\nI have made a patch to make the prepare transaction fail in this\nscenario. Attached the patch for the same.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Mon, 15 Mar 2021 20:05:40 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 1:36 AM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Tue, Feb 23, 2021 at 3:59 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > The 2pc decoding added in\n> >\n> > commit a271a1b50e9bec07e2ef3a05e38e7285113e4ce6\n> > Author: Amit Kapila <akapila@postgresql.org>\n> > Date: 2021-01-04 08:34:50 +0530\n> >\n> > Allow decoding at prepare time in ReorderBuffer.\n> >\n> > has a deadlock danger when used in a way that takes advantage of\n> > separate decoding of the 2PC PREPARE.\n> >\n> >\n> > I assume the goal of decoding the 2PC PREPARE is so one can wait for the\n> > PREPARE to have logically replicated, before doing the COMMIT PREPARED.\n> >\n> >\n> > However, currently it's pretty easy to get into a state where logical\n> > decoding cannot progress until the 2PC transaction has\n> > committed/aborted. Which essentially would lead to undetected deadlocks.\n> >\n> > The problem is that catalog tables accessed during logical decoding need\n> > to get locked (otherwise e.g. a table rewrite could happen\n> > concurrently). But if the prepared transaction itself holds a lock on a\n> > catalog table, logical decoding will block on that lock - which won't be\n> > released until replication progresses. A deadlock.\n> >\n> > A trivial example:\n> >\n> > SELECT pg_create_logical_replication_slot('test', 'test_decoding');\n> > CREATE TABLE foo(id serial primary key);\n> > BEGIN;\n> > LOCK pg_class;\n> > INSERT INTO foo DEFAULT VALUES;\n> > PREPARE TRANSACTION 'foo';\n> >\n> > -- hangs waiting for pg_class to be unlocked\n> > SELECT pg_logical_slot_get_changes('test', NULL, NULL,\n> 'two-phase-commit', '1');\n> >\n> >\n> > Now, more realistic versions of this scenario would probably lock a\n> > 'user catalog table' containing replication metadata instead of\n> > pg_class, but ...\n> >\n> >\n> > At first this seems to be a significant issue. 
But on the other hand, if\n> > you were to shut the cluster down in this situation (or disconnect all\n> > sessions), you have broken cluster on your hand - without logical\n> > decoding being involved. As it turns out, we need to read pg_class to\n> > log in... And I can't remember this being reported to be a problem?\n> >\n> >\n> > Perhaps all that we need to do is to disallow 2PC prepare if [user]\n> > catalog tables have been locked exclusively? Similar to how we're\n> > disallowing preparing tables with temp table access.\n> >\n>\n> Even I felt we should not allow prepare a transaction that has locked\n> system tables, as it does not allow creating a new session after\n> restart and also causes the deadlock while logical decoding of\n> prepared transaction.\n> I have made a patch to make the prepare transaction fail in this\n> scenario. Attached the patch for the same.\n> Thoughts?\n>\n>\nThe patch applies fine on HEAD and \"make check\" passes fine. No major\ncomments on the patch, just a minor comment:\n\nIf you could change the error from, \" cannot PREPARE a transaction that has\na lock on user catalog/system table(s)\"\nto \"cannot PREPARE a transaction that has an *exclusive lock* on user\ncatalog/system table(s)\" that would be a more\naccurate instruction to the user.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Wed, 31 Mar 2021 20:04:56 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 2:35 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> The patch applies fine on HEAD and \"make check\" passes fine. No major comments on the patch, just a minor comment:\n>\n> If you could change the error from, \" cannot PREPARE a transaction that has a lock on user catalog/system table(s)\"\n> to \"cannot PREPARE a transaction that has an exclusive lock on user catalog/system table(s)\" that would be a more\n> accurate instruction to the user.\n>\n\nThanks for reviewing the patch.\nPlease find the updated patch which includes the fix for the same.\n\nRegards,\nVignesh",
"msg_date": "Wed, 31 Mar 2021 17:47:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 5:47 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Mar 31, 2021 at 2:35 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > The patch applies fine on HEAD and \"make check\" passes fine. No major comments on the patch, just a minor comment:\n> >\n> > If you could change the error from, \" cannot PREPARE a transaction that has a lock on user catalog/system table(s)\"\n> > to \"cannot PREPARE a transaction that has an exclusive lock on user catalog/system table(s)\" that would be a more\n> > accurate instruction to the user.\n> >\n>\n> Thanks for reviewing the patch.\n> Please find the updated patch which includes the fix for the same.\n\nThis similar problem exists in case of synchronous replication setup\nhaving synchronous_standby_names referring to the subscriber, when we\ndo the steps \"begin;lock pg_class; insert into test1 values(10);\ncommit\". In this case while decoding of commit, the commit will wait\nwhile trying to acquire a lock on pg_class relation, stack trace for\nthe same is given below:\n#4 0x0000556936cd5d37 in ProcSleep (locallock=0x556937de8728,\nlockMethodTable=0x5569371c2620 <default_lockmethod>) at proc.c:1361\n#5 0x0000556936cc294a in WaitOnLock (locallock=0x556937de8728,\nowner=0x556937e3cd90) at lock.c:1858\n#6 0x0000556936cc1231 in LockAcquireExtended (locktag=0x7ffcbb23cff0,\nlockmode=1, sessionLock=false, dontWait=false, reportMemoryError=true,\nlocallockp=0x7ffcbb23cfe8)\nat lock.c:1100\n#7 0x0000556936cbdbce in LockRelationOid (relid=1259, lockmode=1) at lmgr.c:117\n#8 0x00005569367afb12 in relation_open (relationId=1259, lockmode=1)\nat relation.c:56\n#9 0x00005569368888a2 in table_open (relationId=1259, lockmode=1) at table.c:43\n#10 0x0000556936e90a91 in RelidByRelfilenode (reltablespace=0,\nrelfilenode=16385) at relfilenodemap.c:192\n#11 0x0000556936c40361 in ReorderBufferProcessTXN (rb=0x556937e8e760,\ntxn=0x556937eb8778, commit_lsn=23752880, 
snapshot_now=0x556937ea0a90,\ncommand_id=0, streaming=false)\nat reorderbuffer.c:2122\n#12 0x0000556936c411b7 in ReorderBufferReplay (txn=0x556937eb8778,\nrb=0x556937e8e760, xid=590, commit_lsn=23752880, end_lsn=23752928,\ncommit_time=672204445820756,\norigin_id=0, origin_lsn=0) at reorderbuffer.c:2589\n#13 0x0000556936c41239 in ReorderBufferCommit (rb=0x556937e8e760,\nxid=590, commit_lsn=23752880, end_lsn=23752928,\ncommit_time=672204445820756, origin_id=0, origin_lsn=0)\nat reorderbuffer.c:2613\n#14 0x0000556936c2f4d9 in DecodeCommit (ctx=0x556937e8c750,\nbuf=0x7ffcbb23d610, parsed=0x7ffcbb23d4b0, xid=590, two_phase=false)\nat decode.c:744\n\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 20 Apr 2021 09:57:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 9:57 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> This similar problem exists in case of synchronous replication setup\n> having synchronous_standby_names referring to the subscriber, when we\n> do the steps \"begin;lock pg_class; insert into test1 values(10);\n> commit\". In this case while decoding of commit, the commit will wait\n> while trying to acquire a lock on pg_class relation,\n>\n\nSo, this appears to be an existing caveat of synchronous replication.\nIf that is the case, I am not sure if it is a good idea to just block\nsuch ops for the prepared transaction. Also, what about other\noperations which acquire an exclusive lock on [user]_catalog_tables\nlike:\ncluster pg_trigger using pg_class_oid_index, similarly cluster on any\nuser_catalog_table, then the other problematic operation could be\ntruncate of user_catalog_table as is discussed in another thread [1].\nI think all such operations can block even with synchronous\nreplication. I am not sure if we can create examples for all cases\nbecause for ex. we don't have use of user_catalog_tables in-core but\nmaybe for others, we can try to create examples and see what happens?\n\nIf all such operations can block for synchronous replication and\nprepared transactions replication then we might want to document them\nas caveats at page:\nhttps://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\nand then also give the reference for these caveats at prepared\ntransactions page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-commits.html\n\nWhat do you think?\n\nAs this appears to be an existing caveat of logical replication, I\nhave added Petr and Peter E to this email.\n\n[1] - https://www.postgresql.org/message-\nid/OSBPR01MB4888314C70DA6B112E32DD6AED2B9%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 May 2021 10:03:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Feb 22, 2021 at 02:28:47PM -0800, Andres Freund wrote:\n> Perhaps all that we need to do is to disallow 2PC prepare if [user]\n> catalog tables have been locked exclusively? Similar to how we're\n> disallowing preparing tables with temp table access.\n\nAt least for anything involving critical relations that get loaded at\nstartup? It seems to me that if we can avoid users getting\nthemselves completely locked out even if they run the operation on an object\nthey own, that would be better than requiring tweaks involving\npg_resetwal or the like to rip the 2PC transaction out of existence.\n--\nMichael",
"msg_date": "Mon, 24 May 2021 14:03:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, May 24, 2021 at 10:03:01AM +0530, Amit Kapila wrote:\n> So, this appears to be an existing caveat of synchronous replication.\n> If that is the case, I am not sure if it is a good idea to just block\n> such ops for the prepared transaction. Also, what about other\n> operations which acquire an exclusive lock on [user]_catalog_tables\n> like:\n> cluster pg_trigger using pg_class_oid_index, similarly cluster on any\n> user_catalog_table, then the other problematic operation could\n> truncate of user_catalog_table as is discussed in another thread [1].\n> I think all such operations can block even with synchronous\n> replication. I am not sure if we can create examples for all cases\n> because for ex. we don't have use of user_catalog_tables in-core but\n> maybe for others, we can try to create examples and see what happens?\n> \n> If all such operations can block for synchronous replication and\n> prepared transactions replication then we might want to document them\n> as caveats at page:\n> https://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\n> and then also give the reference for these caveats at prepared\n> transactions page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-commits.html\n> \n> What do you think?\n\nIt seems to me that the 2PC issues on catalog tables and the issues\nrelated to logical replication in synchronous mode are two distinct\nthings that need to be fixed separately.\n\nThe issue with LOCK taken on a catalog while a PREPARE TRANSACTION\nholds locks around is bad enough in itself as it could lock down a\nuser from a cluster as long as the PREPARE TRANSACTION is not removed\nfrom WAL (say the relation is critical for the connection startup).\nThis could be really disruptive for the user even if he tried to take\na lock on an object he owns, and the way to recover is not easy here:\nit involves either an old backup or, worse,\npg_resetwal.\n\nThe second issue with logical replication is still disruptive, but it\nlooks to me more like a don't-do-it issue, and documenting the caveats\nsounds fine enough.\n\nLooking at the patch from upthread..\n\n+ /*\n+ * Make note that we've locked a system table or an user catalog\n+ * table. This flag will be checked later during prepare transaction\n+ * to fail the prepare transaction.\n+ */\n+ if (lockstmt->mode >= ExclusiveLock &&\n+ (IsCatalogRelationOid(reloid) ||\n+ RelationIsUsedAsCatalogTable(rel)))\n+ MyXactFlags |= XACT_FLAGS_ACQUIREDEXCLUSIVELOCK_SYSREL;\nI think that I'd just use IsCatalogRelationOid() here, and I'd be more\nsevere and restrict all attempts for any lock levels. It seems to me\nthat this needs to happen within RangeVarCallbackForLockTable().\nI would also rename the flag as just XACT_FLAGS_LOCKEDCATALOG.\n\n+ errmsg(\"cannot PREPARE a transaction that has an exclusive lock on user catalog/system table(s)\")));\nWhat about \"cannot PREPARE a transaction that has locked a catalog\nrelation\"?\n--\nMichael",
"msg_date": "Tue, 25 May 2021 16:10:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Monday, May 24, 2021 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Apr 20, 2021 at 9:57 AM vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > This similar problem exists in case of synchronous replication setup\r\n> > having synchronous_standby_names referring to the subscriber, when we\r\n> > do the steps \"begin;lock pg_class; insert into test1 values(10);\r\n> > commit\". In this case while decoding of commit, the commit will wait\r\n> > while trying to acquire a lock on pg_class relation,\r\n> >\r\n> \r\n> So, this appears to be an existing caveat of synchronous replication.\r\n> If that is the case, I am not sure if it is a good idea to just block such ops for the\r\n> prepared transaction. Also, what about other operations which acquire an\r\n> exclusive lock on [user]_catalog_tables\r\n> like:\r\n> cluster pg_trigger using pg_class_oid_index, similarly cluster on any\r\n> user_catalog_table, then the other problematic operation could truncate of\r\n> user_catalog_table as is discussed in another thread [1].\r\n> I think all such operations can block even with synchronous replication. I am not\r\n> sure if we can create examples for all cases because for ex. 
we don't have use\r\n> of user_catalog_tables in-core but maybe for others, we can try to create\r\n> examples and see what happens?\r\n> \r\n> If all such operations can block for synchronous replication and prepared\r\n> transactions replication then we might want to document them as caveats at\r\n> page:\r\n> https://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\r\n> and then also give the reference for these caveats at prepared transactions\r\n> page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-com\r\n> mits.html\r\n> \r\n> What do you think?\r\nI've checked the behavior of CLUSTER command\r\nin synchronous mode, one of the examples above, as well.\r\n\r\nIIUC, you meant pg_class, and\r\nthe deadlock happens when I run cluster commands on pg_class using its index in synchronous mode.\r\nThe command I used is \"BEGIN; CLUSTER pg_class USING pg_class_oid_index; END;\".\r\nThis deadlock comes from 2 processes, the backend waiting for synchronization of the standby\r\nand the walsender process which wants to take a lock on pg_class.\r\nTherefore, I think we need to do something, at least a documentation fix,\r\nas you mentioned.\r\n\r\nFrom the perspective of restarting,\r\nwhen I restart the locked pub with fast and immediate mode,\r\nin both cases, the pub succeeded in restart and accepted\r\ninteractive psql connections. So, after the restart,\r\nwe are released from the lock.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 25 May 2021 08:13:27 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:43 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, May 24, 2021 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Apr 20, 2021 at 9:57 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > This similar problem exists in case of synchronous replication setup\n> > > having synchronous_standby_names referring to the subscriber, when we\n> > > do the steps \"begin;lock pg_class; insert into test1 values(10);\n> > > commit\". In this case while decoding of commit, the commit will wait\n> > > while trying to acquire a lock on pg_class relation,\n> > >\n> >\n> > So, this appears to be an existing caveat of synchronous replication.\n> > If that is the case, I am not sure if it is a good idea to just block such ops for the\n> > prepared transaction. Also, what about other operations which acquire an\n> > exclusive lock on [user]_catalog_tables\n> > like:\n> > cluster pg_trigger using pg_class_oid_index, similarly cluster on any\n> > user_catalog_table, then the other problematic operation could truncate of\n> > user_catalog_table as is discussed in another thread [1].\n> > I think all such operations can block even with synchronous replication. I am not\n> > sure if we can create examples for all cases because for ex. 
we don't have use\n> > of user_catalog_tables in-core but maybe for others, we can try to create\n> > examples and see what happens?\n> >\n> > If all such operations can block for synchronous replication and prepared\n> > transactions replication then we might want to document them as caveats at\n> > page:\n> > https://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\n> > and then also give the reference for these caveats at prepared transactions\n> > page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-com\n> > mits.html\n> >\n> > What do you think?\n> I've checked the behavior of CLUSTER command\n> in synchronous mode, one of the examples above, as well.\n>\n> IIUC, you meant pg_class, and\n> the deadlock happens when I run cluster commands on pg_class using its index in synchronous mode.\n> The command I used is \"BEGIN; CLUSTER pg_class USING pg_class_oid_index; END;\".\n> This deadlock comes from 2 processes, the backend to wait synchronization of the standby\n> and the walsender process which wants to take a lock on pg_class.\n>\n\nHave you tried to prepare this transaction? That won't be allowed. I\nwanted to see if we can generate some scenarios where it is blocked\nfor prepared xacts decoding and for synchronous replication.\n\n> Therefore, I think we need to do something, at least documentation fix,\n> as you mentioned.\n>\n\nYes, I think that is true.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 May 2021 08:16:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 24, 2021 at 10:03:01AM +0530, Amit Kapila wrote:\n> > So, this appears to be an existing caveat of synchronous replication.\n> > If that is the case, I am not sure if it is a good idea to just block\n> > such ops for the prepared transaction. Also, what about other\n> > operations which acquire an exclusive lock on [user]_catalog_tables\n> > like:\n> > cluster pg_trigger using pg_class_oid_index, similarly cluster on any\n> > user_catalog_table, then the other problematic operation could\n> > truncate of user_catalog_table as is discussed in another thread [1].\n> > I think all such operations can block even with synchronous\n> > replication. I am not sure if we can create examples for all cases\n> > because for ex. we don't have use of user_catalog_tables in-core but\n> > maybe for others, we can try to create examples and see what happens?\n> >\n> > If all such operations can block for synchronous replication and\n> > prepared transactions replication then we might want to document them\n> > as caveats at page:\n> > https://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\n> > and then also give the reference for these caveats at prepared\n> > transactions page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-commits.html\n> >\n> > What do you think?\n>\n> It seems to me that the 2PC issues on catalog tables and the issues\n> related to logical replication in synchonous mode are two distinct\n> things that need to be fixed separately.\n>\n\nFair enough. But the way we were looking at them is that they will also\nblock (lead to deadlock) for logical replication of prepared\ntransactions and also for logical replication in synchronous mode without\nprepared transactions. Now, if we want to deal with the 2PC issues\nseparately that should be fine as well. However, for that we need to\nsee which all operations we want to block on [user]_catalog_tables.\nThe first one is lock command, then there are other operations like\nCluster which take exclusive lock on system catalog tables and we\nallow them to be part of prepared transactions (example Cluster\npg_trigger using pg_trigger_oid_index;), another kind of operation is\nTruncate on user_catalog_tables. Now, some of these might not allow\nconnecting after restart so we might need to think whether we want to\nprohibit all such operations or only some of them.\n\n>\n> The second issue with logical replication is still disruptive, but it\n> looks to me more like a don't-do-it issue, and documenting the caveats\n> sounds fine enough.\n>\n\nRight, that is what I also think.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 May 2021 08:34:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Fair enough. But the way we were looking at them as they will also\n> block (lead to deadlock) for logical replication of prepared\n> transactions and also logical replication in synchonous mode without\n> prepared transactions. Now, if we want to deal with the 2PC issues\n> separately that should be fine as well. However, for that we need to\n> see which all operations we want to block on [user]_catalog_tables.\n> The first one is lock command, then there are other operations like\n> Cluster which take exclusive lock on system catalog tables and we\n> allow them to be part of prepared transactions (example Cluster\n> pg_trigger using pg_trigger_oid_index;), another kind of operation is\n> Truncate on user_catalog_tables. Now, some of these might not allow\n> connecting after restart so we might need to think whether we want to\n> prohibit all such operations or only some of them.\n\n2PC has pretty much always worked like that, and AFAIR there have been\na grand total of zero complaints about it. It seems quite likely to\nme that you're proposing to expend a lot of effort on restrictions\nthat will hurt more people than they help. Maybe that score is only\nabout one to zero, but still you should account for the possibility\nthat you're breaking legitimate use-cases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 23:27:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On 26 May 2021, at 05:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Tue, May 25, 2021 at 12:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Mon, May 24, 2021 at 10:03:01AM +0530, Amit Kapila wrote:\n>>> So, this appears to be an existing caveat of synchronous replication.\n>>> If that is the case, I am not sure if it is a good idea to just block\n>>> such ops for the prepared transaction. Also, what about other\n>>> operations which acquire an exclusive lock on [user]_catalog_tables\n>>> like:\n>>> cluster pg_trigger using pg_class_oid_index, similarly cluster on any\n>>> user_catalog_table, then the other problematic operation could\n>>> truncate of user_catalog_table as is discussed in another thread [1].\n>>> I think all such operations can block even with synchronous\n>>> replication. I am not sure if we can create examples for all cases\n>>> because for ex. we don't have use of user_catalog_tables in-core but\n>>> maybe for others, we can try to create examples and see what happens?\n>>> \n>>> If all such operations can block for synchronous replication and\n>>> prepared transactions replication then we might want to document them\n>>> as caveats at page:\n>>> https://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\n>>> and then also give the reference for these caveats at prepared\n>>> transactions page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-commits.html\n>>> \n>>> What do you think?\n>> \n>> It seems to me that the 2PC issues on catalog tables and the issues\n>> related to logical replication in synchonous mode are two distinct\n>> things that need to be fixed separately.\n>> \n> \n> Fair enough. But the way we were looking at them as they will also\n> block (lead to deadlock) for logical replication of prepared\n> transactions and also logical replication in synchonous mode without\n> prepared transactions. 
Now, if we want to deal with the 2PC issues\n> separately that should be fine as well. However, for that we need to\n> see which all operations we want to block on [user]_catalog_tables.\n> The first one is lock command, then there are other operations like\n> Cluster which take exclusive lock on system catalog tables and we\n> allow them to be part of prepared transactions (example Cluster\n> pg_trigger using pg_trigger_oid_index;), another kind of operation is\n> Truncate on user_catalog_tables. Now, some of these might not allow\n> connecting after restart so we might need to think whether we want to\n> prohibit all such operations or only some of them.\n> \n\n\nIIRC this was discussed the first time 2PC decoding was proposed and everybody seemed fine with the limitation so I'd vote for just documenting it, same way as the sync rep issue.\n\nIf you'd prefer fixing it by blocking something, wouldn't it make more sense to simply not allow PREPARE if one of these operations was executed, similarly to what we do with temp table access?\n\n--\nPetr\n\n",
"msg_date": "Wed, 26 May 2021 10:23:16 +0200",
"msg_from": "Petr Jelinek <petr.jelinek@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 24, 2021 at 10:03:01AM +0530, Amit Kapila wrote:\n> > So, this appears to be an existing caveat of synchronous replication.\n> > If that is the case, I am not sure if it is a good idea to just block\n> > such ops for the prepared transaction. Also, what about other\n> > operations which acquire an exclusive lock on [user]_catalog_tables\n> > like:\n> > cluster pg_trigger using pg_class_oid_index, similarly cluster on any\n> > user_catalog_table, then the other problematic operation could\n> > truncate of user_catalog_table as is discussed in another thread [1].\n> > I think all such operations can block even with synchronous\n> > replication. I am not sure if we can create examples for all cases\n> > because for ex. we don't have use of user_catalog_tables in-core but\n> > maybe for others, we can try to create examples and see what happens?\n> >\n> > If all such operations can block for synchronous replication and\n> > prepared transactions replication then we might want to document them\n> > as caveats at page:\n> > https://www.postgresql.org/docs/devel/logicaldecoding-synchronous.html\n> > and then also give the reference for these caveats at prepared\n> > transactions page:https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-commits.html\n> >\n> > What do you think?\n>\n> It seems to me that the 2PC issues on catalog tables and the issues\n> related to logical replication in synchonous mode are two distinct\n> things that need to be fixed separately.\n>\n> The issue with LOCK taken on a catalog while a PREPARE TRANSACTION\n> holds locks around is bad enough in itself as it could lock down a\n> user from a cluster as long as the PREPARE TRANSACTION is not removed\n> from WAL (say the relation is critical for the connection startup).\n> This could be really disruptive for the user even if he tried to take\n> a lock on an object he owns, and 
the way to recover is not easy here,\n> and the way to recover involves either an old backup or worse,\n> pg_resetwal.\n>\n> The second issue with logical replication is still disruptive, but it\n> looks to me more like a don't-do-it issue, and documenting the caveats\n> sounds fine enough.\n>\n> Looking at the patch from upthread..\n>\n> + /*\n> + * Make note that we've locked a system table or an user catalog\n> + * table. This flag will be checked later during prepare transaction\n> + * to fail the prepare transaction.\n> + */\n> + if (lockstmt->mode >= ExclusiveLock &&\n> + (IsCatalogRelationOid(reloid) ||\n> + RelationIsUsedAsCatalogTable(rel)))\n> + MyXactFlags |= XACT_FLAGS_ACQUIREDEXCLUSIVELOCK_SYSREL;\n> I think that I'd just use IsCatalogRelationOid() here, and I'd be more\n> severe and restrict all attempts for any lock levels. It seems to me\n> that this needs to happen within RangeVarCallbackForLockTable().\n> I would also rename the flag as just XACT_FLAGS_LOCKEDCATALOG.\n>\n> + errmsg(\"cannot PREPARE a transaction that has an exclusive lock on user catalog/system table(s)\")));\n> What about \"cannot PREPARE a transaction that has locked a catalog\n> relation\"?\n\nAt this point it is not clear if we are planning to fix this issue by\nthrowing an error or document it. I will fix these comments once we\ncome to consensus.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 26 May 2021 14:36:52 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wed, May 26, 2021 at 1:53 PM Petr Jelinek\n<petr.jelinek@enterprisedb.com> wrote:\n>\n> On 26 May 2021, at 05:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 12:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> It seems to me that the 2PC issues on catalog tables and the issues\n> >> related to logical replication in synchonous mode are two distinct\n> >> things that need to be fixed separately.\n> >>\n> >\n> > Fair enough. But the way we were looking at them as they will also\n> > block (lead to deadlock) for logical replication of prepared\n> > transactions and also logical replication in synchonous mode without\n> > prepared transactions. Now, if we want to deal with the 2PC issues\n> > separately that should be fine as well. However, for that we need to\n> > see which all operations we want to block on [user]_catalog_tables.\n> > The first one is lock command, then there are other operations like\n> > Cluster which take exclusive lock on system catalog tables and we\n> > allow them to be part of prepared transactions (example Cluster\n> > pg_trigger using pg_trigger_oid_index;), another kind of operation is\n> > Truncate on user_catalog_tables. Now, some of these might not allow\n> > connecting after restart so we might need to think whether we want to\n> > prohibit all such operations or only some of them.\n> >\n>\n>\n> IIRC this was discussed the first time 2PC decoding was proposed and everybody seemed fine with the limitation so I'd vote for just documenting it, same way as the sync rep issue.\n>\n\n+1.\n\n> If you'd prefer fixing it by blocking something, wouldn't it make more sense to simply not allow PREPARE if one of these operations was executed, similarly to what we do with temp table access?\n>\n\nThe point was that even if somehow we block for prepare, there doesn't\nseem to be a simple way for synchronous logical replication which can\nalso have similar problems. 
So, I would prefer to document it and we\ncan even think to backpatch the sync rep related documentation.\nMichael seems to be a bit interested in dealing with some of the 2PC\nissues due to reasons different than logical replication which I am\nnot completely sure is a good idea and Tom also feels that is not a\ngood idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 May 2021 15:00:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "Hi.\n\nThe attached PG docs patch about catalog deadlocks was previously\nimplemented in another thread [1], but it seems more relevant to this\none.\n\nPSA.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1K%2BSeT31pxwL5iTvXq%3DJhZpG_cUJLFhiz-eD%2BJr-WAPeg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Tue, 1 Jun 2021 17:32:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tuesday, June 1, 2021 4:33 PM Peter Smith <smithpb2250@gmail.com>\r\n> To: Andres Freund <andres@anarazel.de>\r\n> Cc: PostgreSQL-development <pgsql-hackers@postgresql.org>; Amit Kapila\r\n> <amit.kapila16@gmail.com>; Markus Wanner\r\n> <markus.wanner@enterprisedb.com>\r\n> Subject: Re: locking [user] catalog tables vs 2pc vs logical rep\r\n> \r\n> Hi.\r\n> \r\n> The attached PG docs patch about catalog deadlocks was previously\r\n> implemented in another thread [1], but it seems more relevant to this one.\r\n> \r\n> PSA.\r\nThank you for providing the patch.\r\nI have updated your patch to include some other viewpoints.\r\n\r\nFor example, CLUSTER command scenario\r\nthat also causes hang of PREPARE in synchronous mode.\r\nWe get this deadlock, using the 2PC patch-set.\r\n\r\nFYI, the scenario is\r\n(1) create a table with a trigger\r\n(2) create pub and sub in synchronous mode\r\n(3) then, execute CLUSTER pg_trigger USING pg_trigger_oid_index,\r\n and do some operations (e.g. INSERT) on the trigger-attached table and PREPARE\r\n\r\nThe mechanism of this is\r\nwalsender tries to take a lock on pg_trigger if the table has a trigger,\r\nbut, pg_trigger is already locked by the CLUSTER command, which leads to the deadlock.\r\nThen, this scenario requires some operations on the table which has trigger\r\nbecause it invokes the walsender to take the lock described above.\r\n\r\nI also included the description about TRUNCATE on user_catalog_table\r\nin the patch. Please have a look at this patch.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 3 Jun 2021 03:48:20 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, June 1, 2021 4:33 PM Peter Smith <smithpb2250@gmail.com>\n> > To: Andres Freund <andres@anarazel.de>\n> > Cc: PostgreSQL-development <pgsql-hackers@postgresql.org>; Amit Kapila\n> > <amit.kapila16@gmail.com>; Markus Wanner\n> > <markus.wanner@enterprisedb.com>\n> > Subject: Re: locking [user] catalog tables vs 2pc vs logical rep\n> >\n> > Hi.\n> >\n> > The attached PG docs patch about catalog deadlocks was previously\n> > implemented in another thread [1], but it seems more relevant to this one.\n> >\n> > PSA.\n> Thank you for providing the patch.\n> I have updated your patch to include some other viewpoints.\n>\n> For example, CLUSTER command scenario\n> that also causes hang of PREPARE in synchronous mode.\n> We get this deadlock, using the 2PC patch-set.\n>\n> FYI, the scenario is\n> (1) create a table with a trigger\n> (2) create pub and sub in synchronous mode\n> (3) then, execute CLUSTER pg_trigger USING pg_trigger_oid_index,\n> and do some operations (e.g. INSERT) on the trigger-attached table and PREPARE\n>\n> The mechanism of this is\n> walsender tries to take a lock on pg_trigger if the table has a trigger,\n> but, pg_trigger is already locked by the CLUSTER command, which leads to the deadlock.\n> Then, this scenario requires some operations on the table which has trigger\n> because it invokes the walsender to take the lock described above.\n>\n> I also included the description about TRUNCATE on user_catalog_table\n> in the patch. Please have a look at this patch.\n\n1) I was not able to generate html docs with the attached patch:\nlogicaldecoding.sgml:1128: element sect1: validity error : Element\nsect1 content does not follow the DTD, expecting (sect1info? , (title\n, subtitle? , titleabbrev?) , (toc | lot | index | glossary |\nbibliography)* , (((calloutlist | glosslist | bibliolist |\nitemizedlist | orderedlist | segmentedlist | simplelist | variablelist\n| caution | important | note | tip | warning | literallayout |\nprogramlisting | programlistingco | screen | screenco | screenshot |\nsynopsis | cmdsynopsis | funcsynopsis | classsynopsis | fieldsynopsis\n| constructorsynopsis | destructorsynopsis | methodsynopsis |\nformalpara | para | simpara | address | blockquote | graphic |\ngraphicco | mediaobject | mediaobjectco | informalequation |\ninformalexample | informalfigure | informaltable | equation | example\n| figure | table | msgset | procedure | sidebar | qandaset | task |\nanchor | bridgehead | remark | highlights | abstract | authorblurb |\nepigraph | indexterm | beginpage)+ , (refentry* | sect2* |\nsimplesect*)) | refentry+ | sect2+ | simplesect+) , (toc | lot | index\n| glossary | bibliography)*), got (title sect2 sect2 note )\n </sect1>\n\n2) You could change hang to deadlock:\n+ logical decoding of published table within the same\ntransaction leads to a hang.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 3 Jun 2021 09:38:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, June 1, 2021 4:33 PM Peter Smith <smithpb2250@gmail.com>\n> > To: Andres Freund <andres@anarazel.de>\n> > Cc: PostgreSQL-development <pgsql-hackers@postgresql.org>; Amit Kapila\n> > <amit.kapila16@gmail.com>; Markus Wanner\n> > <markus.wanner@enterprisedb.com>\n> > Subject: Re: locking [user] catalog tables vs 2pc vs logical rep\n> >\n> > Hi.\n> >\n> > The attached PG docs patch about catalog deadlocks was previously\n> > implemented in another thread [1], but it seems more relevant to this one.\n> >\n> > PSA.\n> Thank you for providing the patch.\n> I have updated your patch to include some other viewpoints.\n>\n\nI suggest creating a synchronous replication part of the patch for\nback-branches as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 15:37:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thursday, June 3, 2021 7:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Jun 3, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Thank you for providing the patch.\r\n> > I have updated your patch to include some other viewpoints.\r\n> >\r\n> \r\n> I suggest creating a synchronous replication part of the patch for\r\n> back-branches as well.\r\nYou are right. Please have a look at the attached patch-set.\r\nNeedless to say, the patch for HEAD has descriptions that depend on\r\nthe 2pc patch-set.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Sun, 6 Jun 2021 22:48:13 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thursday, June 3, 2021 1:09 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Thu, Jun 3, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Thank you for providing the patch.\r\n> > I have updated your patch to include some other viewpoints.\r\n> >\r\n> > I also included the description about TRUNCATE on user_catalog_table\r\n> > in the patch. Please have a look at this patch.\r\n> \r\n> 1) I was not able to generate html docs with the attached patch:\r\n> logicaldecoding.sgml:1128: element sect1: validity error : Element...\r\nThank you for your review.\r\nI fixed the patch to make it pass to generate html output.\r\nKindly have a look at the v03.\r\n\r\n> 2) You could change hang to deadlock:\r\n> + logical decoding of published table within the same\r\n> transaction leads to a hang.\r\nYes. I included your point. Thanks.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Sun, 6 Jun 2021 22:55:45 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 4:18 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, June 3, 2021 7:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Jun 3, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > > Thank you for providing the patch.\n> > > I have updated your patch to include some other viewpoints.\n> > >\n> >\n> > I suggest creating a synchronous replication part of the patch for\n> > back-branches as well.\n> You are right. Please have a look at the attached patch-set.\n> Needless to say, the patch for HEAD has descriptions that depend on\n> the 2pc patch-set.\n>\n\n1)\n+ <para>\n+ The use of any command to take an ACCESS EXCLUSIVE lock on\n[user] catalog tables\n+ can cause the deadlock of logical decoding in synchronous\nmode. This means that\n+ at the transaction commit or prepared transaction, the command\nhangs or the server\n+ becomes to block any new connections. To avoid this, users\nmust refrain from such\n+ operations.\n+ </para>\n\nCan we change it something like:\nLogical decoding of transactions in synchronous replication mode\nrequires access to system tables and/or user catalog tables, hence\nuser should refrain from taking exclusive lock on system tables and/or\nuser catalog tables or refrain from executing commands like cluster\ncommand which will take exclusive lock on system tables internally. If\nnot the transaction will get blocked at commit/prepare time because of\na deadlock.\n\n2) I was not sure if we should include the examples below or the above\npara is enough, we can hear from others and retain it if required:\n+ <para>\n+ When <command>COMMIT</command> is conducted for a transaction that has\n+ issued explicit <command>LOCK</command> on\n<structname>pg_class</structname>\n+ with logical decoding, the deadlock occurs. Also, committing\none that runs\n+ <command>CLUSTER</command> <structname>pg_class</structname> is another\n+ deadlock scenario.\n+ </para>\n+\n+ <para>\n+ Similarly, executing <command>PREPARE TRANSACTION</command>\n+ after <command>LOCK</command> command on\n<structname>pg_class</structname> and\n+ logical decoding of published table within the same\ntransaction leads to the deadlock.\n+ Clustering <structname>pg_trigger</structname> by\n<command>CLUSTER</command> command\n+ brings about the deadlock as well, when published table has a\ntrigger and any operations\n+ that will be decoded are conducted on the same table.\n+ </para>\n+\n+ <para>\n+ The deadlock can happen when users execute <command>TRUNCATE</command>\n+ on user_catalog_table under the condition that output plugin\nhave reference to it.\n </para>\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 7 Jun 2021 09:26:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 9:26 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Jun 7, 2021 at 4:18 AM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Thursday, June 3, 2021 7:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Thu, Jun 3, 2021 at 9:18 AM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > Thank you for providing the patch.\n> > > > I have updated your patch to include some other viewpoints.\n> > > >\n> > >\n> > > I suggest creating a synchronous replication part of the patch for\n> > > back-branches as well.\n> > You are right. Please have a look at the attached patch-set.\n> > Needless to say, the patch for HEAD has descriptions that depend on\n> > the 2pc patch-set.\n> >\n>\n> 1)\n> + <para>\n> + The use of any command to take an ACCESS EXCLUSIVE lock on\n> [user] catalog tables\n> + can cause the deadlock of logical decoding in synchronous\n> mode. This means that\n> + at the transaction commit or prepared transaction, the command\n> hangs or the server\n> + becomes to block any new connections. To avoid this, users\n> must refrain from such\n> + operations.\n> + </para>\n>\n> Can we change it something like:\n> Logical decoding of transactions in synchronous replication mode\n> requires access to system tables and/or user catalog tables, hence\n> user should refrain from taking exclusive lock on system tables and/or\n> user catalog tables or refrain from executing commands like cluster\n> command which will take exclusive lock on system tables internally. If\n> not the transaction will get blocked at commit/prepare time because of\n> a deadlock.\n>\n\nI think this is better than what the patch has proposed. I suggest\nminor modifications to your proposed changes. Let's write the above\npara as: \"In synchronous replication setup, a deadlock can happen, if\nthe transaction has locked [user] catalog tables exclusively. This is\nbecause logical decoding of transactions can lock catalog tables to\naccess them. To avoid this users must refrain from taking an exclusive\nlock on [user] catalog tables. This can happen in the following ways:\"\n\n+ <para>\n+ When <command>COMMIT</command> is conducted for a transaction that has\n+ issued explicit <command>LOCK</command> on\n<structname>pg_class</structname>\n+ with logical decoding, the deadlock occurs. Also, committing\none that runs\n+ <command>CLUSTER</command> <structname>pg_class</structname> is another\n+ deadlock scenario.\n </para>\n\nThe above points need to be mentioned in the <itemizedlist> fashion.\nSee <sect2 id=\"continuous-archiving-caveats\"> for an example. I think\nthe above point can be split as follows.\n\n<listitem>\n <para>\nUser has issued an explicit <command>LOCK</command> on\n<structname>pg_class</structname> (or any other catalog table) in a\ntransaction. Now when we try to decode such a transaction, a deadlock\ncan happen.\n</para>\n</listitem>\n\nSimilarly, write separate points for Cluster and Truncate.\n\nOne more comment is that for HEAD, first just create a patch with\nsynchronous replication-related doc changes and then write a separate\npatch for prepared transactions.\n\n> 2) I was not sure if we should include the examples below or the above\n> para is enough,\n>\n\nIt is better to give examples but let's use the format as I suggested above.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 10:44:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> One more comment is that for HEAD, first just create a patch with\n> synchronous replication-related doc changes and then write a separate\n> patch for prepared transactions.\n>\n\nI noticed that docs for \"Synchronous replication support for Logical\nDecoding\" has been introduced by commit\n49c0864d7ef5227faa24f903902db90e5c9d5d69 which goes till 9.6. So, I\nthink you need to create a patch for 9.6 as well unless one of the\nexisting patches already applies in 9.6.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 14:52:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Monday, June 7, 2021 6:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Jun 7, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > One more comment is that for HEAD, first just create a patch with\r\n> > synchronous replication-related doc changes and then write a separate\r\n> > patch for prepared transactions.\r\n> >\r\n> \r\n> I noticed that docs for \"Synchronous replication support for Logical Decoding\"\r\n> has been introduced by commit\r\n> 49c0864d7ef5227faa24f903902db90e5c9d5d69 which goes till 9.6. So, I think\r\n> you need to create a patch for 9.6 as well unless one of the existing patches\r\n> already applies in 9.6.\r\nOK. I could apply PG10's patch to 9.6.\r\nAlso, I've made a separate patch for 2PC description.\r\n\r\nOn the other hand, I need to mention that\r\nthere are some gaps to cause failures to apply patches\r\nbetween supported versions.\r\n(e.g. applying a patch for HEAD to stable PG13 fails)\r\n\r\nTo address the gaps between the versions,\r\nI needed to conduct some manual fixes.\r\nTherefore, please note that the content of patch\r\nbetween PG12 and PG13 are almost same\r\nlike PG9.6 and PG10, but, I prepared\r\nindependent patches for HEAD and PG11,\r\nin order to make those applied in a comfortable manner.\r\n\r\n\r\nKindly have a look at the updated patch-set.\r\nThey all passed the test of make html.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 8 Jun 2021 08:03:59 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 1:34 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, June 7, 2021 6:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Jun 7, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > >\n> > > One more comment is that for HEAD, first just create a patch with\n> > > synchronous replication-related doc changes and then write a separate\n> > > patch for prepared transactions.\n> > >\n> >\n> > I noticed that docs for \"Synchronous replication support for Logical Decoding\"\n> > has been introduced by commit\n> > 49c0864d7ef5227faa24f903902db90e5c9d5d69 which goes till 9.6. So, I think\n> > you need to create a patch for 9.6 as well unless one of the existing patches\n> > already applies in 9.6.\n> OK. I could apply PG10's patch to 9.6.\n> Also, I've made a separate patch for 2PC description.\n>\n> On the other hand, I need to mention that\n> there are some gaps to cause failures to apply patches\n> between supported versions.\n> (e.g. applying a patch for HEAD to stable PG13 fails)\n>\n> To address the gaps between the versions,\n> I needed to conduct some manual fixes.\n> Therefore, please note that the content of patch\n> between PG12 and PG13 are almost same\n> like PG9.6 and PG10, but, I prepared\n> independent patches for HEAD and PG11,\n> in order to make those applied in a comfortable manner.\n>\n>\n> Kindly have a look at the updated patch-set.\n> They all passed the test of make html.\n\nThanks for the updated patch.\n\nI have few comments:\n1) Should we list the actual system tables like pg_class,pg_trigger,\netc instead of any other catalog table?\nUser has issued an explicit LOCK on pg_class (or any other catalog table)\n2) Here This means deadlock, after this we mention deadlock again for\neach of the examples, we can remove it if redundant.\nThis can happen in the following ways:\n3) Should [user] catalog tables be catalog tables or user catalog tables\n[user] catalog tables\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 8 Jun 2021 18:23:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the updated patch.\n>\n> I have few comments:\n> 1) Should we list the actual system tables like pg_class,pg_trigger,\n> etc instead of any other catalog table?\n> User has issued an explicit LOCK on pg_class (or any other catalog table)\n>\n\nI think the way it is mentioned is okay. We don't need to specify\nother catalog tables.\n\n> 2) Here This means deadlock, after this we mention deadlock again for\n> each of the examples, we can remove it if redundant.\n> This can happen in the following ways:\n> 3) Should [user] catalog tables be catalog tables or user catalog tables\n> [user] catalog tables\n>\n\nThe third point is not clear. Can you please elaborate by quoting the\nexact change from the patch?\n\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 08:36:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wednesday, June 9, 2021 12:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> >\r\n> > Thanks for the updated patch.\r\n> >\r\n> > I have few comments:\r\n> > 1) Should we list the actual system tables like pg_class,pg_trigger,\r\n> > etc instead of any other catalog table?\r\n> > User has issued an explicit LOCK on pg_class (or any other catalog\r\n> > table)\r\n> >\r\n> \r\n> I think the way it is mentioned is okay. We don't need to specify other catalog\r\n> tables.\r\nOkay.\r\n\r\n\r\n> > 2) Here This means deadlock, after this we mention deadlock again for\r\n> > each of the examples, we can remove it if redundant.\r\n> > This can happen in the following ways:\r\nI think this sentence works to notify that commands described below\r\nare major scenarios naturally, to the readers. Then, I don't want to remove it.\r\n\r\nIf you somehow feel that the descriptions are redundant,\r\nhow about unifying all listitems as nouns. like below ?\r\n\r\n* An explicit <command>LOCK</command> on <structname>pg_class</structname> (or any other catalog table) in a transaction\r\n* Reordering <structname>pg_class</structname> by <command>CLUSTER</command> command in a transaction\r\n* Executing <command>TRUNCATE</command> on user_catalog_table\r\n\r\n\r\n> > 3) Should [user] catalog tables be catalog tables or user catalog\r\n> > tables [user] catalog tables\r\n> >\r\n> \r\n> The third point is not clear. Can you please elaborate by quoting the exact\r\n> change from the patch?\r\nIIUC, he means to replace all descriptions \"[user] catalog tables\"\r\nwith \"catalog tables or user catalog tables\" in the patch,\r\nbecause seemingly we don't use square brackets to describe optional clause in\r\nnormal descriptions(like outside of synopsis and I don't find any example for this).\r\nBut, even if so, I would like to keep the current square brackets description,\r\nwhich makes sentence short and simple.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 9 Jun 2021 06:33:14 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 12:03 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, June 9, 2021 12:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> > > 3) Should [user] catalog tables be catalog tables or user catalog\n> > > tables [user] catalog tables\n> > >\n> >\n> > The third point is not clear. Can you please elaborate by quoting the exact\n> > change from the patch?\n> IIUC, he means to replace all descriptions \"[user] catalog tables\"\n> with \"catalog tables or user catalog tables\" in the patch,\n> because seemingly we don't use square brackets to describe optional clause in\n> normal descriptions(like outside of synopsis and I don't find any example for this).\n> But, even if so, I would like to keep the current square brackets description,\n> which makes sentence short and simple.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 14:18:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tuesday, June 8, 2021 5:04 PM I wrote:\r\n> On Monday, June 7, 2021 6:22 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Mon, Jun 7, 2021 at 10:44 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > >\r\n> > > One more comment is that for HEAD, first just create a patch with\r\n> > > synchronous replication-related doc changes and then write a\r\n> > > separate patch for prepared transactions.\r\n> >\r\n> > I noticed that docs for \"Synchronous replication support for Logical Decoding\"\r\n> > has been introduced by commit\r\n> > 49c0864d7ef5227faa24f903902db90e5c9d5d69 which goes till 9.6. So, I\r\n> > think you need to create a patch for 9.6 as well unless one of the\r\n> > existing patches already applies in 9.6.\r\n> OK. I could apply PG10's patch to 9.6.\r\n> Also, I've made a separate patch for 2PC description.\r\n> \r\n> On the other hand, I need to mention that there are some gaps to cause failures\r\n> to apply patches between supported versions.\r\n> (e.g. applying a patch for HEAD to stable PG13 fails)\r\nI scrutinized this POV and checked the gaps between supported versions.\r\nIn terms of the section where the patch want to fix,\r\nthere are only 2 major gaps between PG10 and PG11 - [1]\r\nand between PG13 and HEAD - [2]. In other words,\r\nthe patch-set should be 4 types.\r\n\r\n* patch for HEAD\r\n* additional patch for HEAD based on 2PC patch-set\r\n* patch for from PG11 to PG13\r\n* patch for PG9.6 and PG10\r\n\r\n> To address the gaps between the versions, I needed to conduct some manual\r\n> fixes.\r\n> Therefore, please note that the content of patch between PG12 and PG13 are\r\n> almost same like PG9.6 and PG10, but, I prepared independent patches for\r\n> HEAD and PG11, in order to make those applied in a comfortable manner.\r\nTherefore, I was wrong.\r\nI didn't need the specific independent patch for PG11.\r\nI'll fix the patch-set accordingly in the next version.\r\n\r\n\r\n[1] how we finish xref tag is different between PG10 and PG11\r\n\r\n--- logicaldecoding.sgml_PG11 2021-06-09 04:38:18.214163527 +0000\r\n+++ logicaldecoding.sgml_PG10 2021-06-09 04:37:50.533163527 +0000\r\n@@ -730,9 +698,9 @@\r\n replication</link> solutions with the same user interface as synchronous\r\n replication for <link linkend=\"streaming-replication\">streaming\r\n replication</link>. To do this, the streaming replication interface\r\n- (see <xref linkend=\"logicaldecoding-walsender\"/>) must be used to stream out\r\n+ (see <xref linkend=\"logicaldecoding-walsender\">) must be used to stream out\r\n data. Clients have to send <literal>Standby status update (F)</literal>\r\n- (see <xref linkend=\"protocol-replication\"/>) messages, just like streaming\r\n+ (see <xref linkend=\"protocol-replication\">) messages, just like streaming\r\n replication clients do.\r\n </para>\r\n\r\n[2] in HEAD, we have a new sect1 after \"Synchronous Replication Support for Logical Decoding\"\r\n\r\n--- logicaldecoding.sgml_PG13 2021-06-09 05:10:34.927163527 +0000\r\n+++ logicaldecoding.sgml_HEAD 2021-06-09 05:08:12.810163527 +0000\r\n@@ -747,4 +1089,177 @@\r\n </para>\r\n </note>\r\n </sect1>\r\n+\r\n+ <sect1 id=\"logicaldecoding-streaming\">\r\n+ <title>Streaming of Large Transactions for Logical Decoding</title>\r\n+\r\n+ <para>\r\n+ The basic output plugin callbacks (e.g., <function>begin_cb</function>,\r\n...\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 9 Jun 2021 10:21:14 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 12:03 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, June 9, 2021 12:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the updated patch.\n> > >\n> > > I have few comments:\n> > > 1) Should we list the actual system tables like pg_class,pg_trigger,\n> > > etc instead of any other catalog table?\n> > > User has issued an explicit LOCK on pg_class (or any other catalog\n> > > table)\n> > >\n> >\n> > I think the way it is mentioned is okay. We don't need to specify other catalog\n> > tables.\n> Okay.\n>\n>\n> > > 2) Here This means deadlock, after this we mention deadlock again for\n> > > each of the examples, we can remove it if redundant.\n> > > This can happen in the following ways:\n> I think this sentence works to notify that commands described below\n> are major scenarios naturally, to the readers. Then, I don't want to remove it.\n>\n> If you somehow feel that the descriptions are redundant,\n> how about unifying all listitems as nouns. like below ?\n>\n> * An explicit <command>LOCK</command> on <structname>pg_class</structname> (or any other catalog table) in a transaction\n> * Reordering <structname>pg_class</structname> by <command>CLUSTER</command> command in a transaction\n> * Executing <command>TRUNCATE</command> on user_catalog_table\n>\n\nThis looks good to me. Keep the 2PC documentation patch also on the same lines.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 10 Jun 2021 09:43:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thursday, June 10, 2021 1:14 PM vignesh C <vignesh21@gmail.com>\r\n> On Wed, Jun 9, 2021 at 12:03 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, June 9, 2021 12:06 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > > >\r\n> > > > Thanks for the updated patch.\r\n> > > >\r\n> > > > I have few comments:\r\n> > > > 1) Should we list the actual system tables like\r\n> > > > pg_class,pg_trigger, etc instead of any other catalog table?\r\n> > > > User has issued an explicit LOCK on pg_class (or any other catalog\r\n> > > > table)\r\n> > > >\r\n> > >\r\n> > > I think the way it is mentioned is okay. We don't need to specify\r\n> > > other catalog tables.\r\n> > Okay.\r\n> >\r\n> >\r\n> > > > 2) Here This means deadlock, after this we mention deadlock again\r\n> > > > for each of the examples, we can remove it if redundant.\r\n> > > > This can happen in the following ways:\r\n> > I think this sentence works to notify that commands described below\r\n> > are major scenarios naturally, to the readers. Then, I don't want to remove\r\n> it.\r\n> >\r\n> > If you somehow feel that the descriptions are redundant, how about\r\n> > unifying all listitems as nouns. like below ?\r\n> >\r\n> > * An explicit <command>LOCK</command> on\r\n> > <structname>pg_class</structname> (or any other catalog table) in a\r\n> > transaction\r\n> > * Reordering <structname>pg_class</structname> by\r\n> > <command>CLUSTER</command> command in a transaction\r\n> > * Executing <command>TRUNCATE</command> on user_catalog_table\r\n> >\r\n> \r\n> This looks good to me. Keep the 2PC documentation patch also on the same\r\n> lines.\r\nYeah, of course. Thanks for your confirmation.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 10 Jun 2021 04:29:44 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thursday, June 10, 2021 1:30 PM I wrote:\r\n> On Thursday, June 10, 2021 1:14 PM vignesh C <vignesh21@gmail.com>\r\n> > On Wed, Jun 9, 2021 at 12:03 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Wednesday, June 9, 2021 12:06 PM Amit Kapila\r\n> > <amit.kapila16@gmail.com> wrote:\r\n> > > > On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> > > > >\r\n> > > > > Thanks for the updated patch.\r\n> > > > >\r\n> > > > > I have few comments:\r\n> > > > > 1) Should we list the actual system tables like\r\n> > > > > pg_class,pg_trigger, etc instead of any other catalog table?\r\n> > > > > User has issued an explicit LOCK on pg_class (or any other\r\n> > > > > catalog\r\n> > > > > table)\r\n> > > > >\r\n> > > >\r\n> > > > I think the way it is mentioned is okay. We don't need to specify\r\n> > > > other catalog tables.\r\n> > > Okay.\r\n> > >\r\n> > >\r\n> > > > > 2) Here This means deadlock, after this we mention deadlock\r\n> > > > > again for each of the examples, we can remove it if redundant.\r\n> > > > > This can happen in the following ways:\r\n> > > I think this sentence works to notify that commands described below\r\n> > > are major scenarios naturally, to the readers. Then, I don't want to\r\n> > > remove\r\n> > it.\r\n> > >\r\n> > > If you somehow feel that the descriptions are redundant, how about\r\n> > > unifying all listitems as nouns. like below ?\r\n> > >\r\n> > > * An explicit <command>LOCK</command> on\r\n> > > <structname>pg_class</structname> (or any other catalog table) in a\r\n> > > transaction\r\n> > > * Reordering <structname>pg_class</structname> by\r\n> > > <command>CLUSTER</command> command in a transaction\r\n> > > * Executing <command>TRUNCATE</command> on\r\n> user_catalog_table\r\n> > >\r\n> >\r\n> > This looks good to me. Keep the 2PC documentation patch also on the\r\n> > same lines.\r\n> Yeah, of course. Thanks for your confirmation.\r\nHi, attached the updated patch-set.\r\n\r\nI've conducted some updates.\r\n\r\n(1) Added commit messages for all patches\r\n(2) Sorted out the descriptions of listitem to make them look uniform\r\n(3) Removed PG11-specific patch and unified the patch from PG11 to PG13,\r\nwhich will keep the documents cleanliness for future back-patching, if any.\r\n\r\n(4) Removed unnecessary space after 'id'\r\n\r\nIn v04, there was an unneeded space like below. Fixed.\r\nIn the same logicaldecoding.sgml doc, there is no space after 'id' for sec2.\r\n\r\n+ <sect2 id =\"logicaldecoding-synchronous-caveats\">\r\n+ <title>Caveats</title>\r\n\r\n(5) Fixed the reference accurately by replacing link tag with xref tag.\r\n\r\nIn v04, I let the reference be inaccurate, because the linkend points to the caveats\r\nbut the link word was \"Synchronous Replication Support for Logical Decoding\".\r\n\r\n+ [user] catalog tables exclusively. To avoid this users must refrain from\r\n+ having locks on catalog tables (e.g. explicit <command>LOCK</command> command)\r\n+ in such transactions.\r\n+ (See <link linkend=\"logicaldecoding-synchronous-caveats\">Synchronous\r\n+ Replication Support for Logical Decoding</link> for the details.)\r\n\r\nSo, in v05, I've fixed this to point out the caveats directly.\r\n\r\n+ [user] catalog tables exclusively. To avoid this users must refrain from\r\n+ having locks on catalog tables (e.g. explicit <command>LOCK</command> command)\r\n+ in such transactions.\r\n+ (See <xref linkend=\"logicaldecoding-synchronous-caveats\"/> for the details.)\r\n\r\nKindly have a look at the patch-set.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 11 Jun 2021 01:27:40 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 6:57 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, June 10, 2021 1:30 PM I wrote:\n> > On Thursday, June 10, 2021 1:14 PM vignesh C <vignesh21@gmail.com>\n> > > On Wed, Jun 9, 2021 at 12:03 PM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > > On Wednesday, June 9, 2021 12:06 PM Amit Kapila\n> > > <amit.kapila16@gmail.com> wrote:\n> > > > > On Tue, Jun 8, 2021 at 6:24 PM vignesh C <vignesh21@gmail.com>\n> > wrote:\n> > > > > >\n> > > > > > Thanks for the updated patch.\n> > > > > >\n> > > > > > I have few comments:\n> > > > > > 1) Should we list the actual system tables like\n> > > > > > pg_class,pg_trigger, etc instead of any other catalog table?\n> > > > > > User has issued an explicit LOCK on pg_class (or any other\n> > > > > > catalog\n> > > > > > table)\n> > > > > >\n> > > > >\n> > > > > I think the way it is mentioned is okay. We don't need to specify\n> > > > > other catalog tables.\n> > > > Okay.\n> > > >\n> > > >\n> > > > > > 2) Here This means deadlock, after this we mention deadlock\n> > > > > > again for each of the examples, we can remove it if redundant.\n> > > > > > This can happen in the following ways:\n> > > > I think this sentence works to notify that commands described below\n> > > > are major scenarios naturally, to the readers. Then, I don't want to\n> > > > remove\n> > > it.\n> > > >\n> > > > If you somehow feel that the descriptions are redundant, how about\n> > > > unifying all listitems as nouns. like below ?\n> > > >\n> > > > * An explicit <command>LOCK</command> on\n> > > > <structname>pg_class</structname> (or any other catalog table) in a\n> > > > transaction\n> > > > * Reordering <structname>pg_class</structname> by\n> > > > <command>CLUSTER</command> command in a transaction\n> > > > * Executing <command>TRUNCATE</command> on\n> > user_catalog_table\n> > > >\n> > >\n> > > This looks good to me. Keep the 2PC documentation patch also on the\n> > > same lines.\n> > Yeah, of course. Thanks for your confirmation.\n> Hi, attached the updated patch-set.\n>\n> I've conducted some updates.\n>\n> (1) Added commit messages for all patches\n> (2) Sorted out the descriptions of listitem to make them look uniform\n> (3) Removed PG11-specific patch and unified the patch from PG11 to PG13,\n> which will keep the documents cleanliness for future back-patching, if any.\n>\n> (4) Removed unnecessary space after 'id'\n>\n> In v04, there was an unneeded space like below. Fixed.\n> In the same logicaldecoding.sgml doc, there is no space after 'id' for sec2.\n>\n> + <sect2 id =\"logicaldecoding-synchronous-caveats\">\n> + <title>Caveats</title>\n>\n> (5) Fixed the reference accurately by replacing link tag with xref tag.\n>\n> In v04, I let the reference be inaccurate, because the linkend points to the caveats\n> but the link word was \"Synchronous Replication Support for Logical Decoding\".\n>\n> + [user] catalog tables exclusively. To avoid this users must refrain from\n> + having locks on catalog tables (e.g. explicit <command>LOCK</command> command)\n> + in such transactions.\n> + (See <link linkend=\"logicaldecoding-synchronous-caveats\">Synchronous\n> + Replication Support for Logical Decoding</link> for the details.)\n>\n> So, in v05, I've fixed this to point out the caveats directly.\n>\n> + [user] catalog tables exclusively. To avoid this users must refrain from\n> + having locks on catalog tables (e.g. explicit <command>LOCK</command> command)\n> + in such transactions.\n> + (See <xref linkend=\"logicaldecoding-synchronous-caveats\"/> for the details.)\n>\n> Kindly have a look at the patch-set.\n>\n\nThanks for the updated patch:\nFew comments:\n1) We have used Reordering and Clustering for the same command, we\ncould rephrase similarly in both places.\n+ <para>\n+ Reordering <structname>pg_class</structname> by\n<command>CLUSTER</command>\n+ command in a transaction.\n+ </para>\n\n+ <para>\n+ Clustering <structname>pg_trigger</structname> and decoding\n<command>PREPARE\n+ TRANSACTION</command>, if any published table have a trigger and any\n+ operations that will be decoded are conducted.\n+ </para>\n+ </listitem>\n\n2) Here user_catalog_table should be user catalog table\n+ <para>\n+ Executing <command>TRUNCATE</command> on user_catalog_table\nin a transaction.\n+ </para>\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 11 Jun 2021 10:42:36 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Friday, June 11, 2021 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> Thanks for the updated patch:\r\n> Few comments:\r\n> 1) We have used Reordering and Clustering for the same command, we could\r\n> rephrase similarly in both places.\r\n> + <para>\r\n> + Reordering <structname>pg_class</structname> by\r\n> <command>CLUSTER</command>\r\n> + command in a transaction.\r\n> + </para>\r\n> \r\n> + <para>\r\n> + Clustering <structname>pg_trigger</structname> and decoding\r\n> <command>PREPARE\r\n> + TRANSACTION</command>, if any published table have a trigger\r\n> and any\r\n> + operations that will be decoded are conducted.\r\n> + </para>\r\n> + </listitem>\r\n> \r\n> 2) Here user_catalog_table should be user catalog table\r\n> + <para>\r\n> + Executing <command>TRUNCATE</command> on\r\n> user_catalog_table\r\n> in a transaction.\r\n> + </para>\r\nThanks for your review.\r\n\r\nAttached the patch-set that addressed those two comments.\r\nI also fixed the commit message a bit in the 2PC specific patch to HEAD.\r\nNo other changes.\r\n\r\nPlease check.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 14 Jun 2021 12:03:00 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Jun 14, 2021 at 5:33 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, June 11, 2021 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the updated patch:\n> > Few comments:\n> > 1) We have used Reordering and Clustering for the same command, we could\n> > rephrase similarly in both places.\n> > + <para>\n> > + Reordering <structname>pg_class</structname> by\n> > <command>CLUSTER</command>\n> > + command in a transaction.\n> > + </para>\n> >\n> > + <para>\n> > + Clustering <structname>pg_trigger</structname> and decoding\n> > <command>PREPARE\n> > + TRANSACTION</command>, if any published table have a trigger\n> > and any\n> > + operations that will be decoded are conducted.\n> > + </para>\n> > + </listitem>\n> >\n> > 2) Here user_catalog_table should be user catalog table\n> > + <para>\n> > + Executing <command>TRUNCATE</command> on\n> > user_catalog_table\n> > in a transaction.\n> > + </para>\n> Thanks for your review.\n>\n> Attached the patch-set that addressed those two comments.\n> I also fixed the commit message a bit in the 2PC specific patch to HEAD.\n> No other changes.\n>\n> Please check.\n\nThanks for the updated patches, the patch applies cleanly in all branches.\nPlease add a commitfest entry for this, so that we don't miss it.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 15 Jun 2021 10:21:11 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Tuesday, June 15, 2021 1:51 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > Attached the patch-set that addressed those two comments.\r\n> > I also fixed the commit message a bit in the 2PC specific patch to HEAD.\r\n> > No other changes.\r\n> >\r\n> > Please check.\r\n> \r\n> Thanks for the updated patches, the patch applies cleanly in all branches.\r\n> Please add a commitfest entry for this, so that we don't miss it.\r\nThank you. I've registered the patch-set in [1].\r\nI'll wait for other reviews from other developers, if any.\r\n\r\n\r\n[1] - https://commitfest.postgresql.org/33/3170/\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 15 Jun 2021 11:10:07 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Jun 14, 2021 at 5:33 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, June 11, 2021 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Attached the patch-set that addressed those two comments.\n>\n\nFew minor comments:\n1.\n+ <listitem>\n+ <para>\n+ Clustering <structname>pg_class</structname> in a transaction.\n\nCan we change above to: Perform <command>CLUSTER</command> on\n<structname>pg_class</structname> in a transaction.\n\n2.\n+ <listitem>\n+ <para>\n+ Executing <command>TRUNCATE</command> on user catalog table\nin a transaction.\n+ </para>\n\nSquare brackets are missing for user.\n\n3.\n+ <indexterm>\n+ <primary>Overview</primary>\n+ </indexterm>\n..\n..\n+ <indexterm>\n+ <primary>Caveats</primary>\n+ </indexterm>\n\nWhy are these required when we already have titles? I have seen other\nplaces in the docs where we use titles for Overview and Caveats but\nthey didn't have similar usage.\n\n\n4.\n<para>\n+ Performing <command>PREPARE TRANSACTION</command> after\n<command>LOCK</command>\n+ command on <structname>pg_class</structname> and logical\ndecoding of published\n+ table.\n\nCan we change above to: <command>PREPARE TRANSACTION</command> after\n<command>LOCK</command> command on <structname>pg_class</structname>\nand allow logical decoding of two-phase transactions.\n\n5.\n+ <para>\n+ Clustering <structname>pg_trigger</structname> and decoding\n<command>PREPARE\n+ TRANSACTION</command>, if any published table have a trigger and any\n+ operations that will be decoded are conducted.\n+ </para>\n\nCan we change above to: <command>PREPARE TRANSACTION</command> after\n<command>CLUSTER</command> command on\n<structname>pg_trigger</structname> and allow logical decoding of\ntwo-phase transactions. This will lead to deadlock only when published\ntable have a trigger.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Jun 2021 15:50:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 3:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 5:33 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Friday, June 11, 2021 2:13 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Attached the patch-set that addressed those two comments.\n> >\n>\n> Few minor comments:\n> 1.\n> + <listitem>\n> + <para>\n> + Clustering <structname>pg_class</structname> in a transaction.\n>\n> Can we change above to: Perform <command>CLUSTER</command> on\n> <structname>pg_class</structname> in a transaction.\n>\n> 2.\n> + <listitem>\n> + <para>\n> + Executing <command>TRUNCATE</command> on user catalog table\n> in a transaction.\n> + </para>\n>\n> Square brackets are missing for user.\n>\n> 3.\n> + <indexterm>\n> + <primary>Overview</primary>\n> + </indexterm>\n> ..\n> ..\n> + <indexterm>\n> + <primary>Caveats</primary>\n> + </indexterm>\n>\n> Why are these required when we already have titles? I have seen other\n> places in the docs where we use titles for Overview and Caveats but\n> they didn't have similar usage.\n>\n\nEven I felt this was not required. I had checked other places and also\nprepared doc by removing it, it works fine.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 17 Jun 2021 07:11:35 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Wednesday, June 16, 2021 7:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Jun 14, 2021 at 5:33 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, June 11, 2021 2:13 PM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> >\r\n> > Attached the patch-set that addressed those two comments.\r\n> >\r\n> \r\n> Few minor comments:\r\n> 1.\r\n> + <listitem>\r\n> + <para>\r\n> + Clustering <structname>pg_class</structname> in a transaction.\r\n> \r\n> Can we change above to: Perform <command>CLUSTER</command> on\r\n> <structname>pg_class</structname> in a transaction.\r\nLooks better.\r\n\r\n> \r\n> 2.\r\n> + <listitem>\r\n> + <para>\r\n> + Executing <command>TRUNCATE</command> on user catalog\r\n> table\r\n> in a transaction.\r\n> + </para>\r\n> \r\n> Square brackets are missing for user.\r\nThanks for catching it. You are right.\r\n\r\n\r\n> 3.\r\n> + <indexterm>\r\n> + <primary>Overview</primary>\r\n> + </indexterm>\r\n> ..\r\n> ..\r\n> + <indexterm>\r\n> + <primary>Caveats</primary>\r\n> + </indexterm>\r\n> \r\n> Why are these required when we already have titles? I have seen other places\r\n> in the docs where we use titles for Overview and Caveats but they didn't have\r\n> similar usage.\r\nSorry, this was a mistake. We didn't need those sections.\r\n\r\n\r\n> 4.\r\n> <para>\r\n> + Performing <command>PREPARE TRANSACTION</command>\r\n> after\r\n> <command>LOCK</command>\r\n> + command on <structname>pg_class</structname> and logical\r\n> decoding of published\r\n> + table.\r\n> \r\n> Can we change above to: <command>PREPARE\r\n> TRANSACTION</command> after <command>LOCK</command>\r\n> command on <structname>pg_class</structname> and allow logical\r\n> decoding of two-phase transactions.\r\n> \r\n> 5.\r\n> + <para>\r\n> + Clustering <structname>pg_trigger</structname> and decoding\r\n> <command>PREPARE\r\n> + TRANSACTION</command>, if any published table have a trigger\r\n> and any\r\n> + operations that will be decoded are conducted.\r\n> + </para>\r\n> \r\n> Can we change above to: <command>PREPARE\r\n> TRANSACTION</command> after <command>CLUSTER</command>\r\n> command on <structname>pg_trigger</structname> and allow logical\r\n> decoding of two-phase transactions. This will lead to deadlock only when\r\n> published table have a trigger.\r\nYeah, I needed the nuance to turn on logical decoding of two-phase transactions...\r\nYour above suggestions are much tidier and more accurate than mine.\r\nI agree with your all suggestions.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 17 Jun 2021 03:11:27 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 8:41 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, June 16, 2021 7:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Jun 14, 2021 at 5:33 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Friday, June 11, 2021 2:13 PM vignesh C <vignesh21@gmail.com>\n> > wrote:\n> > >\n> > > Attached the patch-set that addressed those two comments.\n> > >\n> >\n> > Few minor comments:\n> > 1.\n> > + <listitem>\n> > + <para>\n> > + Clustering <structname>pg_class</structname> in a transaction.\n> >\n> > Can we change above to: Perform <command>CLUSTER</command> on\n> > <structname>pg_class</structname> in a transaction.\n> Looks better.\n>\n> >\n> > 2.\n> > + <listitem>\n> > + <para>\n> > + Executing <command>TRUNCATE</command> on user catalog\n> > table\n> > in a transaction.\n> > + </para>\n> >\n> > Square brackets are missing for user.\n> Thanks for catching it. You are right.\n>\n>\n> > 3.\n> > + <indexterm>\n> > + <primary>Overview</primary>\n> > + </indexterm>\n> > ..\n> > ..\n> > + <indexterm>\n> > + <primary>Caveats</primary>\n> > + </indexterm>\n> >\n> > Why are these required when we already have titles? I have seen other places\n> > in the docs where we use titles for Overview and Caveats but they didn't have\n> > similar usage.\n> Sorry, this was a mistake. We didn't need those sections.\n>\n>\n> > 4.\n> > <para>\n> > + Performing <command>PREPARE TRANSACTION</command>\n> > after\n> > <command>LOCK</command>\n> > + command on <structname>pg_class</structname> and logical\n> > decoding of published\n> > + table.\n> >\n> > Can we change above to: <command>PREPARE\n> > TRANSACTION</command> after <command>LOCK</command>\n> > command on <structname>pg_class</structname> and allow logical\n> > decoding of two-phase transactions.\n> >\n> > 5.\n> > + <para>\n> > + Clustering <structname>pg_trigger</structname> and decoding\n> > <command>PREPARE\n> > + TRANSACTION</command>, if any published table have a trigger\n> > and any\n> > + operations that will be decoded are conducted.\n> > + </para>\n> >\n> > Can we change above to: <command>PREPARE\n> > TRANSACTION</command> after <command>CLUSTER</command>\n> > command on <structname>pg_trigger</structname> and allow logical\n> > decoding of two-phase transactions. This will lead to deadlock only when\n> > published table have a trigger.\n> Yeah, I needed the nuance to turn on logical decoding of two-phase transactions...\n> Your above suggestions are much tidier and more accurate than mine.\n> I agree with your all suggestions.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Jun 2021 16:27:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 8:41 AM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n>\n> Pushed!\n>\n[Responding to Simon's comments]\n\n> If LOCK and TRUNCATE is advised against on all user catalog tables, why would CLUSTER only apply to pg_class? Surely its locking\n> level is the same as LOCK?\n>\n\nCluster will also apply to all user catalog tables. I think we can\nextend it slightly as we have mentioned for Lock.\n\n> The use of \"[user]\" isn't fully explained, so it might not be clear that this applies to both Postgres catalog tables and any user tables\n> that have been nominated as catalogs. Probably worth linking to the \"Capabilities\" section to explain.\n>\n\nSounds reasonable.\n\n> It would be worth coalescing the following sections into a single page, since they are just a few lines each:\n> Streaming Replication Protocol Interface\n> Logical Decoding SQL Interface\n> System Catalogs Related to Logical Decoding\n>\n\nI think this is worth considering but we might want to discuss this as\na separate change/patch.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Jun 2021 17:27:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 12:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jun 17, 2021 at 8:41 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > Pushed!\n> >\n> [Responding to Simon's comments]\n>\n> > If LOCK and TRUNCATE is advised against on all user catalog tables, why would CLUSTER only apply to pg_class? Surely its locking\n> > level is the same as LOCK?\n> >\n>\n> Cluster will also apply to all user catalog tables. I think we can\n> extend it slightly as we have mentioned for Lock.\n\nOK, good.\n\n> > The use of \"[user]\" isn't fully explained, so it might not be clear that this applies to both Postgres catalog tables and any user tables\n> > that have been nominated as catalogs. Probably worth linking to the \"Capabilities\" section to explain.\n> >\n>\n> Sounds reasonable.\n>\n> > It would be worth coalescing the following sections into a single page, since they are just a few lines each:\n> > Streaming Replication Protocol Interface\n> > Logical Decoding SQL Interface\n> > System Catalogs Related to Logical Decoding\n> >\n>\n> I think this is worth considering but we might want to discuss this as\n> a separate change/patch.\n\nMakes sense.\n\nThanks\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 17 Jun 2021 14:34:14 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Thursday, June 17, 2021 10:34 PM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\r\n> On Thu, Jun 17, 2021 at 12:57 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Thu, Jun 17, 2021 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Thu, Jun 17, 2021 at 8:41 AM osumi.takamichi@fujitsu.com\r\n> > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > Pushed!\r\n> > >\r\n> > [Responding to Simon's comments]\r\n> >\r\n> > > If LOCK and TRUNCATE is advised against on all user catalog tables,\r\n> > > why would CLUSTER only apply to pg_class? Surely its locking level is the\r\n> same as LOCK?\r\n> > >\r\n> >\r\n> > Cluster will also apply to all user catalog tables. I think we can\r\n> > extend it slightly as we have mentioned for Lock.\r\n> \r\n> OK, good.\r\n> \r\n> > > The use of \"[user]\" isn't fully explained, so it might not be clear\r\n> > > that this applies to both Postgres catalog tables and any user tables that\r\n> have been nominated as catalogs. Probably worth linking to the \"Capabilities\"\r\n> section to explain.\r\n> > >\r\n> >\r\n> > Sounds reasonable.\r\nSimon, I appreciate your suggestions and yes,\r\nif the user catalog table is referenced by the output plugin,\r\nit can be another cause of the deadlock.\r\n\r\nI'm going to post the patch for the those two changes, accordingly.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 18 Jun 2021 02:40:48 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Friday, June 18, 2021 11:41 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\r\n> On Thursday, June 17, 2021 10:34 PM Simon Riggs\r\n> <simon.riggs@enterprisedb.com> wrote:\r\n> > On Thu, Jun 17, 2021 at 12:57 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > On Thu, Jun 17, 2021 at 4:27 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > wrote:\r\n> > > >\r\n> > > > On Thu, Jun 17, 2021 at 8:41 AM osumi.takamichi@fujitsu.com\r\n> > > > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Pushed!\r\n> > > >\r\n> > > [Responding to Simon's comments]\r\n> > >\r\n> > > > If LOCK and TRUNCATE is advised against on all user catalog\r\n> > > > tables, why would CLUSTER only apply to pg_class? Surely its\r\n> > > > locking level is the\r\n> > same as LOCK?\r\n> > > >\r\n> > >\r\n> > > Cluster will also apply to all user catalog tables. I think we can\r\n> > > extend it slightly as we have mentioned for Lock.\r\n> >\r\n> > OK, good.\r\n> >\r\n> > > > The use of \"[user]\" isn't fully explained, so it might not be\r\n> > > > clear that this applies to both Postgres catalog tables and any\r\n> > > > user tables that\r\n> > have been nominated as catalogs. Probably worth linking to the\r\n> \"Capabilities\"\r\n> > section to explain.\r\n> > > >\r\n> > >\r\n> > > Sounds reasonable.\r\n> Simon, I appreciate your suggestions and yes, if the user catalog table is\r\n> referenced by the output plugin, it can be another cause of the deadlock.\r\n> \r\n> I'm going to post the patch for the those two changes, accordingly.\r\nHi, I've made the patch-set to cover the discussion above for all-supported versions.\r\nPlease have a look at those.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 18 Jun 2021 08:55:17 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 2:25 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, June 18, 2021 11:41 AM osumi.takamichi@fujitsu.com <osumi.takamichi@fujitsu.com> wrote:\n\n> > Simon, I appreciate your suggestions and yes, if the user catalog table is\n> > referenced by the output plugin, it can be another cause of the deadlock.\n> >\n> > I'm going to post the patch for the those two changes, accordingly.\n> Hi, I've made the patch-set to cover the discussion above for all-supported versions.\n> Please have a look at those.\n>\n\nI have slightly modified your patch, see if the attached looks okay to\nyou? This is just a HEAD patch, I'll modify the back-branch patches\naccordingly.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 19 Jun 2021 15:21:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Saturday, June 19, 2021 6:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Jun 18, 2021 at 2:25 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, June 18, 2021 11:41 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> \r\n> > > Simon, I appreciate your suggestions and yes, if the user catalog\r\n> > > table is referenced by the output plugin, it can be another cause of the\r\n> deadlock.\r\n> > >\r\n> > > I'm going to post the patch for the those two changes, accordingly.\r\n> > Hi, I've made the patch-set to cover the discussion above for all-supported\r\n> versions.\r\n> > Please have a look at those.\r\n> \r\n> I have slightly modified your patch, see if the attached looks okay to you? This\r\n> is just a HEAD patch, I'll modify the back-branch patches accordingly.\r\nThank you for updating the patch.\r\nThe patch becomes much better. Yet, I have one suggestion.\r\n\r\n* doc/src/sgml/logicaldecoding.sgml\r\n <itemizedlist>\r\n <listitem>\r\n <para>\r\n Issuing an explicit <command>LOCK</command> on <structname>pg_class</structname>\r\n- (or any other catalog table) in a transaction.\r\n+ (or any other [user] catalog table) in a transaction.\r\n </para>\r\n </listitem>\r\n\r\n <listitem>\r\n <para>\r\n- Perform <command>CLUSTER</command> on <structname>pg_class</structname> in\r\n- a transaction.\r\n+ Perform <command>CLUSTER</command> on <structname>pg_class</structname> (or any\r\n+ other [user] catalog table) in a transaction.\r\n </para>\r\n </listitem>\r\n\r\n <listitem>\r\n <para>\r\n <command>PREPARE TRANSACTION</command> after <command>LOCK</command> command\r\n- on <structname>pg_class</structname> and allow logical decoding of two-phase\r\n- transactions.\r\n+ on <structname>pg_class</structname> (or any other [user] catalog table) and\r\n+ allow logical decoding of two-phase transactions.\r\n </para>\r\n </listitem>\r\n\r\n <listitem>\r\n <para>\r\n <command>PREPARE TRANSACTION</command> after <command>CLUSTER</command>\r\n- command on <structname>pg_trigger</structname> and allow logical decoding of\r\n- two-phase transactions. This will lead to deadlock only when published table\r\n- have a trigger.\r\n+ command on <structname>pg_trigger</structname> (or any other [user] catalog\r\n+ table) and allow logical decoding of two-phase transactions. This will lead\r\n+ to deadlock only when published table have a trigger.\r\n\r\n\r\nNow we have the four paren supplementary descriptions,\r\nnot to make users miss any other [user] catalog tables.\r\nBecause of this, the built output html gives me some redundant\r\nimpression, for that parts. Accordingly, couldn't we move them\r\nto outside of the itemizedlist section in a simple manner ?\r\n\r\nFor example, to insert a sentence below the list,\r\nafter removing the paren descriptions in the listitem, which says\r\n\"Note that those commands that can cause deadlock apply to not only\r\nexplicitly indicated system catalog tables above but also any other [user] catalog table.\"\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Sun, 20 Jun 2021 03:58:15 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Sun, Jun 20, 2021 at 9:28 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, June 19, 2021 6:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Jun 18, 2021 at 2:25 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Friday, June 18, 2021 11:41 AM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > > > Simon, I appreciate your suggestions and yes, if the user catalog\n> > > > table is referenced by the output plugin, it can be another cause of the\n> > deadlock.\n> > > >\n> > > > I'm going to post the patch for the those two changes, accordingly.\n> > > Hi, I've made the patch-set to cover the discussion above for all-supported\n> > versions.\n> > > Please have a look at those.\n> >\n> > I have slightly modified your patch, see if the attached looks okay to you? This\n> > is just a HEAD patch, I'll modify the back-branch patches accordingly.\n> Thank you for updating the patch.\n> The patch becomes much better. Yet, I have one suggestion.\n>\n> * doc/src/sgml/logicaldecoding.sgml\n> <itemizedlist>\n> <listitem>\n> <para>\n> Issuing an explicit <command>LOCK</command> on <structname>pg_class</structname>\n> - (or any other catalog table) in a transaction.\n> + (or any other [user] catalog table) in a transaction.\n> </para>\n> </listitem>\n>\n> <listitem>\n> <para>\n> - Perform <command>CLUSTER</command> on <structname>pg_class</structname> in\n> - a transaction.\n> + Perform <command>CLUSTER</command> on <structname>pg_class</structname> (or any\n> + other [user] catalog table) in a transaction.\n> </para>\n> </listitem>\n>\n> <listitem>\n> <para>\n> <command>PREPARE TRANSACTION</command> after <command>LOCK</command> command\n> - on <structname>pg_class</structname> and allow logical decoding of two-phase\n> - transactions.\n> + on <structname>pg_class</structname> (or any other [user] catalog table) and\n> + allow logical decoding of two-phase transactions.\n> </para>\n> </listitem>\n>\n> <listitem>\n> <para>\n> <command>PREPARE TRANSACTION</command> after <command>CLUSTER</command>\n> - command on <structname>pg_trigger</structname> and allow logical decoding of\n> - two-phase transactions. This will lead to deadlock only when published table\n> - have a trigger.\n> + command on <structname>pg_trigger</structname> (or any other [user] catalog\n> + table) and allow logical decoding of two-phase transactions. This will lead\n> + to deadlock only when published table have a trigger.\n>\n>\n> Now we have the four paren supplementary descriptions,\n> not to make users miss any other [user] catalog tables.\n> Because of this, the built output html gives me some redundant\n> impression, for that parts. Accordingly, couldn't we move them\n> to outside of the itemizedlist section in a simple manner ?\n>\n> For example, to insert a sentence below the list,\n> after removing the paren descriptions in the listitem, which says\n> \"Note that those commands that can cause deadlock apply to not only\n> explicitly indicated system catalog tables above but also any other [user] catalog table.\"\n>\n\nSounds reasonable to me. /but also any other/but also to any other/,\nto seems to be missing in the above line. Kindly send an update patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 20 Jun 2021 11:53:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Sunday, June 20, 2021 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sun, Jun 20, 2021 at 9:28 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > * doc/src/sgml/logicaldecoding.sgml\r\n...\r\n> >\r\n> > Now we have the four paren supplementary descriptions, not to make\r\n> > users miss any other [user] catalog tables.\r\n> > Because of this, the built output html gives me some redundant\r\n> > impression, for that parts. Accordingly, couldn't we move them to\r\n> > outside of the itemizedlist section in a simple manner ?\r\n> >\r\n> > For example, to insert a sentence below the list, after removing the\r\n> > paren descriptions in the listitem, which says \"Note that those\r\n> > commands that can cause deadlock apply to not only explicitly\r\n> > indicated system catalog tables above but also any other [user] catalog table.\"\r\n> \r\n> Sounds reasonable to me. /but also any other/but also to any other/, to\r\n> seems to be missing in the above line. Kindly send an update patch.\r\nExcuse me, I don't understand the second sentence.\r\nI wrote \"but also\" clause in my example.\r\n\r\nAlso, attached the patch for the change to the HEAD.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Sun, 20 Jun 2021 12:49:39 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Sunday, June 20, 2021 9:50 PM I wrote:\r\n> On Sunday, June 20, 2021 3:23 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > On Sun, Jun 20, 2021 at 9:28 AM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > > * doc/src/sgml/logicaldecoding.sgml\r\n> ...\r\n> > >\r\n> > > Now we have the four paren supplementary descriptions, not to make\r\n> > > users miss any other [user] catalog tables.\r\n> > > Because of this, the built output html gives me some redundant\r\n> > > impression, for that parts. Accordingly, couldn't we move them to\r\n> > > outside of the itemizedlist section in a simple manner ?\r\n> > >\r\n> > > For example, to insert a sentence below the list, after removing the\r\n> > > paren descriptions in the listitem, which says \"Note that those\r\n> > > commands that can cause deadlock apply to not only explicitly\r\n> > > indicated system catalog tables above but also any other [user] catalog\r\n> table.\"\r\n> >\r\n> > Sounds reasonable to me. /but also any other/but also to any other/,\r\n> > to seems to be missing in the above line. Kindly send an update patch.\r\n> Excuse me, I don't understand the second sentence.\r\n> I wrote \"but also\" clause in my example.\r\n> \r\n> Also, attached the patch for the change to the HEAD.\r\nI've updated the patch to follow the correction Amit-san mentioned.\r\nPlease check.\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 21 Jun 2021 03:18:10 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: locking [user] catalog tables vs 2pc vs logical rep"
},
{
"msg_contents": "On Mon, Jun 21, 2021 at 8:48 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Sunday, June 20, 2021 9:50 PM I wrote:\n> > On Sunday, June 20, 2021 3:23 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > On Sun, Jun 20, 2021 at 9:28 AM osumi.takamichi@fujitsu.com\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > > * doc/src/sgml/logicaldecoding.sgml\n> > ...\n> > > >\n> > > > Now we have the four paren supplementary descriptions, not to make\n> > > > users miss any other [user] catalog tables.\n> > > > Because of this, the built output html gives me some redundant\n> > > > impression, for that parts. Accordingly, couldn't we move them to\n> > > > outside of the itemizedlist section in a simple manner ?\n> > > >\n> > > > For example, to insert a sentence below the list, after removing the\n> > > > paren descriptions in the listitem, which says \"Note that those\n> > > > commands that can cause deadlock apply to not only explicitly\n> > > > indicated system catalog tables above but also any other [user] catalog\n> > table.\"\n> > >\n> > > Sounds reasonable to me. /but also any other/but also to any other/,\n> > > to seems to be missing in the above line. Kindly send an update patch.\n> > Excuse me, I don't understand the second sentence.\n> > I wrote \"but also\" clause in my example.\n> >\n> > Also, attached the patch for the change to the HEAD.\n> I've updated the patch to follow the correction Amit-san mentioned.\n> Please check.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 24 Jun 2021 15:02:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: locking [user] catalog tables vs 2pc vs logical rep"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nRecently, I took some performance measurements for CREATE TABLE AS. \nhttps://www.postgresql.org/message-id/34549865667a4a3bb330ebfd035f85d3%40G08CNEXMBPEKD05.g08.fujitsu.local\n\nThen I found an issue with unbalanced tuple distribution among workers (99% of tuples read by one worker) in a specific case, which leads the underlying parallel select part to give no further performance gain as we would expect.\nIt's happening in master (HEAD). \n\nI think this is not a normal phenomenon, because we pay the costs that parallel mode needs, but we don't get the benefits we want.\nSo, is there a way to improve it to achieve the same benefits as a plain parallel select?\n\nBelow are the test details:\n1. test environment\n   CentOS 8.2, 128G RAM, 40 processors (Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz), SAS disk\n2. test execution\n   PSA test_uneven_workers.sql, which includes my test data and steps.\n3. test results\nCREATE TABLE ... AS SELECT ..., with the underlying select querying 130 million rows (about 5G data size) out of 200 million (about 8G source data size). Each case was run 30 times.\n\n| | query 130 million |\n|------------------------------------|----------------|-----|\n|max_parallel_workers_per_gather | Execution Time |%reg |\n|------------------------------------|----------------|-----|\n|max_parallel_workers_per_gather = 2 | 141002.030 |-1% |\n|max_parallel_workers_per_gather = 4 | 140957.221 |-1% |\n|max_parallel_workers_per_gather = 8 | 142445.061 | 0% |\n|max_parallel_workers_per_gather = 0 | 142580.535 | |\n\nAccording to the above results, we get almost no benefit, especially when we increase max_parallel_workers_per_gather to 8 or larger.\nI think the reason the parallel select doesn't achieve the desired performance is that tuples are unevenly distributed among the workers, as shown in the query plan. 
\n\nQuery plan:\nmax_parallel_workers_per_gather = 8, look at worker 4, 99% of tuples read by it.\npostgres=# explain analyze verbose create table test1 as select func_restricted(),b,c from x1 where a%2=0 or a%3=0;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Gather (cost=1000.00..2351761.77 rows=1995002 width=12) (actual time=0.411..64451.616 rows=133333333 loops=1)\n Output: func_restricted(), b, c\n Workers Planned: 7\n Workers Launched: 7\n -> Parallel Seq Scan on public.x1 (cost=0.00..1652511.07 rows=285000 width=8) (actual time=0.016..3553.626 rows=16666667 loops=8)\n Output: b, c\n Filter: (((x1.a % 2) = 0) OR ((x1.a % 3) = 0))\n Rows Removed by Filter: 8333333\n Worker 0: actual time=0.014..21.415 rows=126293 loops=1\n Worker 1: actual time=0.015..21.564 rows=126293 loops=1\n Worker 2: actual time=0.014..21.575 rows=126294 loops=1\n Worker 3: actual time=0.016..21.701 rows=126293 loops=1\n Worker 4: actual time=0.019..28263.677 rows=132449393 loops=1\n Worker 5: actual time=0.019..21.470 rows=126180 loops=1\n Worker 6: actual time=0.015..34.441 rows=126293 loops=1\n Planning Time: 0.210 ms\n Execution Time: 142392.808 ms\n\nOccurrence conditions:\n1. the query plan is of the kind \"serial insert + parallel select\".\n2. the underlying select queries a large data size (e.g. 130 million rows out of 200 million). It doesn't happen with small data sizes (millions of rows), from what I've tested so far.\n\nBased on the above, IMHO, I guess it may be caused by the leader's write rate not keeping up with the workers' read rate, so the tuples of one worker get blocked in the queue and accumulate.\n\nAny thoughts?\n\nRegards,\nTang",
"msg_date": "Tue, 23 Feb 2021 03:43:55 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Tuples unbalance distribution among workers in underlying parallel\n select with serial insert"
}
] |
[
{
"msg_contents": "Planning is expensive and we use plancache to bypass its effect. I find the\n$subject recently which is caused by we register NAMESPACEOID invalidation\nmessage for pg_temp_%s as well as other normal namespaces. Is it a\nmust?\n\nWe can demo the issue with the below case:\n\nSess1:\ncreate table t (a int);\nprepare s as select * from t;\npostgres=# execute s;\nINFO: There is no cached plan now\n a\n---\n(0 rows)\n\npostgres=# execute s; -- The plan is cached.\n a\n---\n(0 rows)\n\n\nSess2:\ncreate temp table m (a int);\n\nSess1:\n\npostgres=# execute s; -- The cached plan is reset.\nINFO: There is no cached plan now\n a\n---\n(0 rows)\n\n\nWhat I want to do now is bypass the invalidation message totally if it is a\npg_temp_%d\nnamespace. (RELATION_IS_OTHER_TEMP). With this change, the impact is not\nonly\nthe plan cache is not reset but also all the other stuff in\nSysCacheInvalidate/CallSyscacheCallbacks will not be called (for pg_temp_%d\nchange\nonly). I think pg_temp_%d is not meaningful for others, so I think the\nbypassing is OK.\nI still have not kicked off any coding so far, I want to know if it is a\ncorrect thing to do?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Tue, 23 Feb 2021 12:07:37 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_temp_%d namespace creation can invalidate all the cached plan in\n other backends"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 12:07 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Planning is expensive and we use plancache to bypass its effect. I find the\n> $subject recently which is caused by we register NAMESPACEOID invalidation\n> message for pg_temp_%s as well as other normal namespaces. Is it a\n> must?\n>\n> We can demo the issue with the below case:\n>\n> Sess1:\n> create table t (a int);\n> prepare s as select * from t;\n> postgres=# execute s;\n> INFO: There is no cached plan now\n> a\n> ---\n> (0 rows)\n>\n> postgres=# execute s; -- The plan is cached.\n> a\n> ---\n> (0 rows)\n>\n>\n> Sess2:\n> create temp table m (a int);\n>\n> Sess1:\n>\n> postgres=# execute s; -- The cached plan is reset.\n> INFO: There is no cached plan now\n> a\n> ---\n> (0 rows)\n>\n>\n> What I want to do now is bypass the invalidation message totally if it is\n> a pg_temp_%d\n> namespace. (RELATION_IS_OTHER_TEMP).\n>\n\nPlease ignore the word \"RELATION_IS_OTHER_TEMP\", it is pasted here by\naccident..\n\n\n> With this change, the impact is not only\n> the plan cache is not reset but also all the other stuff in\n> SysCacheInvalidate/CallSyscacheCallbacks will not be called (for\n> pg_temp_%d change\n> only). I think pg_temp_%d is not meaningful for others, so I think the\n> bypassing is OK.\n> I still have not kicked off any coding so far, I want to know if it is a\n> correct thing to do?\n>\n> --\n> Best Regards\n> Andy Fan (https://www.aliyun.com/)\n>\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Tue, 23 Feb 2021 12:14:50 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_temp_%d namespace creation can invalidate all the cached plan\n in other backends"
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> Planning is expensive and we use plancache to bypass its effect. I find the\n> $subject recently which is caused by we register NAMESPACEOID invalidation\n> message for pg_temp_%s as well as other normal namespaces. Is it a\n> must?\n\nSince we don't normally delete those namespaces once they exist,\nthe number of such events is negligible over the life of a database\n(at least in production scenarios). I'm having a very hard time\ngetting excited about spending effort here.\n\nAlso, you can't just drop the inval event, because even if you\nbelieve it's irrelevant to other backends (a questionable\nassumption), it certainly is relevant locally.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Feb 2021 00:50:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_temp_%d namespace creation can invalidate all the cached plan\n in other backends"
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 1:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andy Fan <zhihui.fan1213@gmail.com> writes:\n> > Planning is expensive and we use plancache to bypass its effect. I find\n> the\n> > $subject recently which is caused by we register NAMESPACEOID\n> invalidation\n> > message for pg_temp_%s as well as other normal namespaces. Is it a\n> > must?\n>\n> Since we don't normally delete those namespaces once they exist,\n> the number of such events is negligible over the life of a database\n> (at least in production scenarios).\n\n\nI do miss this part during my test. Thanks for sharing this.\n\n\n> I'm having a very hard time\n> getting excited about spending effort here.\n>\n\nWhile I admit this should happen rarely in production, I still think we\nmay need to fix it. This is kind of tech debt. For example, why my\napplication has a spike on time xx:yy:zz (Assume it happens even\nit is rare). I think there is a very limited DBA who can find out this\nreason easily. Even he can find out it, he is still hard to make others\nto understand and be convinced. So why shouldn't we just avoid it\nif the effort is not huge?\n\n(I do find this in my production case, where the case starts\nfrom this invalidation message, and crashes at ResetPlanCache.\nI'm using a modified version, so the crash probably not the community\nversion's fault and we will fix it separately. )\n\nAlso, you can't just drop the inval event, because even if you\n> believe it's irrelevant to other backends (a questionable\n> assumption), it certainly is relevant locally.\n>\n\nThanks for this hint! Can just finding a place to run\nSysCacheInvalidate/CallSyscacheCallbacks locally fix this issue?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Tue, 23 Feb 2021 16:04:47 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_temp_%d namespace creation can invalidate all the cached plan\n in other backends"
}
] |
[
{
"msg_contents": "Hi,\n\nover the last ~year I spent a lot of time trying to figure out how we could\nadd AIO (asynchronous IO) and DIO (direct IO) support to postgres. While\nthere's still a *lot* of open questions, I think I now have a decent handle on\nmost of the bigger architectural questions. Thus this long email.\n\n\nJust to be clear: I don't expect the current design to survive as-is. If\nthere's a few sentences below that sound a bit like describing the new world,\nthat's because they're from the README.md in the patch series...\n\n\n## Why Direct / unbuffered IO?\n\nThe main reasons to want to use Direct IO are:\n\n- Lower CPU usage / higher throughput. Particularly on modern storage\n  buffered writes are bottlenecked by the operating system having to\n  copy data from the kernel's page cache to postgres buffer pool using\n  the CPU. Whereas direct IO can often move the data directly between\n  the storage devices and postgres' buffer cache, using DMA. While\n  that transfer is ongoing, the CPU is free to perform other work,\n  with little impact.\n- Avoiding double buffering between operating system cache and\n  postgres' shared_buffers.\n- Better control over the timing and pace of dirty data writeback.\n- Potential for concurrent WAL writes (via O_DIRECT | O_DSYNC writes)\n\n\nThe main reasons *not* to use Direct IO are:\n\n- Without AIO, Direct IO is unusably slow for most purposes.\n- Even with AIO, many parts of postgres need to be modified to perform\n  explicit prefetching.\n- In situations where shared_buffers cannot be set appropriately\n  large, e.g. because there are many different postgres instances\n  hosted on shared hardware, performance will often be worse than when\n  using buffered IO.\n\n\n## Why Asynchronous IO\n\n- Without AIO we cannot use DIO\n\n- Without asynchronous IO (AIO) PG has to rely on the operating system\n  to hide the cost of synchronous IO from Postgres. 
While this works\n surprisingly well in a lot of workloads, it does not do as good a job\n on prefetching and controlled writeback as we would like.\n- There are important expensive operations like fdatasync() where the\n operating system cannot hide the storage latency. This is particularly\n important for WAL writes, where the ability to asynchronously issue\n fdatasync() or O_DSYNC writes can yield significantly higher\n throughput.\n- Fetching data into shared buffers asynchronously and concurrently with query\n execution means there is more CPU time for query execution.\n\n\n## High level difficulties adding AIO/DIO support\n\n- Optionally using AIO leads to convoluted and / or duplicated code.\n\n- Platform dependency: The common AIO APIs are typically specific to one\n platform (linux AIO, linux io_uring, windows IOCP, windows overlapped IO) or\n a few platforms (posix AIO, but there's many differences).\n\n- There are a lot of separate places doing IO in PG. Moving all of these to\n use efficiently use AIO is an, um, large undertaking.\n\n- Nothing in the buffer management APIs expects there to be more than one IO\n to be in progress at the same time - which is required to do AIO.\n\n\n## Portability & Duplication\n\nTo avoid the issue of needing non-AIO codepaths to support platforms without\nnative AIO support a worker process based AIO implementation exists (and is\ncurrently the default). This also is convenient to check if a problem is\nrelated to the native IO implementation or not.\n\nThanks to Thomas Munro for helping a *lot* around this area. 
He wrote\nthe worker mode, the posix aio mode, added CI, did a lot of other\ntesting, listened to me...\n\n\n## Deadlock and Starvation Dangers due to AIO\n\nUsing AIO in a naive way can easily lead to deadlocks in an environment where\nthe source/target of AIO are shared resources, like pages in postgres'\nshared_buffers.\n\nConsider one backend performing readahead on a table, initiating IO for a\nnumber of buffers ahead of the current \"scan position\". If that backend then\nperforms some operation that blocks, or even just is slow, the IO completion\nfor the asynchronously initiated read may not be processed.\n\nThis AIO implementation solves this problem by requiring that AIO methods\neither allow AIO completions to be processed by any backend in the system\n(e.g. io_uring, and indirectly posix, via signal handlers), or to guarantee\nthat AIO processing will happen even when the issuing backend is blocked\n(e.g. worker mode, which offloads completion processing to the AIO workers).\n\n\n## AIO API overview\n\nThe main steps to use AIO (without higher level helpers) are:\n\n1) acquire an \"unused\" AIO: pgaio_io_get()\n\n2) start some IO, this is done by functions like\n pgaio_io_start_(read|write|fsync|flush_range)_(smgr|sb|raw|wal)\n\n The (read|write|fsync|flush_range) indicates the operation, whereas\n (smgr|sb|raw|wal) determines how IO completions, errors, ... are handled.\n\n (see below for more details about this design choice - it might or not be\n right)\n\n3) optionally: assign a backend-local completion callback to the IO\n (pgaio_io_on_completion_local())\n\n4) 2) alone does *not* cause the IO to be submitted to the kernel, but to be\n put on a per-backend list of pending IOs. 
The pending IOs can be explicitly\n be flushed pgaio_submit_pending(), but will also be submitted if the\n pending list gets to be too large, or if the current backend waits for the\n IO.\n\n The are two main reasons not to submit the IO immediately:\n - If adjacent, we can merge several IOs into one \"kernel level\" IO during\n submission. Larger IOs are considerably more efficient.\n - Several AIO APIs allow to submit a batch of IOs in one system call.\n\n5) wait for the IO: pgaio_io_wait() waits for an IO \"owned\" by the current\n backend. When other backends may need to wait for an IO to finish,\n pgaio_io_ref() can put a reference to that AIO in shared memory (e.g. a\n BufferDesc), which can be waited for using pgaio_io_wait_ref().\n\n6) Process the results of the request. If a callback was registered in 3),\n this isn't always necessary. The results of AIO can be accessed using\n pgaio_io_result() which returns an integer where negative numbers are\n -errno, and positive numbers are the [partial] success conditions\n (e.g. potentially indicating a short read).\n\n7) release ownership of the io (pgaio_io_release()) or reuse the IO for\n another operation (pgaio_io_recycle())\n\n\nMost places that want to use AIO shouldn't themselves need to care about\nmanaging the number of writes in flight, or the readahead distance. To help\nwith that there are two helper utilities, a \"streaming read\" and a \"streaming\nwrite\".\n\nThe \"streaming read\" helper uses a callback to determine which blocks to\nprefetch - that allows to do readahead in a sequential fashion but importantly\nalso allows to asynchronously \"read ahead\" non-sequential blocks.\n\nE.g. for vacuum, lazy_scan_heap() has a callback that uses the visibility map\nto figure out which block needs to be read next. Similarly lazy_vacuum_heap()\nuses the tids in LVDeadTuples to figure out which blocks are going to be\nneeded. 
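[Editor's note: the flow above -- a callback choosing the next block, a bounded number of reads kept in flight, completions consumed in issue order -- can be modeled in a few lines of Python. This is a toy sketch only: WorkerAio, streaming_read and BLOCK_SIZE are invented names for illustration and are unrelated to the actual pgaio_* C API or PostgreSQL's shared-memory machinery; worker threads here stand in for the patch's "worker mode" AIO processes.]

```python
# Toy model of worker-mode AIO plus a "streaming read" helper: reads are
# submitted to a thread pool (the "AIO workers") while the issuing side
# keeps up to max_in_flight reads queued and consumes results in order.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 8192  # stand-in for a postgres block

class WorkerAio:
    """Submit block reads to worker threads; each returns a waitable handle."""
    def __init__(self, n_workers=2):
        self._pool = ThreadPoolExecutor(max_workers=n_workers)

    def start_read(self, fd, blocknum):
        # analogous to "start some IO" in step 2: queue it, don't wait
        return self._pool.submit(os.pread, fd, BLOCK_SIZE, blocknum * BLOCK_SIZE)

def streaming_read(aio, fd, next_block_cb, max_in_flight=4):
    """Yield block contents in issue order, keeping reads queued ahead.

    next_block_cb returns the next block number to read, or None when done
    (like the vacuum callbacks deciding which block is needed next)."""
    in_flight = []
    done = False
    while True:
        while not done and len(in_flight) < max_in_flight:
            blk = next_block_cb()
            if blk is None:
                done = True
            else:
                in_flight.append(aio.start_read(fd, blk))
        if not in_flight:
            return
        yield in_flight.pop(0).result()  # wait for the oldest IO (step 5)

# usage: build an 8-block file, then "prefetch" a non-sequential block list
with tempfile.NamedTemporaryFile(delete=False) as f:
    for i in range(8):
        f.write(bytes([i]) * BLOCK_SIZE)
    path = f.name

fd = os.open(path, os.O_RDONLY)
blocks = iter([0, 2, 4, 6, None])  # callback-driven, not necessarily sequential
aio = WorkerAio()
data = list(streaming_read(aio, fd, lambda: next(blocks)))
os.close(fd)
os.unlink(path)
assert [b[0] for b in data] == [0, 2, 4, 6]
```

With max_in_flight greater than one the os.pread calls overlap in the workers, which is the whole point: the consumer never issues a read it then synchronously waits on before the next one can start.
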
Here's the latter as an example:\nhttps://github.com/anarazel/postgres/commit/a244baa36bfb252d451a017a273a6da1c09f15a3#diff-3198152613d9a28963266427b380e3d4fbbfabe96a221039c6b1f37bc575b965R1906\n\n\n## IO initialization layering\n\nOne difficulty I had in this process was how to initialize IOs in light of the\nlayering (from bufmgr.c over smgr.c and md.c to fd.c and back, but also\ne.g. xlog.c). Sometimes AIO needs to be initialized on the bufmgr.c level,\nsometimes on the md.c level, sometimes on the level of fd.c. But to be able to\nreact to the completion of any such IO metadata about the operation is needed.\n\nEarly on fd.c initialized IOs, and the context information was just passed\nthrough to fd.c. But that seems quite wrong - fd.c shouldn't have to know\nabout which Buffer an IO is about. But higher levels shouldn't know about\nwhich files an operation resides in either, so they can't do all the work\neither...\n\nTo avoid that, I ended up splitting the \"start an AIO\" operation into a higher\nlevel part, e.g. pgaio_io_start_read_smgr() - which doesn't know about which\nsmgr implementation is in use and thus also not what file/offset we're dealing\nwith, which calls into smgr->md->fd to actually \"prepare\" the IO (i.e. figure\nout file / offset). 
This currently looks like:\n\nvoid\npgaio_io_start_read_smgr(PgAioInProgress *io, struct SMgrRelationData* smgr, ForkNumber forknum,\n\t\t\t\t\t\t BlockNumber blocknum, char *bufdata)\n{\n\tpgaio_io_prepare(io, PGAIO_OP_READ);\n\n\tsmgrstartread(io, smgr, forknum, blocknum, bufdata);\n\n\tio->scb_data.read_smgr.tag = (AioBufferTag){\n\t\t.rnode = smgr->smgr_rnode,\n\t\t.forkNum = forknum,\n\t\t.blockNum = blocknum\n\t};\n\n\tpgaio_io_stage(io, PGAIO_SCB_READ_SMGR);\n}\n\nOnce this reaches the fd.c layer the new FileStartRead() function calls\npgaio_io_prep_read() on the IO - but doesn't need to know anything about weird\nhigher level stuff like relfilenodes.\n\nThe _sb (_sb for shared_buffers) variant stores the Buffer, backend and mode\n(as in ReadBufferMode).\n\n\nI'm not sure this is the right design - but it seems a lot better than what I\nhad earlier...\n\n\n## Callbacks\n\nIn the core AIO pieces there are two different types of callbacks at the\nmoment:\n\nShared callbacks, which can be invoked by any backend (normally the issuing\nbackend / the AIO workers, but can be other backends if they are waiting for\nthe IO to complete). For operations on shared resources (e.g. shared buffer\nreads/writes, or WAL writes) these shared callback needs to transition the\nstate of the object the IO is being done for to completion. E.g. for a shared\nbuffer read that means setting BM_VALID / unsetting BM_IO_IN_PROGRESS.\n\nThe main reason these callbacks exist is that they make it safe for a backend\nto issue non-blocking IO on buffers (see the deadlock section above). As any\nblocked backend can cause the IO to complete, the deadlock danger is gone.\n\n\nLocal callbacks, one of which the issuer of an IO can associate with the\nIO. These can be used to issue further readahead. I initially did not have\nthese, but I found it hard to have a controllable numbers of IO in\nflight. They are currently mainly used for error handling (e.g. 
erroring out\nwhen XLogFileInit() cannot create the file due to ENOSPC), and to issue more\nIO (e.g. readahead for heapam).\n\nThe local callback system isn't quite right, and there's\n\n\n## AIO conversions\n\nCurrently the patch series converts a number of subsystems to AIO. They are of\nvery varying quality. I mainly did the conversions that I considered either be\nof interest architecturally, or that caused a fair bit of pain due to slowness\n(e.g. VACUUMing without AIO is no fun at all when using DIO). Some also for\nfun ;)\n\nMost conversions are fairly simple. E.g. heap scans, checkpointer, bgwriter,\nVACUUM are all not too complicated.\n\nThere are two conversions that are good bit more complicated/experimental:\n\n1) Asynchronous, concurrent, WAL writes. This is important because we right\n now are very bottlenecked by IO latency, because there effectively only\n ever is one WAL IO in flight at the same time. Even though in many\n situations it is possible to issue a WAL write, have one [set of] backends\n wait for that write as its completions satisfies their XLogFlush() needs,\n but concurrently already issue the next WAL write(s) that other backends\n need.\n\n The code here is very crufty, but I think the concept is mostly right.\n\n2) Asynchronous buffer replacement. Even with buffered IO we experience a lot\n of pain when ringbuffers need to write out data (VACUUM!). But with DIO the\n issue gets a lot worse - the kernel can't hide the write latency from us\n anymore. This change makes each backend asynchronously clean out buffers\n that it will need soon. When a ringbuffer is is use this means cleaning out\n buffers in the ringbuffer, when not, performing the clock sweep and cleaning\n out victim buffers. 
Due to 1) the XLogFlush() can also be done\n asynchronously.\n\n\nThere are a *lot* of places that haven't been converted to use AIO.\n\n\n## Stats\n\nThere are two new views: pg_stat_aios showing AIOs that are currently\nin-progress, pg_stat_aio_backends showing per-backend statistics about AIO.\n\n\n## Code:\n\nhttps://github.com/anarazel/postgres/tree/aio\n\nI was not planning to attach all the patches on the way to AIO - it's too many\nright now... I hope I can reduce the size of the series iteratively into\neasier to look at chunks.\n\n\n## TL;DR: Performance numbers\n\nThis is worth an email on its own, and it's pretty late here already and I\nwant to rerun benchmarks before posting more numbers. So here are just a few\nthat I could run before falling asleep.\n\n\n1) 60s of parallel COPY BINARY of a 89MB into separate tables (s_b = 96GB):\n\nslow NVMe SSD\nbranch\t dio clients tps/stddev\tcheckpoint write time\nmaster\t n 8\t\t3.0/2296 ms\t4.1s / 379647 buffers = 723MiB/s\naio\t n 8\t\t3.8/1985 ms\t11.5s / 1028669 buffers = 698MiB/\naio\t y 8\t\t4.7/204 ms\t10.0s / 1164933 buffers = 910MiB/s\n\nraid of 2 fast NVMe SSDs (on pcie3):\nbranch\t dio clients tps/stddev\tcheckpoint write time\nmaster\t n 8\t\t9.7/62 ms\t7.6s / 1206376 buffers = 1240MiB/s\naio\t n 8\t\t11.4/82 ms\t14.3s / 2838129 buffers = 1550MiB/s\naio\t y 8\t\t18.1/56 ms\t8.9s / 4486170 buffers = 3938MiB/s\n\n\n2) pg prewarm speed\n\nraid of 2 fast NVMe SSDs (on pcie3):\n\npg_prewarm(62GB, read)\nbranch\t dio\ttime\t\tbw\nmaster\t n\t17.4s\t\t3626MiB/s\naio\t n\t10.3s\t\t6126MiB/s (higher cpu usage)\naio\t y\t9.8s\t\t6438MiB/s\n\npg_prewarm(62GB, buffer)\nbranch\t dio\ttime\t\tbw\nmaster\t n\t38.3s\t\t1647MiB/s\naio\t n\t13.6s\t\t4639MiB/s (higher cpu usage)\naio\t y\t10.7s\t\t5897MiB/s\n\n\n\n3) parallel sequential scan speed\n\nparallel sequential scan + count(*) of 59GB table:\n\nbranch\t dio\t max_parallel\ttime\nmaster\t n\t 0\t\t\t40.5s\nmaster\t n\t 1\t\t\t22.6s\nmaster\t n\t 
2\t\t\t16.4s\nmaster\t n\t 4\t\t\t10.9s\nmaster\t n\t 8\t\t\t9.3s\n\naio\t y\t 0\t\t\t33.1s\naio\t y\t 1\t\t\t17.2s\naio\t y\t 2\t\t\t11.8s\naio\t y\t 4\t\t\t9.0s\naio\t y\t 8\t\t\t9.2s\n\n\n\nOn local SSDs there's some, but not a huge performance advantage in most\ntransactional r/w workloads. But on cloud storage - which has a lot higher\nlatency - AIO can yield huge advantages. I've seen over 4x.\n\n\nThere's definitely also cases where AIO currently hurts - most of those I just\ndidn't get around to addressing.\n\nThere's a lot more cases in which DIO currently hurts - mostly because the\nnecessary smarts haven't yet been added.\n\n\nComments? Questions?\n\nI plan to send separate emails about smaller chunks of this -\nthe whole topic is just too big. In particular I plan to send something\naround buffer locking / state management - it's one of the core issues\naround this imo.\n\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 23 Feb 2021 02:03:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Tue, 23 Feb 2021 at 05:04, Andres Freund <andres@anarazel.de> wrote:\n>\n> ## Callbacks\n>\n> In the core AIO pieces there are two different types of callbacks at the\n> moment:\n>\n> Shared callbacks, which can be invoked by any backend (normally the issuing\n> backend / the AIO workers, but can be other backends if they are waiting for\n> the IO to complete). For operations on shared resources (e.g. shared buffer\n> reads/writes, or WAL writes) these shared callback needs to transition the\n> state of the object the IO is being done for to completion. E.g. for a shared\n> buffer read that means setting BM_VALID / unsetting BM_IO_IN_PROGRESS.\n>\n> The main reason these callbacks exist is that they make it safe for a backend\n> to issue non-blocking IO on buffers (see the deadlock section above). As any\n> blocked backend can cause the IO to complete, the deadlock danger is gone.\n\nSo firstly this is all just awesome work and I have questions but I\ndon't want them to come across in any way as criticism or as a demand\nfor more work. This is really great stuff, thank you so much!\n\nThe callbacks make me curious about two questions:\n\n1) Is there a chance that a backend issues i/o, the i/o completes in\nsome other backend and by the time this backend gets around to looking\nat the buffer it's already been overwritten again? Do we have to\ninitiate I/O again or have you found a way to arrange that this\nbackend has the buffer pinned from the time the i/o starts even though\nit doesn't handle the completion?\n\n2) Have you made (or considered making) things like sequential scans\n(or more likely bitmap index scans) asynchronous at a higher level?\nThat is, issue a bunch of asynchronous i/o and then handle the pages\nand return the tuples as the pages arrive. Since sequential scans and\nbitmap scans don't guarantee to read the pages in order they're\ngenerally free to return tuples from any page in any order. 
I'm not\nsure how much of a win that would actually be since all the same i/o\nwould be getting executed and the savings in shared buffers would be\nsmall but if there are mostly hot pages you could imagine interleaving\na lot of in-memory pages with the few i/os instead of sitting idle\nwaiting for the async i/o to return.\n\n\n\n> ## Stats\n>\n> There are two new views: pg_stat_aios showing AIOs that are currently\n> in-progress, pg_stat_aio_backends showing per-backend statistics about AIO.\n\nThis is impressive. How easy is it to correlate with system aio stats?\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 23 Feb 2021 14:58:32 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn 2021-02-23 14:58:32 -0500, Greg Stark wrote:\n> So firstly this is all just awesome work and I have questions but I\n> don't want them to come across in any way as criticism or as a demand\n> for more work.\n\nI posted it to get argued with ;).\n\n\n> The callbacks make me curious about two questions:\n>\n> 1) Is there a chance that a backend issues i/o, the i/o completes in\n> some other backend and by the time this backend gets around to looking\n> at the buffer it's already been overwritten again? Do we have to\n> initiate I/O again or have you found a way to arrange that this\n> backend has the buffer pinned from the time the i/o starts even though\n> it doesn't handle the completion?\n\nThe initiator of the IO can just keep a pin for the buffer to prevent\nthat.\n\nThere's a lot of complexity around how to handle pinning and locking\naround asynchronous buffer IO. I plan to send a separate email with\nmore details.\n\nIn short: I made it so that shared buffer IO holds a separate\nrefcount for the duration of the IO - that way the issuer can release\nits own pin without causing a problem (consider e.g. an error while an\nIO is in flight). The pin held by the IO gets released in the completion\ncallback. There's similar trickery with locks - but more about that later.\n\n\n> 2) Have you made (or considered making) things like sequential scans\n> (or more likely bitmap index scans) asynchronous at a higher level?\n> That is, issue a bunch of asynchronous i/o and then handle the pages\n> and return the tuples as the pages arrive. Since sequential scans and\n> bitmap scans don't guarantee to read the pages in order they're\n> generally free to return tuples from any page in any order. 
I'm not\n> sure how much of a win that would actually be since all the same i/o\n> would be getting executed and the savings in shared buffers would be\n> small but if there are mostly hot pages you could imagine interleaving\n> a lot of in-memory pages with the few i/os instead of sitting idle\n> waiting for the async i/o to return.\n\nI have not. Mostly because it'd basically break the entire regression\ntest suite. And probably a lot of user expectations (yes,\nsynchronize_seqscans exists, but it pretty rarely triggers).\n\nI'm not sure how big the win would be - the readahead for heapam.c that\nis in the patch set tries to keep ahead of the \"current scan position\"\nby a certain amount - so it'll issue the reads for the \"missing\" pages\nbefore they're needed. Hopefully early enough to avoid unnecessary\nstalls. But I'm pretty sure there'll be cases where that'd require a\nprohibitively long \"readahead distance\".\n\nI think there's a lot of interesting things along these lines that we\ncould tackle, but since they involve changing results and/or larger\nchanges to avoid those (e.g. detecting when sequential scan order isn't\nvisible to the user) I think it'd make sense to separate them from the\naio patchset itself.\n\nIf we had the infrastructure to detect whether seqscan order matters, we\ncould also switch the tuple-iteration order to \"backwards\" in\nheapgetpage() - right now iterating forward in \"itemid order\" causes a\nlot of cache misses because the hardware prefetcher doesn't predict that\nthe tuples are laid out in \"decreasing pointer order\".\n\n\n> > ## Stats\n> >\n> > There are two new views: pg_stat_aios showing AIOs that are currently\n> > in-progress, pg_stat_aio_backends showing per-backend statistics about AIO.\n>\n> This is impressive. 
How easy is it to correlate with system aio stats?\n\nCould you say a bit more about what you are trying to correlate?\n\nHere's some example IOs from pg_stat_aios.\n\n┌─[ RECORD 1 ]─┬───────────────────────────────────────────────────────────────────────────────────────────────────┐\n│ backend_type │ client backend │\n│ id │ 98 │\n│ gen │ 13736 │\n│ op │ write │\n│ scb │ sb │\n│ flags │ PGAIOIP_IN_PROGRESS | PGAIOIP_INFLIGHT │\n│ ring │ 5 │\n│ owner_pid │ 1866051 │\n│ merge_with │ (null) │\n│ result │ 0 │\n│ desc │ fd: 38, offset: 329588736, nbytes: 8192, already_done: 0, release_lock: 1, buffid: 238576 │\n\n├─[ RECORD 24 ]┼───────────────────────────────────────────────────────────────────────────────────────────────┤\n│ backend_type │ checkpointer │\n│ id │ 1501 │\n│ gen │ 15029 │\n│ op │ write │\n│ scb │ sb │\n│ flags │ PGAIOIP_IN_PROGRESS | PGAIOIP_INFLIGHT │\n│ ring │ 3 │\n│ owner_pid │ 1865275 │\n│ merge_with │ 1288 │\n│ result │ 0 │\n│ desc │ fd: 24, offset: 105136128, nbytes: 8192, already_done: 0, release_lock: 1, buffid: 202705 │\n\n├─[ RECORD 31 ]┼───────────────────────────────────────────────────────────────────────────────────────────────────┤\n│ backend_type │ walwriter │\n│ id │ 90 │\n│ gen │ 26498 │\n│ op │ write │\n│ scb │ wal │\n│ flags │ PGAIOIP_IN_PROGRESS | PGAIOIP_INFLIGHT │\n│ ring │ 5 │\n│ owner_pid │ 1865281 │\n│ merge_with │ 181 │\n│ result │ 0 │\n│ desc │ write_no: 17, fd: 12, offset: 6307840, nbytes: 1048576, already_done: 0, bufdata: 0x7f94dd670000 │\n└──────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────┘\n\n\nAnd the per-backend AIO view shows information like this (too wide to\ndisplay all cols):\n\nSELECT\n pid, last_context, backend_type,\n executed_total_count,issued_total_count,submissions_total_count, inflight_count\nFROM pg_stat_aio_backends sab JOIN pg_stat_activity USING 
(pid);\n┌─────────┬──────────────┬──────────────────────────────┬──────────────────────┬────────────────────┬─────────────────────────┬────────────────┐\n│ pid │ last_context │ backend_type │ executed_total_count │ issued_total_count │ submissions_total_count │ inflight_count │\n├─────────┼──────────────┼──────────────────────────────┼──────────────────────┼────────────────────┼─────────────────────────┼────────────────┤\n│ 1865291 │ 3 │ logical replication launcher │ 0 │ 0 │ 0 │ 0 │\n│ 1865296 │ 0 │ client backend │ 85 │ 85 │ 85 │ 0 │\n│ 1865341 │ 7 │ client backend │ 9574416 │ 2905321 │ 345642 │ 0 │\n...\n│ 1873501 │ 3 │ client backend │ 3565 │ 3565 │ 467 │ 10 │\n...\n│ 1865277 │ 0 │ background writer │ 695402 │ 575906 │ 13513 │ 0 │\n│ 1865275 │ 3 │ checkpointer │ 4664110 │ 3488530 │ 1399896 │ 0 │\n│ 1865281 │ 3 │ walwriter │ 77203 │ 7759 │ 7747 │ 3 │\n└─────────┴──────────────┴──────────────────────────────┴──────────────────────┴────────────────────┴─────────────────────────┴────────────────┘\n\nIt's not super obvious at this point but executed_total_count /\nissued_total_count shows the rate at which IOs have been\nmerged. issued_total_count / submissions_total_count shows how many\n(already merged) IOs were submitted together in one \"submission batch\"\n(for io_method=io_uring, io_uring_enter()).\n\n\nSo pg_stat_aios should allow to enrich some system stats - e.g. by being\nable to split out WAL writes from data file writes. And - except that\nthe current callbacks for that aren't great - it should even allow to\nsplit the IO by different relfilenodes etc.\n\n\nI assume pg_stat_aio_backends also can be helpful, e.g. by seeing which\nbackends currently the deep IO queues that cause latency delays in other\nbackends, and which backends do a lot of sequential IO (high\nexecuted_total_count / issued_total_count) and which only random...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Feb 2021 13:05:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Tue, Feb 23, 2021 at 11:03 PM Andres Freund <andres@anarazel.de> wrote:\n> over the last ~year I spent a lot of time trying to figure out how we could\n> add AIO (asynchronous IO) and DIO (direct IO) support to postgres. While\n> there's still a *lot* of open questions, I think I now have a decent handle on\n> most of the bigger architectural questions. Thus this long email.\n\nHello,\n\nVery cool to see this project escaping onto -hackers!\n\nI have done some work on a couple of low level parts of it, and I\nwanted to show a quick \"hey, where'd my system calls go?\" demo, which\nmight help illustrate some very simple things about this stuff. Even\nthough io_uring is the new hotness in systems programming, I'm going\nto use io_mode=worker here. It's the default in the current patch\nset, it works on all our supported OSes and is easier to understand\nwithout knowledge of shiny new or obscure old AIO system interfaces.\nI'll also use io_workers=1, an artificially low setting to make it\neasy to spy on (pseudo) async I/O with strace/truss/dtruss on a single\nprocess, and max_parallel_workers_per_gather=0 to keep executor\nparallelism from confusing matters.\n\nThe first thing to notice is that there's an \"io worker\" process, and\nwhile filling up a big table with \"insert into t select\ngenerate_series(1, 100000000)\", it's doing a constant stream of 128KB\npwritev() calls. These are writing out 16 blocks from shared buffers\nat a time:\n\n pwritev(44, [{iov_base=..., iov_len=73728},\n {iov_base=..., iov_len=24576},\n {iov_base=..., iov_len=32768}], 3, 228032512) = 131072\n\nThe reason there are 3 vectors there rather than 16 is just that some\nof the buffers happened to be adjacent in memory and we might as well\nuse the smallest number of vectors. 
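That vector-building rule can be sketched in a few lines (a hypothetical helper, not the actual code in the branch): walk the buffers in block order, and extend the previous vector whenever the next buffer starts exactly where the previous one ends in memory.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper, not PostgreSQL code: collapse per-block buffer
 * addresses into the fewest (start, len) vectors by merging buffers that
 * happen to be adjacent in memory, as in the 3-vector pwritev() above.
 * starts/lens must each have room for nbufs entries. */
static int
coalesce_iovecs(const uintptr_t *bufs, int nbufs, size_t blksz,
                uintptr_t *starts, size_t *lens)
{
    int nvecs = 0;

    for (int i = 0; i < nbufs; i++)
    {
        if (nvecs > 0 && starts[nvecs - 1] + lens[nvecs - 1] == bufs[i])
            lens[nvecs - 1] += blksz;   /* contiguous: extend previous vector */
        else
        {
            starts[nvecs] = bufs[i];    /* gap: start a new vector */
            lens[nvecs] = blksz;
            nvecs++;
        }
    }
    return nvecs;
}
```

Feed it 16 block addresses that form three contiguous runs in memory and you get exactly three vectors, which is where the 3-entry pwritev() above comes from.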
Just after we've started up and\nthe buffer pool is empty, it's easy to find big single vector I/Os,\nbut things soon get more fragmented (blocks adjacent on disk become\nless likely to be adjacent in shared buffers) and that number goes up,\nbut that shouldn't make much difference to the OS or hardware assuming\ndecent scatter/gather support through the stack. If io_data_direct=on\n(not the default) and the blocks are in one physical extent on the\nfile system, that might even go all the way down to the disk as a\nsingle multi-segment write command for the storage hardware DMA engine\nto beam directly in/out of our buffer pool without CPU involvement.\n\nMixed into that stream of I/O worker system calls, you'll also see WAL\ngoing out to disk:\n\n pwritev(15, [{iov_base=..., iov_len=1048576}], 1, 4194304) = 1048576\n\nMeanwhile, the user session process running the big INSERT can be seen\nsignalling the I/O worker to wake it up. The same thing happens for\nbgwriter, checkpointer, autovacuum and walwriter: you can see them all\nhanding off most of their I/O work to the pool of I/O workers, with a\nbit of new signalling going on (which we try to minimise, and can\nprobably minimise much more). (You might be able to see some evidence\nof Andres's new buffer cleaning scheme too, which avoids some bad\npatterns of interleaving small reads and writes, but I'm skipping\nright over here...)\n\nWorking through a very simple example of how the I/O comes to be\nconsolidated and parallelised, let's look at a simple non-parallel\nSELECT COUNT(*) query on a large table. 
The I/O worker does a stream\nof scattered reads into our buffer pool:\n\n  preadv(51, [{iov_base=..., iov_len=24576},\n              {iov_base=..., iov_len=8192},\n              {iov_base=..., iov_len=16384},\n              {iov_base=..., iov_len=16384},\n              {iov_base=..., iov_len=16384},\n              {iov_base=..., iov_len=49152}], 6, 190808064) = 131072\n\nMeanwhile our user session backend can be seen waking it up whenever\nit's trying to start I/O and finds it snoozing:\n\n  kill(1803835, SIGURG) = 0\n  kill(1803835, SIGURG) = 0\n  kill(1803835, SIGURG) = 0\n  kill(1803835, SIGURG) = 0\n  kill(1803835, SIGURG) = 0\n\nNotice that there are no sleeping system calls in the query backend,\nmeaning the I/O in this example is always finished by the time the\nexecutor gets around to accessing the page it requested, so we're\nstaying far enough ahead and we can be 100% CPU bound. In unpatched\nPostgreSQL we'd hope to have no actual sleeping in such a simple case\nanyway, thanks to the OS's readahead heuristics; but (1) we'd still do\nindividual pread(8KB) calls, meaning that the user's query is at least\nhaving to pay the CPU cost of a return trip into the kernel and a\ncopyout of 8KB from kernel space to user space, here avoided, (2) in\nio_data_direct=on mode, there's no page cache and thus no kernel read\nahead, so we need to replace that mechanism with something anyway, (3)\nit's needed for non-sequential access like btree scans.\n\nSometimes I/Os are still run in user backends, for example because (1)\nexisting non-AIO code paths are still reached, (2) in worker mode,\nsome kinds of I/Os can't be handed off to another process due to lack\nof a way to open some fds or because we're in single process mode, (3)\nbecause a heuristic kicks in when we know there's only one I/O to run\nand we know we'll immediately wait for it and we can skip a lot of\ncommunication with a traditional synchronous syscall (worker mode only\nfor now, needs to be done for others).\n\nIn order to be able to generate a stream of big vectored 
reads/writes,\nand start them far enough ahead of time that they're finished before\nwe need the data, there are several layers of new infrastructure that\nAndres already mentioned and can explain far better than I, but super\nbriefly:\n\nheapam.c uses a \"pg_streaming_read\" object (aio_util.c) to get buffers\nto scan, instead of directly calling ReadBuffer(). It gives the\npg_streaming_read a callback of its own, so that heapam.c remains in\ncontrol of what is read, but the pg_streaming_read is in control of\nreadahead distance and also \"submission\". heapam.c's callback calls\nReadBufferAsync() to initiate reads of pages that it will need soon,\nwhich it does with pgaio_io_start_read_sb() if there's a cache miss.\nThis results in 8KB reads queued up in the process's pending I/O list,\nwith pgaio_read_sb_complete as the completion function to run when\neach read has eventually completed. When the pending list grows to a\ncertain length, it is submitted by pg_streaming_read code. That\ninvolves first \"combining\" pending I/Os: this is where reads/writes of\nadjacent ranges of files are merged into larger I/Os up to a limit.\nThen the I/Os are submitted to the OS, and we'll eventually learn\nabout their completion, via io_method-specific means (see\naio_worker.c, aio_uring.c, aio_posix.c and one day probably also\naio_win32.c). At that point, merged I/Os will be uncombined.\nSkipping over some complication about retrying on some kinds of\nfailure/partial I/O, that leads to ReadBufferCompleteWrite() being\ncalled for each buffer. 
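The division of labour just described (the callback decides what to read, the streaming-read side decides when and how far ahead) can be modelled with a toy loop; this is hypothetical code, nothing like the real aio_util.c API, and the queue simply stands in for I/Os in flight:

```c
#include <stddef.h>

/* Toy model of the streaming-read pattern, not the real pg_streaming_read
 * API: the callback (heapam.c's role) chooses which block to read next;
 * the loop (aio_util.c's role) keeps up to `distance` reads in flight
 * ahead of the consumer and hands blocks back in issue order.
 * `distance` must be <= MAX_INFLIGHT. */
#define MAX_INFLIGHT 32

typedef int (*next_block_cb) (void *ctx);   /* returns block number, -1 at end */

static int
stream_read(next_block_cb cb, void *ctx, int distance,
            int *consumed, int cap, int *max_inflight)
{
    int queue[MAX_INFLIGHT];
    int head = 0, tail = 0, nconsumed = 0, done = 0;

    *max_inflight = 0;
    while ((!done || head < tail) && nconsumed < cap)
    {
        /* "issue": stay `distance` blocks ahead of the consumer */
        while (!done && tail - head < distance)
        {
            int blk = cb(ctx);

            if (blk < 0)
                done = 1;
            else
                queue[tail++ % MAX_INFLIGHT] = blk;
        }
        if (tail - head > *max_inflight)
            *max_inflight = tail - head;
        /* "consume": process the oldest read, as the executor would */
        if (head < tail)
            consumed[nconsumed++] = queue[head++ % MAX_INFLIGHT];
    }
    return nconsumed;
}

/* sample callback: a plain sequential scan of 10 blocks */
static int
seq_cb(void *ctx)
{
    int *next = (int *) ctx;

    return *next < 10 ? (*next)++ : -1;
}
```

In the real thing "issue" queues an asynchronous read rather than fetching the block, but the shape is the same: the consumer sees blocks in order while the issue side keeps the pipeline full.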
(Far be it from me to try to explain the\nrather complex interlocking required to deal with pins and locks\nbetween ReadBufferAsync() and ReadBufferCompleteWrite() in\n(potentially) another process while the I/O is in progress, at this\nstage.)\n\nPlaces in the tree that want to do carefully controlled I/O depth\nmanagement can consider using pg_streaming_{read,write}, providing\ntheir own callback to do the real work (though it's not necessary, and\nnot all AIO uses suit the \"streaming\" model). There's also the\ntraditional PrefetchBuffer() mechanism, which can still be used to\ninitiate buffer reads as before. It's comparatively primitive; since\nyou don't know when the I/O completes, you have to use conservative\nmodels as I do in my proposed WAL prefetching patch. That patch (like\nprobably many others like CF #2799) works just fine on top of the AIO\nbranch, with some small tweaks: it happily shifts all I/O system calls\nout of the recovery process, so that instead of calling\nposix_fadvise() and then a bit later pread() for each cold page\naccessed, it makes one submission system call for every N cold pages\n(or, in some cases, no system calls at all). A future better\nintegration would probably use pg_streaming_read for precise control\nof the I/O depth instead of the little LSN queue it currently uses,\nbut I haven't tried to write that yet.\n\nIf you do simple large INSERTs and SELECTs with one of the native\nio_method settings instead of worker mode, it'd be much the same, in\nterms of most of the architecture. The information in the pg_stat_XXX\nviews is almost exactly the same. There are two major differences:\n(1) the other methods have no I/O worker processes, because the kernel\nmanages the I/O (or in some unfortunate cases runtime libraries fake\nit with threads), (2) the \"shared completion callbacks\" (see\naio_scb.c) are run by I/O workers in worker mode, but are run by\nwhichever process \"drains\" the I/O in the other modes. 
That is,\ninitiating processes never hear about I/Os completing from the\noperating system, they just eventually wait on them and find that\nthey're already completed (though they do run the \"local callback\" if\nthere is one, which is for example the point at which\npg_streaming_read might initiate more I/O), or alternatively see that\nthey aren't, and wait on a condition variable for an I/O worker to\nsignal completion. So far this seems like a good choice...\n\nHope that helped show off a couple of features of this scheme.\n\n\n",
"msg_date": "Wed, 24 Feb 2021 19:19:00 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nThank you for the amazing and great work.\n\nOn 23.02.2021 15:03, Andres Freund wrote:\n> ## Stats\n>\n> There are two new views: pg_stat_aios showing AIOs that are currently\n> in-progress, pg_stat_aio_backends showing per-backend statistics about AIO.\n\nAs a DBA I would like to propose a few amendments that might help with \npractical usage of stats when the feature is finally implemented. My \nsuggestions aren’t related to the central idea of the proposed changes, \nbut rather to the stats part.\n\nA quick side note: there are two terms in Prometheus \n(https://prometheus.io/docs/concepts/metric_types/):\n1. Counter. A counter is a cumulative metric that represents a single \nmonotonically increasing counter whose value can only increase or be \nreset to zero on restart.\n2. Gauge. A gauge is a metric that represents a single numerical value \nthat can arbitrarily go up and down.\n\nFor the purposes of long-term stats collection, COUNTERs are preferred \nover GAUGEs, because COUNTERs allow us to understand how metrics change \nover time without missing out potential spikes in activity. As a \nresult, we have a much better historic perspective.\n\nMeasuring and collecting GAUGEs is limited to the moments in time when \nthe stats are taken (snapshots) so the changes that took place between \nthe snapshots remain unmeasured. In systems with a high rate of \ntransactions per second (even 1 second interval between the snapshots) \nGAUGE measurements won’t provide the full picture. In addition, most of \nthe monitoring systems like Prometheus, Zabbix, etc. 
use longer \nintervals (from 10-15 to 60 seconds).\n\nThe main idea is to try to expose almost all numeric stats as COUNTERs - \nthis increases overall observability of the implemented feature.\n\npg_stat_aios.\nIn general, this stat is a set of text values, and at the same time it \nlooks GAUGE-like (similar to pg_stat_activity or pg_locks), and is only \nrelevant for the moment when the user is looking at it. I think it would \nbe better to rename this view to pg_stat_progress_aios. And keep \npg_stat_aios for other AIO stats with global COUNTERs (like stuff in \npg_stat_user_tables or pg_stat_statements, or system-wide /proc/stat, \n/proc/diskstats).\n\npg_stat_aio_backends.\nThis stat is based on COUNTERs, which is great, but the issue here is \nthat its lifespan is limited by the lifespan of the backend processes - \nonce the backend exits the stat will no longer be available - which \ncould be inappropriate in workloads with short-lived backends.\n\nI think there might be a few existing examples in the current code that \ncould be repurposed to implement the suggestions above (such as \npg_stat_user_tables, pg_stat_database, etc). With this in mind, I think \nhaving these changes incorporated shouldn’t take significant effort \nconsidering the benefit it will bring to the final user.\n\nOnce again huge respect for your work on these changes, and good luck.\n\nRegards, Alexey\n\n\n\n",
"msg_date": "Wed, 24 Feb 2021 21:15:14 +0500",
"msg_from": "Alexey Lesovsky <alexey.lesovsky@dataegret.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "I guess what I would be looking for in stats would be a way to tell\nwhat the bandwidth, latency, and queue depth is. Ideally one day\nbroken down by relation/index and pg_stat_statement record.\n\nI think seeing the actual in flight async requests in a connection is\nprobably not going to be very useful in production. It's very low\nlevel, and in production the user is just going to find that level of\ndetail overwhelming. It is kind of cool to see the progress in\nsequential operations but I think that should be solved in a higher\nlevel way than this anyways.\n\nWhat we need to calculate these values would be the kinds of per-op\nstats nfsiostat uses from /proc/self/mountstats:\nhttps://utcc.utoronto.ca/~cks/space/blog/linux/NFSMountstatsNFSOps\n\nSo number of async reads we've initiated, how many callbacks have been\ncalled, total cumulative elapsed time between i/o issued and i/o\ncompleted, total bytes of i/o initiated, total bytes of i/o completed.\nAs well as a counter of requests which returned errors (eof? i/o error?)\nIf there are other locks or queues internal to postgres, total time\nspent in those states.\n\nI have some vague idea that we should have a generic infrastructure\nfor stats that automatically counts things associated with plan nodes\nand automatically bubbles that data up to the per-transaction,\nper-backend, per-relation, and pg_stat_statements stats. But that's a\nwhole other ball of wax :)\n\n\n",
"msg_date": "Wed, 24 Feb 2021 14:59:19 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "> On Tue, Feb 23, 2021 at 02:03:44AM -0800, Andres Freund wrote:\n>\n> over the last ~year I spent a lot of time trying to figure out how we could\n> add AIO (asynchronous IO) and DIO (direct IO) support to postgres. While\n> there's still a *lot* of open questions, I think I now have a decent handle on\n> most of the bigger architectural questions. Thus this long email.\n>\n> Just to be clear: I don't expect the current design to survive as-is. If\n> there's a few sentences below that sound a bit like describing the new world,\n> that's because they're from the README.md in the patch series...\n\nThanks!\n\n> Comments? Questions?\n>\n> I plan to send separate emails about smaller chunks of this separately -\n> the whole topic is just too big. In particular I plan to send something\n> around buffer locking / state management - it's one of the core issues\n> around this imo.\n\nI'm curious about control knobs for this feature; it's somewhat related\nto the stats questions also discussed in this thread. I guess the most\nimportant of those are max_aio_in_flight, io_max_concurrency etc, and\nthey're going to be hard limits, right? I'm curious if it makes sense\nto explore the possibility of having this sort of \"backpressure\", e.g. if\nthe number of inflight requests is too large, calculate inflight_limit a bit\nlower than possible (to avoid hard performance deterioration when the db\nis trying to do too much IO, and rather do it smoothly). From what I\nremember io_uring does have something similar, only for SQPOLL. Another\nsimilar question is whether this could be used for throttling of some\noverloaded workers in case of misconfigured clients or such?\n\n\n",
"msg_date": "Wed, 24 Feb 2021 21:41:16 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn 2021-02-24 21:15:14 +0500, Alexey Lesovsky wrote:\n> pg_stat_aios.\n> In general, this stat is a set of text values, and at the same time it looks\n> GAUGE-like (similar to pg_stat_activity or pg_locks), and is only relevant\n> for the moment when the user is looking at it. I think it would be better to\n> rename this view to pg_stat_progress_aios. And keep pg_stat_aios for other\n> AIO stats with global COUNTERs (like stuff in pg_stat_user_tables or\n> pg_stat_statements, or system-wide /proc/stat, /proc/diskstats).\n\nRight - arguably it really shouldn't even have _stat_ in the name... I\ndon't particularly like the idea of adding _progress_ as that seems it'd\nlead to confusing it with pg_stat_progress_vacuum etc - and it's quite a\ndifferent beast.\n\n\n> pg_stat_aio_backends.\n> This stat is based on COUNTERs, which is great, but the issue here is that\n> its lifespan is limited by the lifespan of the backend processes - once the\n> backend exits the stat will no longer be available - which could be\n> inappropriate in workloads with short-lived backends.\n\nThere's a todo somewhere to roll over the per-connection stats into a\nglobal stats piece on disconnect. In addition I was thinking of adding a\nview that sums up the value of \"already disconnected backends\" and the\ncurrently connected ones. Would that mostly address your concerns?\n\n\n> I think there might be few existing examples in the current code that could\n> be repurposed to implement the suggestions above (such as\n> pg_stat_user_tables, pg_stat_database, etc). With this in mind, I think\n> having these changes incorporated shouldn’t take significant effort\n> considering the benefit it will bring to the final user.\n\nYea - I kind of was planning to go somewhere roughly in the direction\nyou suggest, but took a few shortcuts due to the size of the\nproject. 
Having the views made it a lot easier to debug / develop, but\nsupporting longer lived stuff wasn't yet crucial. But I agree, we really\nshould have it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Feb 2021 13:03:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn 2021-02-24 14:59:19 -0500, Greg Stark wrote:\n> I guess what I would be looking for in stats would be a way to tell\n> what the bandwidth, latency, and queue depth is. Ideally one day\n> broken down by relation/index and pg_stat_statement record.\n\nI think doing it at that granularity will likely be too expensive...\n\n\n> I think seeing the actual in flight async requests in a connection is\n> probably not going to be very useful in production.\n\nI think it's good for analyzing concrete performance issues, but\nprobably not that much more. Although, it's not too hard to build\nsampling based on top of it with a tiny bit of work (should display the\nrelfilenode etc).\n\n\n> So number of async reads we've initiated, how many callbacks have been\n> called, total cumulative elapsed time between i/o issued and i/o\n> completed, total bytes of i/o initiated, total bytes of i/o completed.\n\nMuch of that is already in pg_stat_aio_backends - but is lost after\ndisconnect (easy to solve). We don't track bytes of IO currently, but\nthat'd not be hard.\n\nHowever, it's surprisingly hard to do the measurement between \"issued\"\nand \"completed\" in a meaningful way. It's obviously not hard to measure\nthe time at which the request was issued, but there's no real way to\ndetermine the time at which it was completed. If a backend is busy doing\nother things (e.g. invoke aggregate transition functions), we'll not see\nthe completion immediately, and therefore not have an accurate\ntimestamp.\n\nWith several methods of doing AIO we can set up signals that fire on\ncompletion, but that's pretty darn expensive. And it's painful to write\nsuch signal handlers in a safe way.\n\n\n> I have some vague idea that we should have a generic infrastructure\n> for stats that automatically counts things associated with plan nodes\n> and automatically bubbles that data up to the per-transaction,\n> per-backend, per-relation, and pg_stat_statements stats. 
But that's a\n> whole other ball of wax :)\n\nHeh, yea, let's tackle that separately ;)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Feb 2021 13:23:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
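The issued-vs-completed timing caveat discussed above can be made concrete with a small sketch. This is not code from the AIO branch; the struct and function names are invented, and it only illustrates why a latency figure taken at reap time is an upper bound on device latency when the backend was busy elsewhere:

```c
#include <assert.h>

/* Hypothetical per-backend AIO counters, loosely modeled on the
 * pg_stat_aio_backends view mentioned in the mail; names are invented. */
typedef struct AioBackendStats
{
    long   ios_issued;
    long   ios_completed;
    long   bytes_issued;
    long   bytes_completed;
    /* Sum of (reap time - issue time).  This can only overestimate device
     * latency: if the backend is busy (e.g. in aggregate transition
     * functions), the completion is noticed late, at the next reap. */
    double observed_latency_sum;
} AioBackendStats;

void
aio_stats_issue(AioBackendStats *s, long nbytes)
{
    s->ios_issued++;
    s->bytes_issued += nbytes;
}

void
aio_stats_reap(AioBackendStats *s, long nbytes,
               double issue_time, double reap_time)
{
    s->ios_completed++;
    s->bytes_completed += nbytes;
    s->observed_latency_sum += reap_time - issue_time;
}
```

Counting issued/completed IOs and bytes is cheap; only the latency column is inherently fuzzy without completion signals.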
{
"msg_contents": "Hi,\n\nOn 2021-02-24 21:41:16 +0100, Dmitry Dolgov wrote:\n> I'm curious about control knobs for this feature, it's somewhat related\n> to the stats questions also discussed in this thread. I guess most\n> important of those are max_aio_in_flight, io_max_concurrency etc, and\n> they're going to be a hard limits, right?\n\nYea - there's a lot more work needed in that area.\n\nio_max_concurrency especially really should be a GUC, I was just too\nlazy for that so far.\n\n\n> I'm curious if it makes sense\n> to explore possibility to have these sort of \"backpressure\", e.g. if\n> number of inflight requests is too large calculate inflight_limit a bit\n> lower than possible (to avoid hard performance deterioration when the db\n> is trying to do too much IO, and rather do it smooth).\n\nIt's decidedly nontrivial to compute \"too large\" - and pretty workload\ndependent (e.g. lower QDs are better for latency-sensitive OLTP, higher QD\nis better for bulk r/w heavy analytics). So I don't really want to go\nthere for now - the project is already very large.\n\nWhat I do think is needed and feasible (there's a bunch of TODOs in the\ncode about it already) is to be better at only utilizing deeper queues\nwhen lower queues don't suffice. So we e.g. don't read ahead more than a\nfew blocks for a scan where the query is spending most of the time\n\"elsewhere\".\n\nThere's definitely also some need for a bit better global, instead of\nper-backend, control over the number of IOs in flight. That's not too\nhard to implement - the hardest probably is to avoid it becoming a\nscalability issue.\n\nI think the area with the most need for improvement is figuring out how\nwe determine the queue depths for different things using IO. Don't\nreally want to end up with 30 parameters influencing what queue depth to\nuse for (vacuum, index builds, sequential scans, index scans, bitmap\nheap scans, ...) 
- but how much they benefit from a deeper queue will differ\nbetween places.\n\n\n> From what I remember io_uring does have something similar only for\n> SQPOLL. Another similar question if this could be used for throttling\n> of some overloaded workers in case of misconfigured clients or such?\n\nYou mean dynamically? Or just by setting the concurrency lower for\ncertain users? I think doing so dynamically is way too complicated for\nnow. But I'd expect configuring it on a per-user basis or such to be a\nreasonable thing. That might require splitting it into two GUCs - one\nSUSET one and a second one that's settable by any user, but can only\nlower the depth.\n\nI think it'll be pretty useful to e.g. configure autovacuum to have a\nlow queue depth instead of using the current cost limiting. That way the\nimpact on the overall system is limited, but it's not slowed down\nunnecessarily as much.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Feb 2021 13:45:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
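The "only utilize deeper queues when lower queues don't suffice" TODO described above can be sketched as a toy ramp-up heuristic. The heuristic here (double the in-flight distance whenever the consumer stalls on an unfinished IO, up to a cap) is invented for illustration and is not code from the AIO branch:

```c
#include <assert.h>

/* Toy adaptive readahead state; names are invented. */
typedef struct ReadaheadState
{
    int distance;       /* blocks currently kept in flight */
    int max_distance;   /* hard cap, e.g. from an io_max_concurrency GUC */
} ReadaheadState;

void
readahead_init(ReadaheadState *ra, int max_distance)
{
    ra->distance = 1;           /* start shallow */
    ra->max_distance = max_distance;
}

/* Called each time the consumer takes a block; 'stalled' means the block's
 * IO had not finished yet, i.e. the current depth did not suffice. */
void
readahead_on_consume(ReadaheadState *ra, int stalled)
{
    /* Deepen only on an actual IO wait: a query spending most of its time
     * elsewhere never ramps up and never hogs the queue. */
    if (stalled)
    {
        ra->distance *= 2;
        if (ra->distance > ra->max_distance)
            ra->distance = ra->max_distance;
    }
}
```

A real implementation would also decay the distance and coordinate a global in-flight budget, per the mail's point about global control.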
{
"msg_contents": "> On Wed, Feb 24, 2021 at 01:45:10PM -0800, Andres Freund wrote:\n>\n> > I'm curious if it makes sense\n> > to explore possibility to have these sort of \"backpressure\", e.g. if\n> > number of inflight requests is too large calculate inflight_limit a bit\n> > lower than possible (to avoid hard performance deterioration when the db\n> > is trying to do too much IO, and rather do it smooth).\n>\n> What I do think is needed and feasible (there's a bunch of TODOs in the\n> code about it already) is to be better at only utilizing deeper queues\n> when lower queues don't suffice. So we e.g. don't read ahead more than a\n> few blocks for a scan where the query is spending most of the time\n> \"elsewhere.\n>\n> There's definitely also some need for a bit better global, instead of\n> per-backend, control over the number of IOs in flight. That's not too\n> hard to implement - the hardest probably is to avoid it becoming a\n> scalability issue.\n>\n> I think the area with the most need for improvement is figuring out how\n> we determine the queue depths for different things using IO. Don't\n> really want to end up with 30 parameters influencing what queue depth to\n> use for (vacuum, index builds, sequential scans, index scans, bitmap\n> heap scans, ...) - but they benefit from a deeper queue will differ\n> between places.\n\nYeah, sounds like an interesting opportunity for improvements. I'm\npreparing few benchmarks to understand better how this all works, so\nwill keep this in mind.\n\n> > From what I remember io_uring does have something similar only for\n> > SQPOLL. Another similar question if this could be used for throttling\n> > of some overloaded workers in case of misconfigured clients or such?\n>\n> You mean dynamically? Or just by setting the concurrency lower for\n> certain users? I think doing so dynamically is way too complicated for\n> now. But I'd expect configuring it on a per-user basis or such to be a\n> reasonable thing. 
That might require splitting it into two GUCs - one\n> SUSET one and a second one that's settable by any user, but can only\n> lower the depth.\n>\n> I think it'll be pretty useful to e.g. configure autovacuum to have a\n> low queue depth instead of using the current cost limiting. That way the\n> impact on the overall system is limitted, but it's not slowed down\n> unnecessarily as much.\n\nYes, you got it right, not dynamically, but rather expose this to be\nconfigured on e.g. per-user basis.\n\n\n",
"msg_date": "Thu, 25 Feb 2021 09:22:43 +0100",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn 25.02.2021 02:03, Andres Freund wrote:\n>> pg_stat_aio_backends.\n>> This stat is based on COUNTERs, which is great, but the issue here is that\n>> its lifespan is limited by the lifespan of the backend processes - once the\n>> backend exits the stat will no longer be available - which could be\n>> inappropriate in workloads with short-lived backends.\n> There's a todo somewhere to roll over the per-connection stats into a\n> global stats piece on disconnect. In addition I was thinking of adding a\n> view that sums up the value of \"already disconnected backends\" and the\n> currently connected ones. Would that mostly address your concerns?\n\nYes, the approach with separate stats for live and disconnected backends\nlooks good and solves the problem of \"stats loss\".\n\nOr it could be done like the stats for shared objects in pg_stat_database,\nwhere a special NULL database is used.\n\nRegards, Alexey\n\n\n",
"msg_date": "Thu, 25 Feb 2021 22:07:22 +0500",
"msg_from": "Alexey Lesovsky <alexey.lesovsky@dataegret.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
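The roll-over idea agreed on above can be sketched in a few lines. Names and shapes are invented for illustration; the point is just that folding a backend's counters into an "already disconnected" bucket at exit makes a summed view lossless across disconnects:

```c
#include <assert.h>

/* Minimal per-backend counters; invented names. */
typedef struct AioCounters
{
    long ios;
    long bytes;
} AioCounters;

/* Accumulator for "already disconnected backends". */
static AioCounters aio_disconnected_total;

/* On backend exit, roll the backend's counters into the global bucket. */
void
aio_backend_exit(AioCounters *mine)
{
    aio_disconnected_total.ios   += mine->ios;
    aio_disconnected_total.bytes += mine->bytes;
    mine->ios = mine->bytes = 0;
}

/* What a cluster-wide view would show: live backends plus the rolled-over
 * totals of everyone who already disconnected. */
AioCounters
aio_cluster_view(const AioCounters *live, int nlive)
{
    AioCounters v = aio_disconnected_total;

    for (int i = 0; i < nlive; i++)
    {
        v.ios   += live[i].ios;
        v.bytes += live[i].bytes;
    }
    return v;
}
```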
{
"msg_contents": "Sorry for another late reply, finally found some time to formulate a couple of\nthoughts.\n\n> On Thu, Feb 25, 2021 at 09:22:43AM +0100, Dmitry Dolgov wrote:\n> > On Wed, Feb 24, 2021 at 01:45:10PM -0800, Andres Freund wrote:\n> >\n> > > I'm curious if it makes sense\n> > > to explore possibility to have these sort of \"backpressure\", e.g. if\n> > > number of inflight requests is too large calculate inflight_limit a bit\n> > > lower than possible (to avoid hard performance deterioration when the db\n> > > is trying to do too much IO, and rather do it smooth).\n> >\n> > What I do think is needed and feasible (there's a bunch of TODOs in the\n> > code about it already) is to be better at only utilizing deeper queues\n> > when lower queues don't suffice. So we e.g. don't read ahead more than a\n> > few blocks for a scan where the query is spending most of the time\n> > \"elsewhere.\n> >\n> > There's definitely also some need for a bit better global, instead of\n> > per-backend, control over the number of IOs in flight. That's not too\n> > hard to implement - the hardest probably is to avoid it becoming a\n> > scalability issue.\n> >\n> > I think the area with the most need for improvement is figuring out how\n> > we determine the queue depths for different things using IO. Don't\n> > really want to end up with 30 parameters influencing what queue depth to\n> > use for (vacuum, index builds, sequential scans, index scans, bitmap\n> > heap scans, ...) - but they benefit from a deeper queue will differ\n> > between places.\n\nTalking about parameters, from what I understand the actual number of queues\n(e.g. io_uring) created is specified by PGAIO_NUM_CONTEXTS; shouldn't it be\nconfigurable? Maybe in fact not that many knobs are needed after all - if\nthe model assumes the storage has:\n\n* Some number of hardware queues, then the number of queues the AIO implementation\n needs to use depends on it. 
For example, lowering the number of contexts between\n different benchmark runs I could see that some of the hardware queues were\n significantly underutilized. Potentially there could also be such a\n thing as too many contexts.\n\n* Certain bandwidth, then the submit batch size (io_max_concurrency or\n PGAIO_SUBMIT_BATCH_SIZE) depends on it. This would allow distinguishing\n attached storage with high bandwidth and high latency vs local storage.\n\n From what I see max_aio_in_flight is used as a queue depth for contexts, which\nis workload dependent and not easy to figure out as you mentioned. To avoid\nhaving 30 different parameters maybe it's more feasible to introduce \"shallow\"\nand \"deep\" queues, where the particular depth for those could be derived from the depth\nof hardware queues. The question of which activity should use which queue is not\neasy, but if I get it right from queuing theory (assuming IO producers are\nstationary processes and fixed IO latency from the storage) it depends on the IO\narrivals distribution in every particular case, and this in turn could be\nroughly estimated for each type of activity. One can expect different IO\narrivals distributions for e.g. a normal point-query backend and a checkpoint\nor vacuum process, no matter what the other conditions are (collecting those\nfor a few benchmark runs indeed gives pretty distinct distributions).\n\nIf I understand correctly, those contexts defined by PGAIO_NUM_CONTEXTS are the\nmain workhorse, right? I'm asking because there is also something called\nlocal_ring, but it seems there are no IOs submitted into those. Assuming that\ncontexts are the main way of submitting IO, it would also be interesting to\nexplore contexts isolated for different purposes. I haven't yet finished my\nchanges here to give any results, but at least doing some tests with fio shows\ndifferent latencies when two io_urings are processing mixed read/writes vs\nisolated reads or writes. 
On a side note, at the end of the day there are so\nmany queues - application queue, io_uring, mq software queue, hardware queue -\nthat I'm really curious whether it would amplify tail latencies.\n\nAnother thing I've noticed is that the AIO implementation is much more significantly\naffected by side IO activity than the synchronous one. E.g. the AIO version's tps drops\nfrom tens of thousands to a couple of hundred just because some kworker\nstarted to flush dirty buffers (especially with writeback throttling disabled),\nwhile the synchronous version doesn't suffer that much. Not sure what to make of\nit. Btw, overall I've managed to get better numbers from the AIO implementation on\nIO-bound test cases with a local NVMe device, but non-IO-bound ones were mostly a\nbit slower - is that expected, or am I missing something?\n\nAn interesting thing to note is that the io_uring implementation apparently relaxed\nthe requirements for polling operations; now one needs only the CAP_SYS_NICE\ncapability, not CAP_SYS_ADMIN. I guess theoretically there are no issues using\nit within the current design?\n\n\n",
"msg_date": "Fri, 2 Apr 2021 18:06:36 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
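The queuing-theory angle raised above has a convenient back-of-the-envelope form: by Little's law, the mean number of requests in flight is L = lambda * W (arrival rate times mean latency). Under the stationary-arrivals / fixed-latency assumptions stated in the mail, this gives a first-cut depth per activity type; the function and the example numbers below are illustrative only:

```c
#include <assert.h>

/* First-cut queue depth estimate via Little's law: L = lambda * W.
 * Rounds to the nearest integer, with a floor of 1. */
int
estimated_queue_depth(double ios_per_sec, double io_latency_sec)
{
    double depth = ios_per_sec * io_latency_sec;

    return depth < 1.0 ? 1 : (int) (depth + 0.5);
}
```

This is only a mean; bursty arrivals (the distinct distributions the mail mentions) would argue for sizing above the Little's-law figure.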
{
"msg_contents": "On Tue, Feb 23, 2021 at 5:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> ## AIO API overview\n>\n> The main steps to use AIO (without higher level helpers) are:\n>\n> 1) acquire an \"unused\" AIO: pgaio_io_get()\n>\n> 2) start some IO, this is done by functions like\n> pgaio_io_start_(read|write|fsync|flush_range)_(smgr|sb|raw|wal)\n>\n> The (read|write|fsync|flush_range) indicates the operation, whereas\n> (smgr|sb|raw|wal) determines how IO completions, errors, ... are handled.\n>\n> (see below for more details about this design choice - it might or not be\n> right)\n>\n> 3) optionally: assign a backend-local completion callback to the IO\n> (pgaio_io_on_completion_local())\n>\n> 4) 2) alone does *not* cause the IO to be submitted to the kernel, but to be\n> put on a per-backend list of pending IOs. The pending IOs can be explicitly\n> be flushed pgaio_submit_pending(), but will also be submitted if the\n> pending list gets to be too large, or if the current backend waits for the\n> IO.\n>\n> The are two main reasons not to submit the IO immediately:\n> - If adjacent, we can merge several IOs into one \"kernel level\" IO during\n> submission. Larger IOs are considerably more efficient.\n> - Several AIO APIs allow to submit a batch of IOs in one system call.\n>\n> 5) wait for the IO: pgaio_io_wait() waits for an IO \"owned\" by the current\n> backend. When other backends may need to wait for an IO to finish,\n> pgaio_io_ref() can put a reference to that AIO in shared memory (e.g. a\n> BufferDesc), which can be waited for using pgaio_io_wait_ref().\n>\n> 6) Process the results of the request. If a callback was registered in 3),\n> this isn't always necessary. The results of AIO can be accessed using\n> pgaio_io_result() which returns an integer where negative numbers are\n> -errno, and positive numbers are the [partial] success conditions\n> (e.g. 
potentially indicating a short read).\n>\n> 7) release ownership of the io (pgaio_io_release()) or reuse the IO for\n> another operation (pgaio_io_recycle())\n>\n>\n> Most places that want to use AIO shouldn't themselves need to care about\n> managing the number of writes in flight, or the readahead distance. To help\n> with that there are two helper utilities, a \"streaming read\" and a \"streaming\n> write\".\n>\n> The \"streaming read\" helper uses a callback to determine which blocks to\n> prefetch - that allows to do readahead in a sequential fashion but importantly\n> also allows to asynchronously \"read ahead\" non-sequential blocks.\n>\n> E.g. for vacuum, lazy_scan_heap() has a callback that uses the visibility map\n> to figure out which block needs to be read next. Similarly lazy_vacuum_heap()\n> uses the tids in LVDeadTuples to figure out which blocks are going to be\n> needed. Here's the latter as an example:\n> https://github.com/anarazel/postgres/commit/a244baa36bfb252d451a017a273a6da1c09f15a3#diff-3198152613d9a28963266427b380e3d4fbbfabe96a221039c6b1f37bc575b965R1906\n>\n\nAttached is a patch on top of the AIO branch which does bitmapheapscan\nprefetching using the PgStreamingRead helper already used by sequential\nscan and vacuum on the AIO branch.\n\nThe prefetch iterator is removed and the main iterator in the\nBitmapHeapScanState node is now used by the PgStreamingRead helper.\n\nSome notes about the code:\n\nEach IO will have its own TBMIterateResult allocated and returned by the\nPgStreamingRead helper and freed later by\nheapam_scan_bitmap_next_block() before requesting the next block.\nPreviously it was allocated once and saved in the TBMIterator in the\nBitmapHeapScanState node and reused. Because of this, the table AM API\nroutine, table_scan_bitmap_next_block() now defines the TBMIterateResult\nas an output parameter.\n\nThe PgStreamingRead helper pgsr_private parameter for BitmapHeapScan is\nnow the actual BitmapHeapScanState node. 
It needed access to the\niterator, the heap scan descriptor, and a few fields in the\nBitmapHeapScanState node that could be moved elsewhere or duplicated\n(visibility map buffer and can_skip_fetch, for example). So, it is\npossible to either create a new struct or move fields around to avoid\nthis--but, I'm not sure if that would actually be better.\n\nBecause the PgStreamingReadHelper needs to be set up with the\nBitmapHeapScanState node but also needs some table AM specific\nfunctions, I thought it made more sense to initialize it using a new\ntable AM API routine. Instead of fully implementing that I just wrote a\nwrapper function, table_bitmap_scan_setup() which just calls\nbitmapheap_pgsr_alloc() to socialize the idea before implementing it.\n\nI haven't made the GIN code reasonable yet either (it uses the TID\nbitmap functions that I've changed).\n\nThere are various TODOs in the code posing questions both to the\nreviewer and myself for future versions of the patch.\n\nOh, also, I haven't updated the failing partition_prune regression test\nbecause I haven't had a chance to look at the EXPLAIN code which adds\nthe text which is not being produced to see if it is actually a bug in\nmy code or not.\n\nOh, and I haven't done testing to see how effective the prefetching is\n-- that is a larger project that I have yet to tackle.\n\n- Melanie",
"msg_date": "Wed, 28 Jul 2021 13:37:48 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
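The seven-step API overview quoted at the top of this message can be sketched as compilable stubs. The function names follow the mail, but the bodies and signatures here are invented stand-ins (no real IO happens, and the real pgaio functions take different arguments); the sketch only shows the control flow: acquire, queue, submit lazily, wait, read the result, release:

```c
#include <assert.h>

/* Stand-in for the in-flight IO handle; fields are invented. */
typedef struct PgAioInProgress
{
    int pending;        /* still on the per-backend pending list? */
    int result;         /* <0: -errno; >=0: bytes done (maybe short) */
} PgAioInProgress;

static PgAioInProgress io_pool[16];
static int io_next;

PgAioInProgress *
pgaio_io_get(void)                      /* 1) acquire an unused AIO */
{
    return &io_pool[io_next++ % 16];
}

void
pgaio_io_start_read_smgr(PgAioInProgress *io)   /* 2) queue, don't submit */
{
    io->pending = 1;
    io->result = 0;
}

void
pgaio_submit_pending(PgAioInProgress *io)       /* 4) flush pending IOs */
{
    io->pending = 0;
    io->result = 8192;                  /* pretend a full 8kB read */
}

void
pgaio_io_wait(PgAioInProgress *io)      /* 5) waiting submits if needed */
{
    if (io->pending)
        pgaio_submit_pending(io);
}

int
pgaio_io_result(PgAioInProgress *io)    /* 6) fetch the result */
{
    return io->result;
}

void
pgaio_io_release(PgAioInProgress *io)   /* 7) give ownership back */
{
    (void) io;
}
```

The deferred submission in step 4 is the interesting design point: it is what lets adjacent requests be merged and batched into one system call.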
{
"msg_contents": "Hi,\n\nOn 2021-07-28 13:37:48 -0400, Melanie Plageman wrote:\n> Attached is a patch on top of the AIO branch which does bitmapheapscan\n> prefetching using the PgStreamingRead helper already used by sequential\n> scan and vacuum on the AIO branch.\n>\n> The prefetch iterator is removed and the main iterator in the\n> BitmapHeapScanState node is now used by the PgStreamingRead helper.\n\nCool! I'm heartened to see \"12 files changed, 272 insertions(+), 495 deletions(-)\"\n\n\nIt's worth calling out that this fixes some abstraction leakyness around\ntableam too...\n\n\n> Each IO will have its own TBMIterateResult allocated and returned by the\n> PgStreamingRead helper and freed later by\n> heapam_scan_bitmap_next_block() before requesting the next block.\n> Previously it was allocated once and saved in the TBMIterator in the\n> BitmapHeapScanState node and reused. Because of this, the table AM API\n> routine, table_scan_bitmap_next_block() now defines the TBMIterateResult\n> as an output parameter.\n>\n> I haven't made the GIN code reasonable yet either (it uses the TID\n> bitmap functions that I've changed).\n\nI don't quite understand the need to change the tidbitmap interface, or\nmaybe rather I'm not convinced that pessimistically preallocating space\nis a good idea?\n\n\n> I don't see a need for it right now. If you wanted you\n> Because the PgStreamingReadHelper needs to be set up with the\n> BitmapHeapScanState node but also needs some table AM specific\n> functions, I thought it made more sense to initialize it using a new\n> table AM API routine. 
Instead of fully implementing that I just wrote a\n> wrapper function, table_bitmap_scan_setup() which just calls\n> bitmapheap_pgsr_alloc() to socialize the idea before implementing it.\n\nThat makes sense.\n\n\n> static bool\n> heapam_scan_bitmap_next_block(TableScanDesc scan,\n> -\t\t\t\t\t\t\t TBMIterateResult *tbmres)\n> + TBMIterateResult **tbmres)\n\nISTM that we possibly shouldn't expose the TBMIterateResult outside of\nthe AM after this change? It feels somewhat like an implementation\ndetail now. It seems somewhat odd to expose a ** to set a pointer that\nnodeBitmapHeapscan.c then doesn't really deal with itself.\n\n\n> @@ -695,8 +693,7 @@ tbm_begin_iterate(TIDBitmap *tbm)\n> \t * Create the TBMIterator struct, with enough trailing space to serve the\n> \t * needs of the TBMIterateResult sub-struct.\n> \t */\n> -\titerator = (TBMIterator *) palloc(sizeof(TBMIterator) +\n> -\t\t\t\t\t\t\t\t\t MAX_TUPLES_PER_PAGE * sizeof(OffsetNumber));\n> +\titerator = (TBMIterator *) palloc(sizeof(TBMIterator));\n> \titerator->tbm = tbm;\n\nHm?\n\n\n> diff --git a/src/include/storage/aio.h b/src/include/storage/aio.h\n> index 9a07f06b9f..8e1aa48827 100644\n> --- a/src/include/storage/aio.h\n> +++ b/src/include/storage/aio.h\n> @@ -39,7 +39,7 @@ typedef enum IoMethod\n> } IoMethod;\n>\n> /* We'll default to bgworker. */\n> -#define DEFAULT_IO_METHOD IOMETHOD_WORKER\n> +#define DEFAULT_IO_METHOD IOMETHOD_IO_URING\n\nI agree with the sentiment, but ... :)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Jul 2021 11:10:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-07-28 13:37:48 -0400, Melanie Plageman wrote:\n>\n> > Each IO will have its own TBMIterateResult allocated and returned by the\n> > PgStreamingRead helper and freed later by\n> > heapam_scan_bitmap_next_block() before requesting the next block.\n> > Previously it was allocated once and saved in the TBMIterator in the\n> > BitmapHeapScanState node and reused. Because of this, the table AM API\n> > routine, table_scan_bitmap_next_block() now defines the TBMIterateResult\n> > as an output parameter.\n> >\n> > I haven't made the GIN code reasonable yet either (it uses the TID\n> > bitmap functions that I've changed).\n>\n> I don't quite understand the need to change the tidbitmap interface, or\n> maybe rather I'm not convinced that pessimistically preallocating space\n> is a good idea?\n>\n\nTBMIterator cannot contain a TBMIterateResult because it prefetches\nblocks and calls tbm_iterate() for each one, which would overwrite the\nrelevant information in the TBMIterateResult before it has been returned\nto heapam_scan_bitmap_next_block().*\n\nThus, we need at least as many TBMIterateResults as the size of the\nprefetch window at its largest.\n\nWe could save some memory if we separated the data in TBMIterateResult\nand made a new struct, let's call it BitmapBlockState, with just the\nblock number, buffer number, and recheck to be used and returned by\nbitmapheapscan_pgsr_next_single().\n\nWe need both block and buffer because we need to distinguish between\nhit_end, skip_fetch, and invalid block number conditions in the caller.\nWe need recheck before initiating IO to determine if we should\nskip_fetch.\n\nThen a separate struct which is much the same as the existing\nTBMIterateResult could be maintained in the BitmapHeapScanState node and\npassed into heapam_scan_bitmap_next_block() along with the bitmap (a new\nparameter).\n\nIn heapam_scan_bitmap_next_block(), after 
getting the BitmapBlockState\nfrom pg_streaming_read_get_next(), we could call tbm_find_pageentry()\nwith the block number and bitmap.\nFor a non-lossy page, we could then scrape the offsets and ntuples using\nthe PageTableEntry. If it is lossy, we would set recheck and ntuples\naccordingly. (I do wonder if that allows us to distinguish between a\nlossy page and a block number that is erroneous and isn't in the\nbitmap--but maybe that can't happen.)\n\nHowever, we would still have as many palloc() calls (one for every block)\nto create the BitmapBlockState. We would have less outstanding memory by\nlimiting the number of offsets arrays created.\nWe would still need to pass the recheck flag, ntuples, and buffer back\nup to BitmapHeapNext(), so, at that point we would still need a data\nstructure that is basically the same as the existing TBMIterateResult.\n\nAlternatively, we could keep an array of TBMIterateResults the size of\nthe prefetch window and reuse them -- though I'm not sure where to keep\nit and how to manage it when the window gets resized.\n\nIn my current patch, I allocate and free one TBMIterateResult for each\nblock. The amount of outstanding memory will be #ios in prefetch window\n* sizeof(TBMIterateResult).\n\nWe don't want to always palloc() memory for the TBMIterateResult inside\nof tbm_iterate(), since other users (like GIN) still only need one\nTBMIterateResult.\n\nSo, if the TBMIterateResult is not inside of the TBMIterator and\ntbm_iterate() does not allocate the memory, we need to pass it in as an\noutput parameter, and, if we do that, it felt odd to also return it --\nhence the function signature change.\n\nOne alternative I tried was having the TBMIterator have a pointer to the\nTBMIterateResult and then users of it can allocate the TBMIterateResult\nand set it in the TBMIterator before calling tbm_iterate(). But, then we\nneed to expose the TBMIterator outside of the TID bitmap API. 
Also, it\nfelt weird to have a member of the iterator which must not be NULL when\ntbm_iterate() is called but which isn't set up in tbm_begin_iterate().\n\n>\n>\n> > static bool\n> > heapam_scan_bitmap_next_block(TableScanDesc scan,\n> > -\nTBMIterateResult *tbmres)\n> > + TBMIterateResult **tbmres)\n>\n> ISTM that we possibly shouldn't expose the TBMIterateResult outside of\n> the AM after this change? It feels somewhat like an implementation\n> detail now. It seems somewhat odd to expose a ** to set a pointer that\n> nodeBitmapHeapscan.c then doesn't really deal with itself.\n>\n\nAll the members of the TBMIterateResult are populated in\nbitmapheapscan_pgsr_next_single() and then\nmost of it is used by heapam_scan_bitmap_next_block() to\n - detect error conditions and done-ness\n - fill in the HeapScanDesc with the information needed by\n heapam_scan_bitmap_next_tuple() (rs_cbuf [which is basically\n redundant with TBMIterateResult->buffer] and rs_vistuples)\n\nHowever, some of the information is used up in BitmapHeapNext() and in\nheapam_scan_bitmap_next_tuple() and doesn't go in the HeapScanDesc:\n - BitmapHeapNext() uses the state of the TBMIterateResult to determine\n if the bitmap is exhausted, since the return value of\n table_scan_bitmap_next_block() indicates an error condition and not\n done-ness\n - BitmapHeapNext() uses recheck to determine whether or not to\n recheck qual conditions\n - heapam_scan_bitmap_next_tuple() uses the validity of the buffer to\n determine if it should return empty tuples\n - heapam_scan_bitmap_next_tuple() uses ntuples to determine how many\n empty tuples to return\n\nSo, if we don't want to pass around a TBMIterateResult, we would have to\n1) change the return value of heapam_scan_bitmap_next_block() and 2)\nfind another appropriate place for the information above (or another way\nto represent the encoded information).\n\nIt is also worth noting that heapam_scan_bitmap_next_tuple() took a\nTBMIterateResult before without 
using it, so I assume you foresaw other\ntable AMs using it?\n\nOverall, the whole thing still feels a bit yucky to me. It doesn't quite\nfeel like the right things are in the right places, but, I haven't put\nmy finger on the culprit.\n\nI do think putting the buffer in the TBMIterateResult is an\ninappropriate addition to the TID Bitmap API.\n\nAlso, I would like to move this code:\n\nif (node->tbmres->ntuples >= 0)\nnode->exact_pages++;\nelse\nnode->lossy_pages++;\n\nfrom where it is in BitmapHeapNext(). It seems odd that that is the only\npart of BitmapHeapNext() that reaches inside of the TBMIterateResult.\nAlso, as it is, it is incorrect--it doesn't count the first page. I\ncould duplicate it under the first call to\ntable_scan_bitmap_next_block(), but I wasn't looking forward to doing\nso.\n\n>\n> > @@ -695,8 +693,7 @@ tbm_begin_iterate(TIDBitmap *tbm)\n> > * Create the TBMIterator struct, with enough trailing space to\nserve the\n> > * needs of the TBMIterateResult sub-struct.\n> > */\n> > - iterator = (TBMIterator *) palloc(sizeof(TBMIterator) +\n> > -\nMAX_TUPLES_PER_PAGE * sizeof(OffsetNumber));\n> > + iterator = (TBMIterator *) palloc(sizeof(TBMIterator));\n> > iterator->tbm = tbm;\n>\n> Hm?\n>\n\nI removed the TBMIterateResult from the TBMIterator, so, we should no\nlonger allocate memory for the offsets array when creating the\nTBMIterator.\n\n* I think that having TBMIterateResult inside of TBMIterator is not\n well-defined C language behavior. 
In [1], it says\n\n \"Structures with flexible array members (or unions who have a\n recursive-possibly structure member with flexible array member) cannot\n appear as array elements or as members of other structures.\"\n\n[1] https://en.cppreference.com/w/c/language/struct\n\n",
"msg_date": "Fri, 30 Jul 2021 15:35:30 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn 2021-07-30 15:35:30 -0400, Melanie Plageman wrote:\n> * I think that having TBMIterateResult inside of TBMIterator is not\n> well-defined C language behavior. In [1], it says\n> \n> \"Structures with flexible array members (or unions who have a\n> recursive-possibly structure member with flexible array member) cannot\n> appear as array elements or as members of other structures.\"\n\n> [1] https://en.cppreference.com/w/c/language/struct\n\nI think it is ok as long as the struct with the flexible array member is\nat the end of the struct it is embedded in. I think even by the letter\nof the standard, but it's as always hard to parse...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jul 2021 12:52:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Sat, Jul 31, 2021 at 7:52 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-07-30 15:35:30 -0400, Melanie Plageman wrote:\n> > * I think that having TBMIterateResult inside of TBMIterator is not\n> > well-defined C language behavior. In [1], it says\n> >\n> > \"Structures with flexible array members (or unions who have a\n> > recursive-possibly structure member with flexible array member) cannot\n> > appear as array elements or as members of other structures.\"\n>\n> > [1] https://en.cppreference.com/w/c/language/struct\n>\n> I think it is ok as long as the struct with the flexible array member is\n> at the end of the struct it is embedded in. I think even by the letter\n> of the standard, but it's as always hard to parse...\n\nThat's clearly the de facto situation (I think that was the case on\nthe most popular compilers long before flexible array members were\neven standardised), but I think it might technically still be not\nallowed since this change has not yet been accepted AFAICS:\n\nhttp://www.open-std.org/jtc1/sc22/wg14/www/docs/n2083.htm\n\nIn any case, we already do it which is why wrasse (Sun Studio\ncompiler) warns about indkey in pg_index.h. Curiously, indkey is not\nalways the final member of the containing struct, depending on\nCATALOG_VARLEN...\n\n\n",
"msg_date": "Sat, 31 Jul 2021 12:42:47 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> In any case, we already do it which is why wrasse (Sun Studio\n> compiler) warns about indkey in pg_index.h. Curiously, indkey is not\n> always the final member of the containing struct, depending on\n> CATALOG_VARLEN...\n\nHm? CATALOG_VARLEN is never to be defined, see genbki.h.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 21:17:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 1:37 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Tue, Feb 23, 2021 at 5:04 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > ## AIO API overview\n> >\n> > The main steps to use AIO (without higher level helpers) are:\n> >\n> > 1) acquire an \"unused\" AIO: pgaio_io_get()\n> >\n> > 2) start some IO, this is done by functions like\n> > pgaio_io_start_(read|write|fsync|flush_range)_(smgr|sb|raw|wal)\n> >\n> > The (read|write|fsync|flush_range) indicates the operation, whereas\n> > (smgr|sb|raw|wal) determines how IO completions, errors, ... are handled.\n> >\n> > (see below for more details about this design choice - it might or not be\n> > right)\n> >\n> > 3) optionally: assign a backend-local completion callback to the IO\n> > (pgaio_io_on_completion_local())\n> >\n> > 4) 2) alone does *not* cause the IO to be submitted to the kernel, but to be\n> > put on a per-backend list of pending IOs. The pending IOs can be explicitly\n> > be flushed pgaio_submit_pending(), but will also be submitted if the\n> > pending list gets to be too large, or if the current backend waits for the\n> > IO.\n> >\n> > The are two main reasons not to submit the IO immediately:\n> > - If adjacent, we can merge several IOs into one \"kernel level\" IO during\n> > submission. Larger IOs are considerably more efficient.\n> > - Several AIO APIs allow to submit a batch of IOs in one system call.\n> >\n> > 5) wait for the IO: pgaio_io_wait() waits for an IO \"owned\" by the current\n> > backend. When other backends may need to wait for an IO to finish,\n> > pgaio_io_ref() can put a reference to that AIO in shared memory (e.g. a\n> > BufferDesc), which can be waited for using pgaio_io_wait_ref().\n> >\n> > 6) Process the results of the request. If a callback was registered in 3),\n> > this isn't always necessary. 
The results of AIO can be accessed using\n> > pgaio_io_result() which returns an integer where negative numbers are\n> > -errno, and positive numbers are the [partial] success conditions\n> > (e.g. potentially indicating a short read).\n> >\n> > 7) release ownership of the io (pgaio_io_release()) or reuse the IO for\n> > another operation (pgaio_io_recycle())\n> >\n> >\n> > Most places that want to use AIO shouldn't themselves need to care about\n> > managing the number of writes in flight, or the readahead distance. To help\n> > with that there are two helper utilities, a \"streaming read\" and a \"streaming\n> > write\".\n> >\n> > The \"streaming read\" helper uses a callback to determine which blocks to\n> > prefetch - that allows to do readahead in a sequential fashion but importantly\n> > also allows to asynchronously \"read ahead\" non-sequential blocks.\n> >\n> > E.g. for vacuum, lazy_scan_heap() has a callback that uses the visibility map\n> > to figure out which block needs to be read next. Similarly lazy_vacuum_heap()\n> > uses the tids in LVDeadTuples to figure out which blocks are going to be\n> > needed. 
Here's the latter as an example:\n> > https://github.com/anarazel/postgres/commit/a244baa36bfb252d451a017a273a6da1c09f15a3#diff-3198152613d9a28963266427b380e3d4fbbfabe96a221039c6b1f37bc575b965R1906\n> >\n>\n> Attached is a patch on top of the AIO branch which does bitmapheapscan\n> prefetching using the PgStreamingRead helper already used by sequential\n> scan and vacuum on the AIO branch.\n>\n> The prefetch iterator is removed and the main iterator in the\n> BitmapHeapScanState node is now used by the PgStreamingRead helper.\n>\n...\n>\n> Oh, and I haven't done testing to see how effective the prefetching is\n> -- that is a larger project that I have yet to tackle.\n>\n\nI have done some testing on how effective it is now.\n\nI've also updated the original patch to count the first page (in the\nlossy/exact page counts mentioned down-thread) as well as to remove\nunused prefetch fields and comments.\nI've also included a second patch which adds IO wait time information to\nEXPLAIN output when used like:\n EXPLAIN (buffers, analyze) SELECT ...\n\nThe same commit also introduces a temporary dev GUC\nio_bitmap_prefetch_depth which I am using to experiment with the\nprefetch window size.\n\nI wanted to share some results from changing the prefetch window to\ndemonstrate how prefetching is working.\n\nThe short version of my results is that the prefetching works:\n\n- with the prefetch window set to 1, the IO wait time is 1550 ms\n- with the prefetch window set to 128, the IO wait time is 0.18 ms\n\nDDL and repro details below:\n\nOn Andres' AIO branch [1] with my bitmap heapscan prefetching patch set\napplied built with the following build flags:\n-02 -fno-omit-frame-pointer --with-liburing\n\nAnd these non-default PostgreSQL settings:\n io_data_direct=1\n io_data_force_async=off\n io_method=io_uring\n log_min_duration_statement=0\n log_duration=on\n set track_io_timing to on;\n\n set max_parallel_workers_per_gather to 0;\n set enable_seqscan to off;\n set 
enable_indexscan to off;\n set enable_bitmapscan to on;\n\n set effective_io_concurrency to 128;\n set io_bitmap_prefetch_depth to 128;\n\nUsing this DDL:\n\ndrop table if exists bar;\ncreate table bar(a int, b text, c text, d int);\ncreate index bar_idx on bar(a);\ninsert into bar select i, md5(i::text), 'abcdefghijklmnopqrstuvwxyz',\ni from generate_series(1,1000)i;\ninsert into bar select i%3, md5(i::text),\n'abcdefghijklmnopqrstuvwxyz', i from generate_series(1,1000)i;\ninsert into bar select i, md5(i::text), 'abcdefghijklmnopqrstuvwxyz',\ni from generate_series(1,200)i;\ninsert into bar select i%100, md5(i::text),\n'abcdefghijklmnopqrstuvwxyz', i from generate_series(1,10000000)i;\ninsert into bar select i%2000, md5(i::text),\n'abcdefghijklmnopqrstuvwxyz', i from generate_series(1,10000000)i;\ninsert into bar select i%10, md5(i::text),\n'abcdefghijklmnopqrstuvwxyz', i from generate_series(1,10000000)i;\ninsert into bar select i, md5(i::text), 'abcdefghijklmnopqrstuvwxyz',\ni from generate_series(1,10000000)i;\ninsert into bar select i%100, md5(i::text),\n'abcdefghijklmnopqrstuvwxyz', i from generate_series(1,10000000)i;\ninsert into bar select i, md5(i::text), 'abcdefghijklmnopqrstuvwxyz',\ni from generate_series(1,2000)i;\ninsert into bar select i%10, md5(i::text),\n'abcdefghijklmnopqrstuvwxyz', i from generate_series(1,2000)i;\nanalyze;\n\nAnd this query:\n\nselect * from bar where a > 100 offset 10000000000000;\n\nwith the prefetch window set to 1,\nthe query execution time is:\n5496.129 ms\n\nand IO wait time is:\n1550.915\n\nmplageman=# explain (buffers, analyze, timing off) select * from bar\nwhere a > 100 offset 10000000000000;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Limit (cost=1462959.87..1462959.87 rows=1 width=68) (actual rows=0 loops=1)\n Buffers: shared hit=1 read=280571\n I/O Timings: read=1315.845 wait=1550.915\n -> Bitmap Heap Scan on bar 
(cost=240521.25..1462959.87\nrows=19270298 width=68) (actual rows=19497800 loops=1)\n Recheck Cond: (a > 100)\n Rows Removed by Index Recheck: 400281\n Heap Blocks: exact=47915 lossy=197741\n Buffers: shared hit=1 read=280571\n I/O Timings: read=1315.845 wait=1550.915\n -> Bitmap Index Scan on bar_idx (cost=0.00..235703.67\nrows=19270298 width=0) (actual rows=19497800 loops=1)\n Index Cond: (a > 100)\n Buffers: shared hit=1 read=34915\n I/O Timings: read=1315.845\n Planning:\n Buffers: shared hit=96 read=30\n I/O Timings: read=3.399\n Planning Time: 4.378 ms\n Execution Time: 5473.404 ms\n(18 rows)\n\nwith the prefetch window set to 128,\nthe query execution time is:\n3222 ms\n\nand IO wait time is;\n0.178 ms\n\nmplageman=# explain (buffers, analyze, timing off) select * from bar\nwhere a > 100 offset 10000000000000;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Limit (cost=1462959.87..1462959.87 rows=1 width=68) (actual rows=0 loops=1)\n Buffers: shared hit=1 read=280571\n I/O Timings: read=1339.795 wait=0.178\n -> Bitmap Heap Scan on bar (cost=240521.25..1462959.87\nrows=19270298 width=68) (actual rows=19497800 loops=1)\n Recheck Cond: (a > 100)\n Rows Removed by Index Recheck: 400281\n Heap Blocks: exact=47915 lossy=197741\n Buffers: shared hit=1 read=280571\n I/O Timings: read=1339.795 wait=0.178\n -> Bitmap Index Scan on bar_idx (cost=0.00..235703.67\nrows=19270298 width=0) (actual rows=19497800 loops=1)\n Index Cond: (a > 100)\n Buffers: shared hit=1 read=34915\n I/O Timings: read=1339.795\n Planning:\n Buffers: shared hit=96 read=30\n I/O Timings: read=3.488\n Planning Time: 4.279 ms\n Execution Time: 3434.522 ms\n(18 rows)\n\n- Melanie\n\n[1] https://github.com/anarazel/postgres/tree/aio",
"msg_date": "Mon, 9 Aug 2021 17:27:18 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nAttached is an updated patch AIO series. The major changes are:\n- rebased onto master (Andres)\n- lots of progress on posix AIO backend (Thomas)\n- lots of progress towards a windows native AIO implementation - not yet quite\n merged (Thomas & David)\n- considerably improved \"worker\" io_method (Thomas)\n- some preliminary patches merged (Thomas) and thus dropped\n- error handling overhaul, AIO references now use resource owners\n- quite a few more localized bugfixes\n- further CI improvements\n\nUnfortunately there's a few tests that don't pass on windows. At least some of\nthose failures also happen on master - hence the alternative output file added\nin the last commit.\n\nThanks to Thomas there's now a new wiki page for AIO support:\nhttps://wiki.postgresql.org/wiki/AIO\nIt's currently mostly a shared todo list....\n\nMy own next steps are to try to get some of the preliminary patches merged\ninto master, and to address some of the higher level things that aren't yet\nquite right with the AIO interface, and to split the \"main\" AIO patch into\nsmaller patches.\n\nI hope that we soon send in a new version with native AIO support for\nwindows. I'm mostly interested in that to make sure that we get the shared\ninfrastructure right.\n\nMelanie has some work improving bitmap heap scan AIO support and some IO stats\n/ explain improvements.\n\nI think a decent and reasonably simple example for the way the AIO interface\ncan be used to do faster IO is\nv3-0028-aio-Use-AIO-in-nbtree-vacuum-scan.patch.gz which adds AIO for nbtree\nvacuum. It's not perfectly polished, but I think it shows that it's not too\nhard to add AIO usage to individual once the general infrastructure is in\nplace.\n\nI've attached the code for posterity, but the series is large enough that I\ndon't think it makes sense to do that all that often... The code is at\nhttps://github.com/anarazel/postgres/tree/aio\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 31 Aug 2021 22:56:59 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Wed, Sep 1, 2021 at 5:57 PM Andres Freund <andres@anarazel.de> wrote:\n> - lots of progress on posix AIO backend (Thomas)\n\nA quick note on this piece: Though it's still a work in progress with\na few things that need to be improved, I've tested this on a whole lot\nof different OSes now. I originally tried to use realtime signals\n(big mistake), but after a couple of reworks I think it's starting to\nlook plausible and quite portable. Of the ~10 or so OSes we support\nand test in the build farm, ~8 of them have this API, and of those I\nhave only one unknown: HPUX (I have no access and I am beginning to\nsuspect it is an ex-parrot), and one mysteriously-doesn't-work: NetBSD\n(I'd be grateful for any clues from NetBSD gurus and happy to provide\nbuild/test instructions if anyone would like to take a look).\n\n\n",
"msg_date": "Thu, 23 Sep 2021 18:28:00 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 1, 2021 at 1:57 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> I've attached the code for posterity, but the series is large enough that I\n> don't think it makes sense to do that all that often...\n\nAgreed.\n\n> The code is at\n> https://github.com/anarazel/postgres/tree/aio\n\nJust FYI the cfbot says that this version of the patchset doesn't\napply anymore, and it seems that your branch was only rebased to\n43c1c4f (Sept. 21th) which doesn't rebase cleanly:\n\nerror: could not apply 8a20594f2f... lwlock, xlog: Report caller wait\nevent for LWLockWaitForVar.\n\nSince it's still a WIP and a huge patchset I'm not sure if I should\nswitch the cf entry to Waiting on Author or not as it's probably going\nto rot quite fast anyway. Just to be safe I'll go ahead and change\nthe status. If that's unhelpful just let me know and I'll switch it\nback to needs review, as people motivated enough to review the patch\ncan still work with 43c1c4f as a starting point.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 15:18:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi Andres,\n\n> > The code is at\n> > https://github.com/anarazel/postgres/tree/aio\n>\n> Just FYI the cfbot says that this version of the patchset doesn't\n> apply anymore, and it seems that your branch was only rebased to\n> 43c1c4f (Sept. 21th) which doesn't rebase cleanly:\n\nAfter watching your recent talk \"IO in PostgreSQL: Past, Present,\nFuture\" [1] I decided to invest some of my time into this patchset. It\nlooks like at very least it could use a reviewer, or maybe two :)\nUnfortunately, it's a bit difficult to work with the patchset at the\nmoment. Any chance we may expect a rebased version for the July CF?\n\n> Comments? Questions?\n\nPersonally, I'm very enthusiastic about this patchset. However, a set\nof 39 patches seems to be unrealistic to test and/or review and/or\nkeep up to date. The 64 bit XIDs patchset [2] is much less\ncomplicated, but still it got the feedback that it should be splitted\nto more patches and CF entries. Any chance we could decompose this\neffort?\n\nFor instance, I doubt that we need all the backends in the first\nimplementation. The fallback \"worker\" one, and io_uring one will\nsuffice. Other backends can be added as separate features. Considering\nthat in any case the \"worker\" backend shouldn't cause any significant\nperformance degradation, maybe we could start even without io_uring.\nBTW, do we need Posix AIO at all, given your feedback on this API?\n\nAlso, what if we migrate to AIO/DIO one part of the system at a time?\nAs I understood from your talk, sequential scans will benefit most\nfrom AIO/DIO. Will it be possible to improve them first, while part of\nthe system will continue using buffered IO?\n\n[1]: https://www.youtube.com/watch?v=3Oj7fBAqVTw\n[2]: https://commitfest.postgresql.org/38/3594/\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 10 May 2022 17:01:24 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Wed, Sep 1, 2021 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> Attached is an updated patch AIO series. The major changes are:\n\nHi Andres, is there a plan to get fallocate changes alone first? I think\nfallocate API can help parallel inserts work (bulk relation extension\ncurrently writes zero filled-pages) and make pre-padding while allocating\nWAL files faster.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sun, 15 May 2022 20:41:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "On Tue, Aug 31, 2021 at 10:56:59PM -0700, Andres Freund wrote:\n> I've attached the code for posterity, but the series is large enough that I\n> don't think it makes sense to do that all that often... The code is at\n> https://github.com/anarazel/postgres/tree/aio\n\nI don't know what's the exact status here, but as there has been no\nactivity for the past five months, I have just marked the entry as RwF\nfor now.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:45:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "Hi,\n\nOn 2022-10-12 14:45:26 +0900, Michael Paquier wrote:\n> On Tue, Aug 31, 2021 at 10:56:59PM -0700, Andres Freund wrote:\n> > I've attached the code for posterity, but the series is large enough that I\n> > don't think it makes sense to do that all that often... The code is at\n> > https://github.com/anarazel/postgres/tree/aio\n> \n> I don't know what's the exact status here, but as there has been no\n> activity for the past five months, I have just marked the entry as RwF\n> for now.\n\nWe're trying to get a number of smaller prerequisite patches merged this CF\n(aligned alloc, direction IO, dclist, bulk relation extension, ...). Once\nthat's done I'm planning to send out a new version of the (large) remainder of\nthe changes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Nov 2022 19:02:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
},
{
"msg_contents": "> On Sep 1, 2021, at 13:56, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> Attached is an updated patch AIO series. The major changes are:\n> - rebased onto master (Andres)\n> - lots of progress on posix AIO backend (Thomas)\n> - lots of progress towards a windows native AIO implementation - not yet quite\n> merged (Thomas & David)\n> - considerably improved \"worker\" io_method (Thomas)\n> - some preliminary patches merged (Thomas) and thus dropped\n> - error handling overhaul, AIO references now use resource owners\n> - quite a few more localized bugfixes\n> - further CI improvements\n> \n> Unfortunately there's a few tests that don't pass on windows. At least some of\n> those failures also happen on master - hence the alternative output file added\n> in the last commit.\n> \n> Thanks to Thomas there's now a new wiki page for AIO support:\n> https://wiki.postgresql.org/wiki/AIO\n> It's currently mostly a shared todo list....\n> \n> My own next steps are to try to get some of the preliminary patches merged\n> into master, and to address some of the higher level things that aren't yet\n> quite right with the AIO interface, and to split the \"main\" AIO patch into\n> smaller patches.\n> \n> I hope that we soon send in a new version with native AIO support for\n> windows. I'm mostly interested in that to make sure that we get the shared\n> infrastructure right.\n> \n> Melanie has some work improving bitmap heap scan AIO support and some IO stats\n> / explain improvements.\n> \n> I think a decent and reasonably simple example for the way the AIO interface\n> can be used to do faster IO is\n> v3-0028-aio-Use-AIO-in-nbtree-vacuum-scan.patch.gz which adds AIO for nbtree\n> vacuum. 
It's not perfectly polished, but I think it shows that it's not too\n> hard to add AIO usage to individual once the general infrastructure is in\n> place.\n> \n> I've attached the code for posterity, but the series is large enough that I\n> don't think it makes sense to do that all that often... The code is at\n> https://github.com/anarazel/postgres/tree/aio\n\nHI Andres:\n\nI noticed this feature and did some testing.\ncode in GitHub's aio branch:\nFunction \nstatic void\npgaio_write_smgr_retry(PgAioInProgress *io)\n{\nuint32 off;\nAioBufferTag *tag = &io->scb_data.write_smgr.tag;\nSMgrRelation reln = smgropen(tag->rlocator.locator, tag->rlocator.backend);\n\nio->op_data.read.fd = smgrfd(reln, tag->forkNum, tag->blockNum, &off);\nAssert(off == io->op_data.read.offset);\n}\n\nseems should to be:\nio->op_data.write.fd = smgrfd(reln, tag->forkNum, tag->blockNum, &off);\nAssert(off == io->op_data.write.offset);\n\n\nBest regards,\nWenjing\n\n> \n> Greetings,\n> \n> Andres Freund\n> <v3-0001-windows-Only-consider-us-to-be-running-as-service.patch.gz><v3-0002-WIP-Fix-non-aio-bug-leading-to-checkpointer-not-s.patch.gz><v3-0003-aio-WIP-align-PGAlignedBlock-to-page-size.patch.gz><v3-0004-Add-allocator-support-for-larger-allocation-align.patch.gz><v3-0005-ilist.h-debugging-improvements.patch.gz><v3-0006-lwlock-xlog-Report-caller-wait-event-for-LWLockWa.patch.gz><v3-0007-heapam-Don-t-re-inquire-block-number-for-each-tup.patch.gz><v3-0008-Add-pg_prefetch_mem-macro-to-load-cache-lines.patch.gz><v3-0009-heapam-WIP-cacheline-prefetching-for-hot-pruning.patch.gz><v3-0010-WIP-Change-instr_time-to-just-store-nanoseconds-t.patch.gz><v3-0011-aio-Add-some-error-checking-around-pinning.patch.gz><v3-0012-aio-allow-lwlocks-to-be-unowned.patch.gz><v3-0013-condvar-add-ConditionVariableCancelSleepEx.patch.gz><v3-0014-Use-a-global-barrier-to-fix-DROP-TABLESPACE-on-Wi.patch.gz><v3-0015-pg_buffercache-Add-pg_buffercache_stats.patch.gz><v3-0016-bufmgr-Add-LockBufHdr-fastpath.patch.gz><v3-0017-lw
lock-WIP-add-extended-locking-functions.patch.gz><v3-0018-io-Add-O_DIRECT-non-buffered-IO-mode.patch.gz><v3-0019-io-Increase-default-ringbuffer-size.patch.gz><v3-0020-Use-aux-process-resource-owner-in-walsender.patch.gz><v3-0021-Ensure-a-resowner-exists-for-all-paths-that-may-p.patch.gz><v3-0022-aio-Add-asynchronous-IO-infrastructure.patch.gz><v3-0023-aio-Use-AIO-in-pg_prewarm.patch.gz><v3-0024-aio-Use-AIO-in-bulk-relation-extension.patch.gz><v3-0025-aio-Use-AIO-in-checkpointer-bgwriter.patch.gz><v3-0026-aio-bitmap-heap-scan-Minimal-and-hacky-improvemen.patch.gz><v3-0027-aio-Use-AIO-in-heap-vacuum-s-lazy_scan_heap-and-l.patch.gz><v3-0028-aio-Use-AIO-in-nbtree-vacuum-scan.patch.gz><v3-0029-aio-Use-AIO-for-heap-table-scans.patch.gz><v3-0030-aio-Use-AIO-in-SyncDataDirectory.patch.gz><v3-0031-aio-Use-AIO-in-ProcessSyncRequests.patch.gz><v3-0032-aio-wal-concurrent-WAL-flushes.patch.gz><v3-0033-wal-Use-LWLockAcquireOrWait-in-AdvanceXLInsertBuf.patch.gz><v3-0034-wip-wal-async-commit-reduce-frequency-of-latch-se.patch.gz><v3-0035-aio-wal-padding-of-partial-records.patch.gz><v3-0036-aio-wal-extend-pg_stat_wal.patch.gz><v3-0037-aio-initial-sketch-for-design-document.patch.gz><v3-0038-aio-CI-and-README.md.patch.gz><v3-0039-XXX-Add-temporary-workaround-for-partition_prune-.patch.gz>\n\n\n2021年9月1日 13:56,Andres Freund <andres@anarazel.de> 写道:Hi,Attached is an updated patch AIO series. The major changes are:- rebased onto master (Andres)- lots of progress on posix AIO backend (Thomas)- lots of progress towards a windows native AIO implementation - not yet quite merged (Thomas & David)- considerably improved \"worker\" io_method (Thomas)- some preliminary patches merged (Thomas) and thus dropped- error handling overhaul, AIO references now use resource owners- quite a few more localized bugfixes- further CI improvementsUnfortunately there's a few tests that don't pass on windows. 
At least some ofthose failures also happen on master - hence the alternative output file addedin the last commit.Thanks to Thomas there's now a new wiki page for AIO support:https://wiki.postgresql.org/wiki/AIOIt's currently mostly a shared todo list....My own next steps are to try to get some of the preliminary patches mergedinto master, and to address some of the higher level things that aren't yetquite right with the AIO interface, and to split the \"main\" AIO patch intosmaller patches.I hope that we soon send in a new version with native AIO support forwindows. I'm mostly interested in that to make sure that we get the sharedinfrastructure right.Melanie has some work improving bitmap heap scan AIO support and some IO stats/ explain improvements.I think a decent and reasonably simple example for the way the AIO interfacecan be used to do faster IO isv3-0028-aio-Use-AIO-in-nbtree-vacuum-scan.patch.gz which adds AIO for nbtreevacuum. It's not perfectly polished, but I think it shows that it's not toohard to add AIO usage to individual once the general infrastructure is inplace.I've attached the code for posterity, but the series is large enough that Idon't think it makes sense to do that all that often... 
The code is athttps://github.com/anarazel/postgres/tree/aioHI Andres:I noticed this feature and did some testing.\ncode in GitHub's aio branch:Function static void\npgaio_write_smgr_retry(PgAioInProgress *io)\n{\nuint32 off;\nAioBufferTag *tag = &io->scb_data.write_smgr.tag;\nSMgrRelation reln = smgropen(tag->rlocator.locator, tag->rlocator.backend);\n\nio->op_data.read.fd = smgrfd(reln, tag->forkNum, tag->blockNum, &off);\nAssert(off == io->op_data.read.offset);\n}seems should to be:\nio->op_data.write.fd = smgrfd(reln, tag->forkNum, tag->blockNum, &off);\nAssert(off == io->op_data.write.offset);Best regards,WenjingGreetings,Andres Freund<v3-0001-windows-Only-consider-us-to-be-running-as-service.patch.gz><v3-0002-WIP-Fix-non-aio-bug-leading-to-checkpointer-not-s.patch.gz><v3-0003-aio-WIP-align-PGAlignedBlock-to-page-size.patch.gz><v3-0004-Add-allocator-support-for-larger-allocation-align.patch.gz><v3-0005-ilist.h-debugging-improvements.patch.gz><v3-0006-lwlock-xlog-Report-caller-wait-event-for-LWLockWa.patch.gz><v3-0007-heapam-Don-t-re-inquire-block-number-for-each-tup.patch.gz><v3-0008-Add-pg_prefetch_mem-macro-to-load-cache-lines.patch.gz><v3-0009-heapam-WIP-cacheline-prefetching-for-hot-pruning.patch.gz><v3-0010-WIP-Change-instr_time-to-just-store-nanoseconds-t.patch.gz><v3-0011-aio-Add-some-error-checking-around-pinning.patch.gz><v3-0012-aio-allow-lwlocks-to-be-unowned.patch.gz><v3-0013-condvar-add-ConditionVariableCancelSleepEx.patch.gz><v3-0014-Use-a-global-barrier-to-fix-DROP-TABLESPACE-on-Wi.patch.gz><v3-0015-pg_buffercache-Add-pg_buffercache_stats.patch.gz><v3-0016-bufmgr-Add-LockBufHdr-fastpath.patch.gz><v3-0017-lwlock-WIP-add-extended-locking-functions.patch.gz><v3-0018-io-Add-O_DIRECT-non-buffered-IO-mode.patch.gz><v3-0019-io-Increase-default-ringbuffer-size.patch.gz><v3-0020-Use-aux-process-resource-owner-in-walsender.patch.gz><v3-0021-Ensure-a-resowner-exists-for-all-paths-that-may-p.patch.gz><v3-0022-aio-Add-asynchronous-IO-infrastructure.patch.gz><v
3-0023-aio-Use-AIO-in-pg_prewarm.patch.gz><v3-0024-aio-Use-AIO-in-bulk-relation-extension.patch.gz><v3-0025-aio-Use-AIO-in-checkpointer-bgwriter.patch.gz><v3-0026-aio-bitmap-heap-scan-Minimal-and-hacky-improvemen.patch.gz><v3-0027-aio-Use-AIO-in-heap-vacuum-s-lazy_scan_heap-and-l.patch.gz><v3-0028-aio-Use-AIO-in-nbtree-vacuum-scan.patch.gz><v3-0029-aio-Use-AIO-for-heap-table-scans.patch.gz><v3-0030-aio-Use-AIO-in-SyncDataDirectory.patch.gz><v3-0031-aio-Use-AIO-in-ProcessSyncRequests.patch.gz><v3-0032-aio-wal-concurrent-WAL-flushes.patch.gz><v3-0033-wal-Use-LWLockAcquireOrWait-in-AdvanceXLInsertBuf.patch.gz><v3-0034-wip-wal-async-commit-reduce-frequency-of-latch-se.patch.gz><v3-0035-aio-wal-padding-of-partial-records.patch.gz><v3-0036-aio-wal-extend-pg_stat_wal.patch.gz><v3-0037-aio-initial-sketch-for-design-document.patch.gz><v3-0038-aio-CI-and-README.md.patch.gz><v3-0039-XXX-Add-temporary-workaround-for-partition_prune-.patch.gz>",
"msg_date": "Tue, 17 Jan 2023 17:27:11 +0800",
"msg_from": "Wenjing Zeng <wjzeng2012@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous and \"direct\" IO support for PostgreSQL."
}
] |
[
{
"msg_contents": "Hi all. I have a library that helps with querying Postgres from TypeScript, and a user just filed this issue:\n\n https://github.com/jawj/zapatos/issues/74\n\nThe library uses the xmax method (ubiquitous on Stack Overflow) to detect whether an upsert query resulted in an insert or an update. It seems this is unreliable.\n\nA request was made back in 2019 for an official method to be implemented, but as far as I can see it got no replies and is not on any roadmap:\n\n https://www.postgresql.org/message-id/1565486215.7551.0%40finefun.com.au\n\nThis is a +1 for that request.\n\nI’ve not previously contributed to Postgres, but if nobody else wants to take this on I might be willing to try, ideally with a bit of guidance on where to look and what to do.\n\nAll the best,\nGeorge\n\n",
"msg_date": "Tue, 23 Feb 2021 11:56:03 +0000",
"msg_from": "George MacKerron <george@mackerron.co.uk>",
"msg_from_op": true,
"msg_subject": "INSERT ... ON CONFLICT ... : expose INSERT vs UPDATE status"
}
] |
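The xmax-based workaround that the thread above asks to replace is usually written as a RETURNING clause on the upsert statement. A sketch of that idiom follows; the table and column names are hypothetical, and, as the linked issue shows, the trick is not fully reliable, which is exactly the motivation for an official mechanism:

```python
# Sketch of the widely circulated (but unreliable) xmax trick for
# detecting whether INSERT ... ON CONFLICT inserted or updated.
# Table/column names are hypothetical; the query string would be sent
# through any driver. (xmax = 0) holds for a freshly inserted row
# version and is non-zero when the ON CONFLICT UPDATE arm fired.
def build_upsert(table: str, key: str, value_col: str) -> str:
    return (
        f"INSERT INTO {table} ({key}, {value_col}) VALUES ($1, $2) "
        f"ON CONFLICT ({key}) DO UPDATE SET {value_col} = EXCLUDED.{value_col} "
        f"RETURNING (xmax = 0) AS inserted"
    )

query = build_upsert("accounts", "id", "balance")
```

Reading the boolean `inserted` column of the result then tells the caller which arm ran, modulo the edge cases reported in the linked issue.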
[
{
"msg_contents": "Hi hackers,\n\nHere is a scenario that segfault during delete (with version >= 12):\n\ncreate table parent (\ncol1 text primary key\n);\n\ncreate table child (\ncol1 text primary key,\nFOREIGN KEY (col1) REFERENCES parent(col1) on delete cascade\n);\n\nCREATE or replace FUNCTION trigger_function()\nRETURNS TRIGGER\nLANGUAGE PLPGSQL\nAS $$\nBEGIN\nraise notice 'trigger = %, old table = %',\n TG_NAME,\n (select string_agg(old_table::text, ', ' order by col1) from \nold_table);\nreturn NULL;\nEND;\n$$\n;\n\ncreate trigger bdt_trigger after delete on child REFERENCING OLD TABLE \nAS old_table for each statement EXECUTE function trigger_function();\n\nalter table child add column col2 text not null default 'tutu';\ninsert into parent(col1) values ('1');\ninsert into child(col1) values ('1');\ninsert into parent(col1) values ('2');\ninsert into child(col1) values ('2');\ndelete from parent;\n\nproduces:\n\nCREATE TABLE\nCREATE TABLE\nCREATE FUNCTION\nCREATE TRIGGER\nALTER TABLE\nINSERT 0 1\nINSERT 0 1\nINSERT 0 1\nINSERT 0 1\npsql:./segfault.repro.sql:35: server closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\npsql:./segfault.repro.sql:35: fatal: connection to server was lost\n\nThe column being added to the child relation needs to have a default \nvalue for the segfault to be triggered.\n\nThe stack is the following:\n\n#0 0x000000000049aa59 in execute_attr_map_slot (attrMap=0x11736c8, \nin_slot=0x1179c90, out_slot=0x1179da8) at tupconvert.c:193\n#1 0x0000000000700ec0 in AfterTriggerSaveEvent (estate=0x1171820, \nrelinfo=0x1171cc8, event=1, row_trigger=true, oldslot=0x1179c90, \nnewslot=0x0, recheckIndexes=0x0, modifiedCols=0x0, \ntransition_capture=0x1172438) at trigger.c:5488\n#2 0x00000000006fc6a8 in ExecARDeleteTriggers (estate=0x1171820, \nrelinfo=0x1171cc8, tupleid=0x7ffd2b7f2e40, fdw_trigtuple=0x0, \ntransition_capture=0x1172438) at trigger.c:2565\n#3 
0x0000000000770794 in ExecDelete (mtstate=0x1171a90, \nresultRelInfo=0x1171cc8, tupleid=0x7ffd2b7f2e40, oldtuple=0x0, \nplanSlot=0x1173630, epqstate=0x1171b88, estate=0x1171820, \nprocessReturning=true, canSetTag=true, changingPart=false, \ntupleDeleted=0x0, epqreturnslot=0x0)\n at nodeModifyTable.c:1128\n#4 0x00000000007724cc in ExecModifyTable (pstate=0x1171a90) at \nnodeModifyTable.c:2259\n\nI had a look at it, and it looks to me that the slot being reused in \nAfterTriggerSaveEvent() (when (map != NULL) and not (!storeslot)) has \npreviously been damaged by FreeExecutorState() in standard_ExecutorEnd().\n\nThen I ended up with the enclosed patch proposal that does not make the \nrepro segfaulting and that is passing make check too.\n\nNot sure that what is being done in the patch is the right/best approach \nto solve the issue though.\n\nBertrand\n\nPS: the same repro with a foreign key with update cascade, an update \ntrigger and an update on the col1 column would segfault too (but does \nnot with the enclosed patch proposal).",
"msg_date": "Tue, 23 Feb 2021 21:56:28 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[BUG] segfault during delete"
},
{
"msg_contents": "Hi Bertrand,\n\nOn Wed, Feb 24, 2021 at 5:56 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> Hi hackers,\n>\n> Here is a scenario that segfault during delete (with version >= 12):\n>\n> create table parent (\n> col1 text primary key\n> );\n>\n> create table child (\n> col1 text primary key,\n> FOREIGN KEY (col1) REFERENCES parent(col1) on delete cascade\n> );\n>\n> CREATE or replace FUNCTION trigger_function()\n> RETURNS TRIGGER\n> LANGUAGE PLPGSQL\n> AS $$\n> BEGIN\n> raise notice 'trigger = %, old table = %',\n> TG_NAME,\n> (select string_agg(old_table::text, ', ' order by col1) from old_table);\n> return NULL;\n> END;\n> $$\n> ;\n>\n> create trigger bdt_trigger after delete on child REFERENCING OLD TABLE AS old_table for each statement EXECUTE function trigger_function();\n>\n> alter table child add column col2 text not null default 'tutu';\n> insert into parent(col1) values ('1');\n> insert into child(col1) values ('1');\n> insert into parent(col1) values ('2');\n> insert into child(col1) values ('2');\n> delete from parent;\n>\n> produces:\n>\n> CREATE TABLE\n> CREATE TABLE\n> CREATE FUNCTION\n> CREATE TRIGGER\n> ALTER TABLE\n> INSERT 0 1\n> INSERT 0 1\n> INSERT 0 1\n> INSERT 0 1\n> psql:./segfault.repro.sql:35: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> psql:./segfault.repro.sql:35: fatal: connection to server was lost\n>\n> The column being added to the child relation needs to have a default value for the segfault to be triggered.\n>\n> The stack is the following:\n>\n> #0 0x000000000049aa59 in execute_attr_map_slot (attrMap=0x11736c8, in_slot=0x1179c90, out_slot=0x1179da8) at tupconvert.c:193\n> #1 0x0000000000700ec0 in AfterTriggerSaveEvent (estate=0x1171820, relinfo=0x1171cc8, event=1, row_trigger=true, oldslot=0x1179c90, newslot=0x0, recheckIndexes=0x0, modifiedCols=0x0, transition_capture=0x1172438) at trigger.c:5488\n> #2 
0x00000000006fc6a8 in ExecARDeleteTriggers (estate=0x1171820, relinfo=0x1171cc8, tupleid=0x7ffd2b7f2e40, fdw_trigtuple=0x0, transition_capture=0x1172438) at trigger.c:2565\n> #3 0x0000000000770794 in ExecDelete (mtstate=0x1171a90, resultRelInfo=0x1171cc8, tupleid=0x7ffd2b7f2e40, oldtuple=0x0, planSlot=0x1173630, epqstate=0x1171b88, estate=0x1171820, processReturning=true, canSetTag=true, changingPart=false, tupleDeleted=0x0, epqreturnslot=0x0)\n> at nodeModifyTable.c:1128\n> #4 0x00000000007724cc in ExecModifyTable (pstate=0x1171a90) at nodeModifyTable.c:2259\n>\n> I had a look at it, and it looks to me that the slot being reused in AfterTriggerSaveEvent() (when (map != NULL) and not (!storeslot)) has previously been damaged by FreeExecutorState() in standard_ExecutorEnd().\n\nRight.\n\n> Then I ended up with the enclosed patch proposal that does not make the repro segfaulting and that is passing make check too.\n>\n> Not sure that what is being done in the patch is the right/best approach to solve the issue though.\n\nHmm, I don't think we should be *freshly* allocating the\nTupleTableSlot every time. Not having to do that is why the commit\nff11e7f4b9ae0 seems to have invented AfterTriggersTableData.storeslot\nin the first place, that is, to cache the slot once created.\n\nProblem with the current way as you've discovered is that the slot\ngets allocated in the execution-span memory context, whereas the\nAfterTriggersTableData instance, of which the slot is a part, is\nsupposed to last the entire (sub-)transaction. So the correct fix I\nthink is to allocate the slot to be stored in\nAfterTriggersTableData.storeslot in the transaction-span memory\ncontext as well.\n\nHaving taken care of that, another problem with the current way is\nthat it adds the slot to es_tupleTable by calling\nExecAllocTupleSlot(), which means that the slot will be released when\nExecResetTupleTable() is called on es_tupleTable as part of\nExecEndPlan(). 
That would defeat the point of allocating the slot in\nthe transaction-span context. So let's use MakeSingleTupleTableSlot() to\nallocate a standalone slot to be stored in\nAfterTriggersTableData.storeslot and have AfterTriggerFreeQuery() call\nExecDropSingleTupleTableSlot() to release it.\n\n> PS: the same repro with a foreign key with update cascade, an update trigger and an update on the col1 column would segfault too (but does not with the enclosed patch proposal).\n\nActually, we also need to fix similar code in the block for populating\nthe transition NEW TABLE, because without doing so that block too can\ncrash similarly:\n\n    if (!TupIsNull(newslot) &&\n        ((event == TRIGGER_EVENT_INSERT && insert_new_table) ||\n         (event == TRIGGER_EVENT_UPDATE && update_new_table)))\n\nI've attached a patch with my suggested fixes and also test cases.\nPlease take a look.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Feb 2021 17:12:44 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] segfault during delete"
},
{
"msg_contents": "Hi Amit,\n\nOn 2/24/21 9:12 AM, Amit Langote wrote:\n> Hi Bertrand,\n>\n> On Wed, Feb 24, 2021 at 5:56 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> Hi hackers,\n>>\n>> Here is a scenario that segfault during delete (with version >= 12):\n>>\n>> create table parent (\n>> col1 text primary key\n>> );\n>>\n>> create table child (\n>> col1 text primary key,\n>> FOREIGN KEY (col1) REFERENCES parent(col1) on delete cascade\n>> );\n>>\n>> CREATE or replace FUNCTION trigger_function()\n>> RETURNS TRIGGER\n>> LANGUAGE PLPGSQL\n>> AS $$\n>> BEGIN\n>> raise notice 'trigger = %, old table = %',\n>> TG_NAME,\n>> (select string_agg(old_table::text, ', ' order by col1) from old_table);\n>> return NULL;\n>> END;\n>> $$\n>> ;\n>>\n>> create trigger bdt_trigger after delete on child REFERENCING OLD TABLE AS old_table for each statement EXECUTE function trigger_function();\n>>\n>> alter table child add column col2 text not null default 'tutu';\n>> insert into parent(col1) values ('1');\n>> insert into child(col1) values ('1');\n>> insert into parent(col1) values ('2');\n>> insert into child(col1) values ('2');\n>> delete from parent;\n>>\n>> produces:\n>>\n>> CREATE TABLE\n>> CREATE TABLE\n>> CREATE FUNCTION\n>> CREATE TRIGGER\n>> ALTER TABLE\n>> INSERT 0 1\n>> INSERT 0 1\n>> INSERT 0 1\n>> INSERT 0 1\n>> psql:./segfault.repro.sql:35: server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> psql:./segfault.repro.sql:35: fatal: connection to server was lost\n>>\n>> The column being added to the child relation needs to have a default value for the segfault to be triggered.\n>>\n>> The stack is the following:\n>>\n>> #0 0x000000000049aa59 in execute_attr_map_slot (attrMap=0x11736c8, in_slot=0x1179c90, out_slot=0x1179da8) at tupconvert.c:193\n>> #1 0x0000000000700ec0 in AfterTriggerSaveEvent (estate=0x1171820, relinfo=0x1171cc8, event=1, row_trigger=true, oldslot=0x1179c90, 
newslot=0x0, recheckIndexes=0x0, modifiedCols=0x0, transition_capture=0x1172438) at trigger.c:5488\n>> #2 0x00000000006fc6a8 in ExecARDeleteTriggers (estate=0x1171820, relinfo=0x1171cc8, tupleid=0x7ffd2b7f2e40, fdw_trigtuple=0x0, transition_capture=0x1172438) at trigger.c:2565\n>> #3 0x0000000000770794 in ExecDelete (mtstate=0x1171a90, resultRelInfo=0x1171cc8, tupleid=0x7ffd2b7f2e40, oldtuple=0x0, planSlot=0x1173630, epqstate=0x1171b88, estate=0x1171820, processReturning=true, canSetTag=true, changingPart=false, tupleDeleted=0x0, epqreturnslot=0x0)\n>> at nodeModifyTable.c:1128\n>> #4 0x00000000007724cc in ExecModifyTable (pstate=0x1171a90) at nodeModifyTable.c:2259\n>>\n>> I had a look at it, and it looks to me that the slot being reused in AfterTriggerSaveEvent() (when (map != NULL) and not (!storeslot)) has previously been damaged by FreeExecutorState() in standard_ExecutorEnd().\n> Right.\n\nThanks for reviewing the issue and the patch!\n\n>\n>> Then I ended up with the enclosed patch proposal that does not make the repro segfaulting and that is passing make check too.\n>>\n>> Not sure that what is being done in the patch is the right/best approach to solve the issue though.\n> Hmm, I don't think we should be *freshly* allocating the\n> TupleTableSlot every time. 
Not having to do that is why the commit\n> ff11e7f4b9ae0 seems to have invented AfterTriggersTableData.storeslot\n> in the first place, that is, to cache the slot once created.\nOh, ok thanks for the explanation.\n>\n> Problem with the current way as you've discovered is that the slot\n> gets allocated in the execution-span memory context, whereas the\n> AfterTriggersTableData instance, of which the slot is a part, is\n> supposed to last the entire (sub-)transaction.\nRight.\n> So the correct fix I\n> think is to allocate the slot to be stored in\n> AfterTriggersTableData.storeslot in the transaction-span memory\n> context as well.\nRight, that makes more sense that what my patch was doing.\n> Having taken care of that, another problem with the current way is\n> that it adds the slot to es_tupleTable by calling\n> ExecAllocTupleSlot(), which means that the slot will be released when\n> ExecResetTupleTable() is called on es_tupleTable as part of\n> ExecEndPlan(). That would defeat the point of allocating the slot in\n> the transaction-span context. So let's use MakeSingleTableSlot() to\n> allocate a standalone slot to be stored in\n> AfterTriggersTableData.storeslot and have AfterTriggerFreeQuery() call\n> ExecDropSingleTupleTableSlot() to release it.\n>\n+1\n>> PS: the same repro with a foreign key with update cascade, an update trigger and an update on the col1 column would segfault too (but does not with the enclosed patch proposal).\n> Actually, we also need to fix similar code in the block for populating\n> the transition NEW TABLE, because without doing so that block too can\n> crash similarly:\n>\n> if (!TupIsNull(newslot) &&\n> ((event == TRIGGER_EVENT_INSERT && insert_new_table) ||\n> (event == TRIGGER_EVENT_UPDATE && update_new_table)))\n>\n> I've attached a patch with my suggested fixes and also test cases.\n> Please take a look.\n\nI had a look and it looks good to me. 
Also the new regression tests are \ndoing it right and are segfaulting without your patch.\n\nThat's all good to me.\n\nThat fix should be back ported to 12 and 13, do you agree?\n\nThanks a lot for your help and explanations!\n\nBertrand\n\n\n\n",
"msg_date": "Wed, 24 Feb 2021 10:11:44 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] segfault during delete"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 6:12 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> On 2/24/21 9:12 AM, Amit Langote wrote:\n> > I've attached a patch with my suggested fixes and also test cases.\n> > Please take a look.\n>\n> I had a look and it looks good to me. Also the new regression tests are\n> doing it right and are segfaulting without your patch.\n>\n> That's all good to me.\n\nThanks.\n\n> That fix should be back ported to 12 and 13, do you agree?\n\nYeah, I think so.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Feb 2021 21:59:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] segfault during delete"
},
{
"msg_contents": "\nOn 2/24/21 1:59 PM, Amit Langote wrote:\n>\n> On Wed, Feb 24, 2021 at 6:12 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> On 2/24/21 9:12 AM, Amit Langote wrote:\n>>> I've attached a patch with my suggested fixes and also test cases.\n>>> Please take a look.\n>> I had a look and it looks good to me. Also the new regression tests are\n>> doing it right and are segfaulting without your patch.\n>>\n>> That's all good to me.\n> Thanks.\n>\n>> That fix should be back ported to 12 and 13, do you agree?\n> Yeah, I think so.\ngreat, patch added to the CF.\n\nBertrand\n\n\n\n",
"msg_date": "Wed, 24 Feb 2021 14:05:23 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] segfault during delete"
},
{
"msg_contents": "Thanks Amit for working on this fix! It seems correct to me, so I pushed it with trivial changes. Thanks Bertrand for reporting the problem.\n\nIn addition to backpatching the code fix to pg12, I backpatched the test case to pg11. It worked fine for me (with no code changes), but it seems good to have it there just to make sure the buildfarm agrees with us on this.",
"msg_date": "Sat, 27 Feb 2021 18:14:04 -0300",
"msg_from": "=?UTF-8?Q?=C3=81lvaro_Herrera?= <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] segfault during delete"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 6:14 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Thanks Amit for working on this fix! It seems correct to me, so I pushed it with trivial changes. Thanks Bertrand for reporting the problem.\n\nThanks Alvaro.\n\n> In addition to backpatching the code fix to pg12, I backpatched the test case to pg11. It worked fine for me (with no code changes), but it seems good to have it there just to make sure the buildfarm agrees with us on this.\n\nAh, good call.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Mar 2021 11:22:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] segfault during delete"
},
{
"msg_contents": "Thanks Amit and Alvaro for having helped to fix this bug so quickly.\n\nBertrand\n\nOn 3/1/21 3:22 AM, Amit Langote wrote:\n> On Sun, Feb 28, 2021 at 6:14 AM Álvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> Thanks Amit for working on this fix! It seems correct to me, so I pushed it with trivial changes. Thanks Bertrand for reporting the problem.\n> Thanks Alvaro.\n>\n>> In addition to backpatching the code fix to pg12, I backpatched the test case to pg11. It worked fine for me (with no code changes), but it seems good to have it there just to make sure the buildfarm agrees with us on this.\n> Ah, good call.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Mar 2021 08:17:04 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] segfault during delete"
}
] |
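The root cause discussed in the thread above is a lifetime mismatch: the cached transition-capture slot was allocated in the per-query (execution-span) memory context while the structure caching it lives for the whole transaction. A toy Python model of that mismatch follows; it is purely illustrative, not PostgreSQL code:

```python
# Toy model of the memory-context lifetime bug: an arena whose free()
# invalidates everything allocated from it, standing in for a
# PostgreSQL MemoryContext.
class Arena:
    def __init__(self, name):
        self.name = name
        self.objects = []
        self.freed = False

    def alloc(self, obj):
        assert not self.freed, f"allocation from freed arena {self.name}"
        self.objects.append(obj)
        return obj

    def free(self):
        self.freed = True  # everything allocated here is now dangling


transaction_ctx = Arena("transaction-span")
per_query_ctx = Arena("execution-span")

# Buggy pattern: the cached slot lives in the short-lived context ...
cached_slot = per_query_ctx.alloc({"slot": "storeslot"})
per_query_ctx.free()                    # FreeExecutorState() at end of query 1
slot_is_dangling = per_query_ctx.freed  # ... so query 2 would touch freed memory

# Fixed pattern (per the committed fix): allocate the cached slot in the
# transaction-span context so it survives across queries, and release it
# explicitly when the trigger data is freed.
safe_slot = transaction_ctx.alloc({"slot": "storeslot"})
next_query_ctx = Arena("execution-span-2")
next_query_ctx.free()
assert not transaction_ctx.freed  # safe_slot is still valid for the next query
```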
[
{
"msg_contents": "Dear Hackers,\n\nIn the previous discussion [1], we noticed that ECPG cannot accept an IPv6\nconnection string; that is, the following statement does not work:\n\nEXEC SQL CONNECT TO 'tcp:postgresql://::1/postgres';\n\nThis is because the colons get entangled in ECPGconnect(), \nand Wang suggested that we should support IPv6 the way libpq does:\n\n> The host part may be either host name or an IP address.\n> To specify an IPv6 host address, enclose it in square brackets:\n\nTo implement the suggestion, the square brackets must be searched for first,\nwhich means some refactoring is needed in ECPGconnect().\n\nI attached two patches: 0001 contains some refactoring, and 0002 contains\nfixes for accepting IPv6. Currently the following statement can be parsed:\n\nEXEC SQL CONNECT TO 'tcp:postgresql://[::1]/postgres';\n\nI think this is WIP, because some problems remain:\n\n* Only an SQL literal or a host variable is acceptable.\n I understand we should support other notations; I am still working on that.\n* parse_options() was not refactored because\n it does not affect parsing of the host.\n I will try it if it should be done.\n* The new parse functions have similar parts,\n but I did not unify them because the accepted characters\n are different.\n\nWhat do you think?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 24 Feb 2021 01:42:57 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "Hi, Kuroda-san:\n\nKuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n\n> * parse_options() was not refactored because\n> it does not affect to parsing the host.\n> I will try it if should be.\n\nIt seems the host can only be the name of a server; please refer to [1].\nAnd if I use the command:\n\t./bin/psql \"postgresql://server1:26000/postgres?host=[::1]\"\n\nthe error report is:\n\tpsql: error: could not translate host name \"[::1]\" to address: Name or service not known\n\nSo, I think parse_options() does not need to be refactored.\n\n> How do you think?\n\nIn the patch:\n> ecpg_log(\"end of string reached when looking for matching \\\"]\\\" in IPv6 host address: \\\"%s\\\"\\n\", buf);\n\nI think we can use the same message as the one in fe-connect.c:\n> libpq_gettext(\"end of string reached when looking for matching \\\"]\\\" in IPv6 host address in URI: \\\"%s\\\"\\n\"),\n\nBTW, in fe-connect.c:\n\n\t\t\tif (*p && *p != ':' && *p != '/' && *p != '?' && *p != ',')\n\t\t\t{\n\t\t\t\tappendPQExpBuffer(errorMessage,\n\t\t\t\t\t\t\t\t libpq_gettext(\"unexpected character \\\"%c\\\" at position %d in URI (expected \\\":\\\" or \\\"/\\\"): \\\"%s\\\"\\n\"),\n\t\t\t\t\t\t\t\t *p, (int) (p - buf + 1), uri);\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\nMaybe we can add the expected characters, like (expected ':', '/', '?' or ',').\n\nThis patch looks good to me; I will review it in more detail later.\n\n[1] https://www.postgresql.org/docs/13/libpq-connect.html#LIBPQ-PARAMKEYWORDS\n",
"msg_date": "Wed, 24 Feb 2021 05:53:13 +0000",
"msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "Dear Wang,\n\nThank you for your comments!\nI forgot to mention that the parse functions imitate libpq's functions,\nbut you understood that immediately. Genius!\n\n> So, I think parse_options() is not need to be refactored.\n\nOK.\n\n> I think we can use the message as same as the message in fe-connect.c:\n> > libpq_gettext(\"end of string reached when looking for matching \\\"]\\\" in IPv6 host address in URI: \\\"%s\\\"\\n\"),\n\nThe word \"URI\" is not used in the ECPG docs and source comments, so I removed it.\nIf we want to add it, we should define \"URI\" in the ECPG context.\n\n> Maybe we can add the expected character, like (expected ':', '/', '?' or ',')\n\nFixed, but I think ',' is not allowed in ECPG.\nAnd I did not add \"URI\" for the reason above.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 24 Feb 2021 06:33:12 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "Sorry for sending again.\n\n> * Only an SQL literal or a host variable is acceptable.\n> I understand we should support other notations, but now hacking.\n\nI tried to add support for another notation. Now an unquoted string can be used.\nIn the flex file, an IPv6 string is parsed together with its square brackets, meaning\nthe following string is recognized as one pattern: [::1].\nThis is because the string \"::\" overlaps with the typecast definition.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 25 Feb 2021 00:58:33 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "Dear Hackers,\n\nI reviewed the patches myself and fixed a few things:\n\n* refactor parse_options(), in the same way as conninfo_uri_parse_params() in libpq\n Skipping blanks is needed in this function because the ecpg precompiler adds additional blanks\n between option parameters. I did not fix the precompiler for compatibility reasons.\n If it changes, SO_MAJOR_VERSION may also have to be changed.\n* update docs\n\nparse_newstyle()/parse_oldstyle() are not changed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Fri, 5 Mar 2021 01:56:38 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 01:56:38AM +0000, kuroda.hayato@fujitsu.com wrote:\n> I reviewed for myself and fixed something:\n> \n> * refactor parse_options(), same as conninfo_uri_parse_params() in libpq\n> Skipping blanks is needed in this functions because ecpg precompiler add additional blanks\n> between option parameters. I did not fix precompiler because of the compatibility.\n> If it changes, maybe SO_MAJOR_VERSION will be also changed.\n> * update doc\n\nAs you are writing in your first bullet point and as mentioned\nupthread, it does not strike me as a great idea to have a duplicate\nlogic doing the parsing of URIs, even if libpq accepts multiple\nhosts/ports as an option. Couldn't we have a better consolidation\nhere?\n--\nMichael",
"msg_date": "Fri, 18 Jun 2021 15:59:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "Dear Michael,\n\nThank you for replying!\n\n> it does not strike me as a great idea to have a duplicate\n> logic doing the parsing of URIs, even if libpq accepts multiple\n> hosts/ports as an option.\n\nYeah, I agree with your argument that the duplicated parse function should be removed.\n\nECPG parses the connection string because it uses PQconnectdbParams()\neven if the target is specified in the new style;\nhence my basic idea is that the parsing can be skipped by calling PQconnectdb() instead.\n\nI will try to remake the patches based on this idea.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 21 Jun 2021 10:46:18 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Refactor ECPGconnect and allow IPv6 connection"
},
{
"msg_contents": "On Mon, Jun 21, 2021 at 3:46 AM kuroda.hayato@fujitsu.com <\nkuroda.hayato@fujitsu.com> wrote:\n\n> I will try to remake patches based on the idea.\n>\n\nBased upon this comment, and the ongoing discussion about commitfest volume\nand complexity, I've moved this to \"Waiting on Author\".\n\nDavid J.",
"msg_date": "Sat, 7 Aug 2021 13:36:41 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor ECPGconnect and allow IPv6 connection"
}
] |
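The parsing rule that these ECPG patches borrow from libpq can be summarized as: when the host part starts with "[", everything up to the matching "]" is the IPv6 host, and the character after "]" must be one of ":", "/" or "?". An illustrative reimplementation in Python follows (a sketch of the rule, not the actual ECPG/libpq code):

```python
# Illustrative sketch (not the ECPG/libpq code) of the host-parsing rule
# discussed above: an IPv6 host must be enclosed in square brackets so
# its colons are not confused with the host:port separator.
def parse_host_port(rest: str):
    """Parse the host[:port] part of a connection-URI fragment."""
    if rest.startswith("["):
        end = rest.find("]")
        if end < 0:
            raise ValueError(
                'end of string reached when looking for matching "]" '
                "in IPv6 host address"
            )
        host = rest[1:end]
        rest = rest[end + 1:]
        if rest and rest[0] not in ":/?":
            raise ValueError(f'unexpected character "{rest[0]}" after "]"')
    else:
        # Without brackets, the host ends at the first ':' (port)
        # or '/' (database name).
        n = len(rest)
        for i, ch in enumerate(rest):
            if ch in ":/":
                n = i
                break
        host, rest = rest[:n], rest[n:]
    port = None
    if rest.startswith(":"):
        sep = rest.find("/")
        port = rest[1:sep] if sep >= 0 else rest[1:]
    return host, port

assert parse_host_port("[::1]/postgres") == ("::1", None)
assert parse_host_port("localhost:5432/db") == ("localhost", "5432")
```

This is why the unbracketed `tcp:postgresql://::1/postgres` form fails: the first colon of `::1` is taken as the host/port separator.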
[
{
"msg_contents": "Hello,\n\nI noticed that the contrib/cube data type does not support binary\ninput/output handlers\nwhen I tried to dump a table with cube columns using a tool [*1] that\ntransfers binary data\nover libpq.\n\n$ pg2arrow -d postgres -t my_table\n../utils/pgsql_client.c:351 SQL execution failed: ERROR: no binary\noutput function available for type cube\n\nThis patch adds cube_send / cube_recv handlers to the contrib/cube data type.\nOnce this patch is applied, a libpq client can obtain the table\ndata using binary mode.\n\n$ pg2arrow -d postgres -t my_table\nNOTICE: -o, --output=FILENAME option was not given,\n so a temporary file '/tmp/CdC68Q.arrow' was built instead.\n\nThe internal layout of cube, a kind of varlena, is a leading 32-bit\nheader followed by a float8 array\n(the array size is embedded in the header field).\nSo, cube_send just writes the data stream according to the internal\nlayout, and cube_recv reconstructs\nthe values in reverse.\n\nBest regards,\n\n[*1] pg2arrow - a utility to convert a PostgreSQL table to Apache Arrow\nhttp://heterodb.github.io/pg-strom/arrow_fdw/#using-pg2arrow\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Wed, 24 Feb 2021 12:18:24 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "contrib/cube - binary input/output handlers"
},
{
"msg_contents": "On 24.02.21 04:18, Kohei KaiGai wrote:\n> This patch adds cube_send / cube_recv handlers on the contrib/cube data type.\n> Once this patch was applied to, the libpq client can obtain the table\n> data using binary mode.\n\nSeems reasonable. But you need to write an extension upgrade script and \nbump the extension version.\n\n\n",
"msg_date": "Wed, 3 Mar 2021 15:33:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 23:33, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 24.02.21 04:18, Kohei KaiGai wrote:\n> > This patch adds cube_send / cube_recv handlers on the contrib/cube data type.\n> > Once this patch was applied to, the libpq client can obtain the table\n> > data using binary mode.\n>\n> Seems reasonable. But you need to write an extension upgrade script and\n> bump the extension version.\n>\nThanks for your review.\n\nOne thing that is not straightforward is that a new definition of the cube type\nneeds to drop the old definition first, which leads to cascaded deletion of the\nobjects that depend on the \"cube\" type declared in cube--1.2.sql.\nDo you have any good ideas?\n\nIdea-1) Modify the system catalog carefully with UPDATE pg_type.\n This can avoid the cascaded deletion.\n\nIdea-2) Copy & paste all the declarations after CREATE TYPE in\ncube--1.2.sql into the new script, then create those objects again.\n\nBest regards,\n\n--\nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Thu, 4 Mar 2021 00:23:56 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> One thing not straightforward is that a new definition of cube type\n> needs to drop\n> the old definition once, then it leads cascaded deletion to the\n> objects that depends\n> on the \"cube\" type declared at the cube--1.2.sql.\n> Do you have any good ideas?\n\nYou can add the I/O functions with ALTER TYPE nowadays.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Mar 2021 10:28:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "I wrote:\n> You can add the I/O functions with ALTER TYPE nowadays.\n\nTo be concrete, see 949a9f043eb70a4986041b47513579f9a13d6a33\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Mar 2021 10:45:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "Thanks, the attached patch add cube--1.5 for binary send/recv functions and\nmodification of cube type using the new ALTER TYPE.\n\nBest regards,\n\n2021年3月4日(木) 0:45 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> I wrote:\n> > You can add the I/O functions with ALTER TYPE nowadays.\n>\n> To be concrete, see 949a9f043eb70a4986041b47513579f9a13d6a33\n>\n> regards, tom lane\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Sat, 6 Mar 2021 01:25:35 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> Thanks, the attached patch add cube--1.5 for binary send/recv functions and\n> modification of cube type using the new ALTER TYPE.\n\nHm, that was already superseded by events (112d411fb).\nAs long as we get this done for v14, we can just treat it\nas an add-on to cube 1.5, so here's a quick rebase onto HEAD.\n\nScanning the code, I have a couple of gripes. I'm not sure it's\na good plan to just send the \"header\" field raw like that ---\nwould breaking it into a dimension field and a point bool be\nbetter? In any case, the receive function has to be more careful\nthan this about accepting only valid header values.\n\nAlso, I don't think \"offsetof(NDBOX, x[nitems])\" is per project\nstyle. It at least used to be true that MSVC couldn't cope\nwith that, so we prefer\n\n\toffsetof(NDBOX, x) + nitems * sizeof(whatever)\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 05 Mar 2021 11:41:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "2021年3月6日(土) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Kohei KaiGai <kaigai@heterodb.com> writes:\n> > Thanks, the attached patch add cube--1.5 for binary send/recv functions and\n> > modification of cube type using the new ALTER TYPE.\n>\n> Hm, that was already superseded by events (112d411fb).\n> As long as we get this done for v14, we can just treat it\n> as an add-on to cube 1.5, so here's a quick rebase onto HEAD.\n>\nThanks for this revising.\n\n> Scanning the code, I have a couple of gripes. I'm not sure it's\n> a good plan to just send the \"header\" field raw like that ---\n> would breaking it into a dimension field and a point bool be\n> better? In any case, the receive function has to be more careful\n> than this about accepting only valid header values.\n>\nI have a different opinion here.\nDo we never reinterpret the unused header fields (bits 8-30) for another purpose\nin the future version?\nIf application saves the raw header field as-is, at least, it can keep\nthe header field\nwithout information loss.\nOn the other hand, if cube_send() individually sent num-of-dimension\nand point flag,\nan application (that is built for the current version) will drop the\nbit fields currently unused,\nbut the new version of server may reinterpret the field for something.\n\nOf course, it's better to have more careful validation at cuda_recv()\nwhen it received\nthe header field.\n\n> Also, I don't think \"offsetof(NDBOX, x[nitems])\" is per project\n> style. It at least used to be true that MSVC couldn't cope\n> with that, so we prefer\n>\n> offsetof(NDBOX, x) + nitems * sizeof(whatever)\n>\nOk, I'll fix it on the next patch.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Sat, 6 Mar 2021 10:42:47 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> 2021年3月6日(土) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n>> Scanning the code, I have a couple of gripes. I'm not sure it's\n>> a good plan to just send the \"header\" field raw like that ---\n>> would breaking it into a dimension field and a point bool be\n>> better? In any case, the receive function has to be more careful\n>> than this about accepting only valid header values.\n>> \n> I have a different opinion here.\n> Do we never reinterpret the unused header fields (bits 8-30) for another purpose\n> in the future version?\n\nRight, that's what to be concerned about.\n\nThe best way might be to send the header as-is, as you've done,\nbut for cube_recv to throw error if the reserved bits aren't\nall zero. That way we can't get into a situation where we\naren't sure what's in stored values. If we do expand the header\nin future, values should be forward compatible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Mar 2021 21:21:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "2021年3月6日(土) 11:21 Tom Lane <tgl@sss.pgh.pa.us>:\n>\n> Kohei KaiGai <kaigai@heterodb.com> writes:\n> > 2021年3月6日(土) 1:41 Tom Lane <tgl@sss.pgh.pa.us>:\n> >> Scanning the code, I have a couple of gripes. I'm not sure it's\n> >> a good plan to just send the \"header\" field raw like that ---\n> >> would breaking it into a dimension field and a point bool be\n> >> better? In any case, the receive function has to be more careful\n> >> than this about accepting only valid header values.\n> >>\n> > I have a different opinion here.\n> > Do we never reinterpret the unused header fields (bits 8-30) for another purpose\n> > in the future version?\n>\n> Right, that's what to be concerned about.\n>\n> The best way might be to send the header as-is, as you've done,\n> but for cube_recv to throw error if the reserved bits aren't\n> all zero. That way we can't get into a situation where we\n> aren't sure what's in stored values. If we do expand the header\n> in future, values should be forward compatible.\n>\nOk, the attached v4 sends the raw header as-is, then cube_recv\nvalidates the header.\nIf num-of-dimension is larger than CUBE_MAX_DIM, it is obviously\nunused bits (8-30)\nare used or out of the range.\n\nIt also changes the manner of offsetof() as you suggested.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Sat, 6 Mar 2021 12:19:45 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
},
{
"msg_contents": "Kohei KaiGai <kaigai@heterodb.com> writes:\n> Ok, the attached v4 sends the raw header as-is, then cube_recv\n> validates the header.\n> If num-of-dimension is larger than CUBE_MAX_DIM, it is obviously\n> unused bits (8-30)\n> are used or out of the range.\n\nWorks for me.\n\nI noted one additional bug: you have to condition whether to dump\nthe upper coordinates just on the IS_POINT flag, because that is\nall that cube_recv will see. The cube_is_point_internal() hack\ncan be used in some other places, but not here.\n\nAlso, as a matter of style, I didn't like that cube_send was using\nthe LL_COORD/UR_COORD abstraction but cube_recv wasn't. In the\nworst case (if someone tried to change that abstraction) this could\nturn into an actual bug, with cube_recv storing the coordinates in\nthe wrong order. Could have gone either way on which one to change\nto look like the other, but I chose to simplify cube_send to look\nlike cube_recv. This essentially means that we're locking the\nbinary representation to use the physical storage order of the\ncoordinates even if someone gets fancy about their meaning.\n\nPushed with those fixes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Mar 2021 12:11:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: contrib/cube - binary input/output handlers"
}
]
[
{
"msg_contents": "Hi,\n\nI've written this function to insert several rows at once, and noticed a certain postgresql overhead as you can see from the log file. A lot more data than the\nuser data is actually sent over the net. This has a certain noticeable impact on the user transmission speed.\n\nI noticed that a libpq query always has a number of arguments of the following form:\n\nOid\t\tparamt[cols]\t=\t{ 1082, 701, 701, 701, 701, 701, 20, 701 };\nint\t\tparaml[cols]\t=\t{ 4, 8, 8, 8, 8, 8, 8, 8 };\nint\t\tparamf[cols]\t=\t{ 1, 1, 1, 1, 1, 1, 1, 1 };\n\nresult = PQexecParams(psql_cnn, (char* ) &statement, 1, paramt, (const char** ) paramv, paraml, paramf, 1);\n\nI think the 'paramf' is completely redundant. The data mode, text or binary, is already specified in the last argument to 'PQexecParams' and does not have to be\nrepeated for every value. Am I correct?\n\nThanks,\nMischa Baars.",
"msg_date": "Wed, 24 Feb 2021 09:14:19 +0100",
"msg_from": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu>",
"msg_from_op": true,
"msg_subject": "Postgresql network transmission overhead"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 09:14:19AM +0100, Michael J. Baars wrote:\n> I've written this function to insert several rows at once, and noticed a certain postgresql overhead as you can see from the log file. A lot more data than the\n> user data is actually sent over the net. This has a certain noticeable impact on the user transmission speed.\n> \n> I noticed that a libpq query always has a number of arguments of the following form:\n> \n> Oid\t\tparamt[cols]\t=\t{ 1082, 701, 701, 701, 701, 701, 20, 701 };\n> int\t\tparaml[cols]\t=\t{ 4, 8, 8, 8, 8, 8, 8, 8 };\n> int\t\tparamf[cols]\t=\t{ 1, 1, 1, 1, 1, 1, 1, 1 };\n> \n> result = PQexecParams(psql_cnn, (char* ) &statement, 1, paramt, (const char** ) paramv, paraml, paramf, 1);\n> \n> I think the 'paramf' is completely redundant. The data mode, text or binary, is already specified in the last argument to 'PQexecParams' and does not have to be\n> repeated for every value. Am I correct?\n\nThe last argument is the *result* format.\nThe array is for the format of the *input* bind parameters.\n\nRegarding the redundancy:\n\nhttps://www.postgresql.org/docs/current/libpq-exec.html\n|nParams\n| The number of parameters supplied; it is the length of the arrays paramTypes[], paramValues[], paramLengths[], and paramFormats[]. (The array pointers can be NULL when nParams is zero.)\n|paramTypes[]\n| Specifies, by OID, the data types to be assigned to the parameter symbols. If paramTypes is NULL, or any particular element in the array is zero, the server infers a data type for the parameter symbol in the same way it would do for an untyped literal string.\n|paramValues[]\n| ...\n|paramLengths[]\n| Specifies the actual data lengths of binary-format parameters. It is ignored for null parameters and text-format parameters. 
The array pointer can be null when there are no binary parameters.\n|paramFormats[]\n| Specifies whether parameters are text (put a zero in the array entry for the corresponding parameter) or binary (put a one in the array entry for the corresponding parameter). If the array pointer is null then all parameters are presumed to be text strings.\n\nnParams specifies the length of the arrays: if you pass an array of length\ngreater than nParams, then the rest of the array is being ignored.\n\nYou don't *have* to specify Types, and Lengths and Formats can be specified as\nNULL for text format params.\n\n> semi-prepared\n\nWhat does semi-prepared mean ?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Feb 2021 19:18:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql network transmission overhead"
},
{
"msg_contents": "On Wed, 2021-02-24 at 19:18 -0600, Justin Pryzby wrote:\n> On Wed, Feb 24, 2021 at 09:14:19AM +0100, Michael J. Baars wrote:\n> > I've written this function to insert several rows at once, and noticed a certain postgresql overhead as you can see from the log file. A lot more data than\n> > the\n> > user data is actually sent over the net. This has a certain noticeable impact on the user transmission speed.\n> > \n> > I noticed that a libpq query always has a number of arguments of the following form:\n> > \n> > Oid\t\tparamt[cols]\t=\t{ 1082, 701, 701, 701, 701, 701, 20, 701 };\n> > int\t\tparaml[cols]\t=\t{ 4, 8, 8, 8, 8, 8, 8, 8 };\n> > int\t\tparamf[cols]\t=\t{ 1, 1, 1, 1, 1, 1, 1, 1 };\n> > \n> > result = PQexecParams(psql_cnn, (char* ) &statement, 1, paramt, (const char** ) paramv, paraml, paramf, 1);\n> > \n> > I think the 'paramf' is completely redundant. The data mode, text or binary, is already specified in the last argument to 'PQexecParams' and does not have\n> > to be\n> > repeated for every value. Am I correct?\n> \n> The last argument is the *result* format.\n> The array is for the format of the *input* bind parameters.\n> \n\nYes, but we are reading from and writing to the same table here. Why specify different formats for the input and the output exactly?\n\nIs this vector being sent over the network?\n\nIn the logfile you can see that the effective user data being written is only 913kb, while the actual being transmitted over the network is 7946kb when writing\none row at a time. That is an overhead of 770%!\n\n> Regarding the redundancy:\n> \n> https://www.postgresql.org/docs/current/libpq-exec.html\n> > nParams\n> > The number of parameters supplied; it is the length of the arrays paramTypes[], paramValues[], paramLengths[], and paramFormats[]. (The array pointers\n> > can be NULL when nParams is zero.)\n> > paramTypes[]\n> > Specifies, by OID, the data types to be assigned to the parameter symbols. 
If paramTypes is NULL, or any particular element in the array is zero, the\n> > server infers a data type for the parameter symbol in the same way it would do for an untyped literal string.\n> > paramValues[]\n> > ...\n> > paramLengths[]\n> > Specifies the actual data lengths of binary-format parameters. It is ignored for null parameters and text-format parameters. The array pointer can be\n> > null when there are no binary parameters.\n> > paramFormats[]\n> > Specifies whether parameters are text (put a zero in the array entry for the corresponding parameter) or binary (put a one in the array entry for the\n> > corresponding parameter). If the array pointer is null then all parameters are presumed to be text strings.\n> \n> nParams specifies the length of the arrays: if you pass an array of length\n> greater than nParams, then the rest of the array is being ignored.\n> \n> You don't *have* to specify Types, and Lengths and Formats can be specified as\n> NULL for text format params.\n> \n> > semi-prepared\n> \n> What does semi-prepared mean ?\n> \n\nI'm writing a total of 4096+ rows of each n columns to this table. Some preparation is in order such that the timer can be placed around the actual network\ntransmission only, preparing the statement, i.e. the string of input arguments and the input data structures, is being done before the timer is started.\n\nAlso I noticed that when more columns are used, prepared statements with too many rows * columns cannot be loaded into postgresql, probably because the size of\nthe prepared statements are limited to a certain size in memory. It does not return an error of the sorts from PQprepare, only during execution of the\nstatement.\n\nThanks,\nMischa.\n\n\n\n",
"msg_date": "Fri, 26 Feb 2021 07:41:15 +0100",
"msg_from": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql network transmission overhead"
},
{
"msg_contents": "\"Michael J. Baars\" <mjbaars1977.pgsql-hackers@cyberfiber.eu> writes:\n> In the logfile you can see that the effective user data being written is only 913kb, while the actual being transmitted over the network is 7946kb when writing\n> one row at a time. That is an overhead of 770%!\n\nSo ... don't write one row at a time.\n\nYou haven't shown any details, but I imagine that most of the overhead\ncomes from per-query stuff like the RowDescription metadata. The intended\nusage pattern for bulk operations is that there's only one RowDescription\nmessage for a whole lot of data rows. There might be reasons you want to\nwork a row at a time, but if your concern is to minimize network traffic,\ndon't do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Feb 2021 10:11:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql network transmission overhead"
}
]
[
{
"msg_contents": "Hi,\n\nThe documentation describes how a return code > 125 on the restore_command\nwould prevent the server from starting [1] :\n\n\"\nIt is important that the command return nonzero exit status on failure. The\ncommand *will* be called requesting files that are not present in the\narchive; it must return nonzero when so asked. This is not an error\ncondition. An exception is that if the command was terminated by a signal\n(other than SIGTERM, which is used as part of a database server shutdown)\nor an error by the shell (such as command not found), then recovery will\nabort and the server will not start up.\n\"\n\nBut, I dont see such a note on the archive_command side of thing. [2]\n\nIt could happend in case the archive command is not checked beforehand or\nif the archive command becomes unavailable while PostgreSQL is running.\nrsync can also return 255 in some cases (bad ssh configuration or typos).\nIn this case a fatal error is emitted, the archiver stops and is restarted\nby the postmaster.\n\nThe view pg_stat_archiver is also not updated in this case. Is it on\npurpose ? It could be problematic if someone uses it to check the archiver\nprocess health.\n\nShould we document this ? (I can make a patch)\n\nregards,\nBenoit\n\n[1]\nhttps://www.postgresql.org/docs/13/continuous-archiving.html#BACKUP-PITR-RECOVERY\n[2]\nhttps://www.postgresql.org/docs/13/continuous-archiving.html#BACKUP-ARCHIVING-WAL\n\nHi,The documentation describes how a return code > 125 on the restore_command would prevent the server from starting [1] :\"It is important that the command return nonzero exit status on failure. The command will\n be called requesting files that are not present in the archive; it must\n return nonzero when so asked. This is not an error condition. 
An \nexception is that if the command was terminated by a signal (other than SIGTERM,\n which is used as part of a database server shutdown) or an error by the\n shell (such as command not found), then recovery will abort and the \nserver will not start up.\"But, I dont see such a note on the archive_command side of thing. [2]It could happend in case the archive command is not checked beforehand or if the archive command becomes unavailable while PostgreSQL is running. rsync can also return 255 in some cases (bad ssh configuration or typos). In this case a fatal error is emitted, the archiver stops and is restarted by the postmaster.The view pg_stat_archiver is also not updated in this case. Is it on purpose ? It could be problematic if someone uses it to check the archiver process health.Should we document this ? (I can make a patch)regards,Benoit[1] https://www.postgresql.org/docs/13/continuous-archiving.html#BACKUP-PITR-RECOVERY[2] https://www.postgresql.org/docs/13/continuous-archiving.html#BACKUP-ARCHIVING-WAL",
"msg_date": "Wed, 24 Feb 2021 13:20:43 +0100",
"msg_from": "talk to ben <blo.talkto@gmail.com>",
"msg_from_op": true,
"msg_subject": "archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 24, 2021 at 8:21 PM talk to ben <blo.talkto@gmail.com> wrote:\n>\n> The documentation describes how a return code > 125 on the restore_command would prevent the server from starting [1] :\n>\n> \"\n> It is important that the command return nonzero exit status on failure. The command will be called requesting files that are not present in the archive; it must return nonzero when so asked. This is not an error condition. An exception is that if the command was terminated by a signal (other than SIGTERM, which is used as part of a database server shutdown) or an error by the shell (such as command not found), then recovery will abort and the server will not start up.\n> \"\n>\n> But, I dont see such a note on the archive_command side of thing. [2]\n>\n> It could happend in case the archive command is not checked beforehand or if the archive command becomes unavailable while PostgreSQL is running. rsync can also return 255 in some cases (bad ssh configuration or typos). In this case a fatal error is emitted, the archiver stops and is restarted by the postmaster.\n>\n> The view pg_stat_archiver is also not updated in this case. Is it on purpose ? It could be problematic if someone uses it to check the archiver process health.\n\nThat's on purpose, see for instance that discussion:\nhttps://www.postgresql.org/message-id/flat/55731BB8.1050605%40dalibo.com\n\n> Should we document this ? (I can make a patch)\n\nI thought that this behavior was documented, especially for the lack\nof update of pg_stat_archiver. If it's not the case then we should\ndefinitely fix that!\n\n\n",
"msg_date": "Wed, 24 Feb 2021 21:52:06 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "Le mer. 24 févr. 2021 à 14:52, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> Hi,\n>\n> On Wed, Feb 24, 2021 at 8:21 PM talk to ben <blo.talkto@gmail.com> wrote:\n> >\n> > The documentation describes how a return code > 125 on the\n> restore_command would prevent the server from starting [1] :\n> >\n> > \"\n> > It is important that the command return nonzero exit status on failure.\n> The command will be called requesting files that are not present in the\n> archive; it must return nonzero when so asked. This is not an error\n> condition. An exception is that if the command was terminated by a signal\n> (other than SIGTERM, which is used as part of a database server shutdown)\n> or an error by the shell (such as command not found), then recovery will\n> abort and the server will not start up.\n> > \"\n> >\n> > But, I dont see such a note on the archive_command side of thing. [2]\n> >\n> > It could happend in case the archive command is not checked beforehand\n> or if the archive command becomes unavailable while PostgreSQL is running.\n> rsync can also return 255 in some cases (bad ssh configuration or typos).\n> In this case a fatal error is emitted, the archiver stops and is restarted\n> by the postmaster.\n> >\n> > The view pg_stat_archiver is also not updated in this case. Is it on\n> purpose ? It could be problematic if someone uses it to check the archiver\n> process health.\n>\n> That's on purpose, see for instance that discussion:\n> https://www.postgresql.org/message-id/flat/55731BB8.1050605%40dalibo.com\n>\n\nThanks for pointing that out, I should have checked.\n\n\n> > Should we document this ? (I can make a patch)\n>\n> I thought that this behavior was documented, especially for the lack\n> of update of pg_stat_archiver. If it's not the case then we should\n> definitely fix that!\n>\n\nI tried to do it in the attached patch.\nBuilding the doc worked fine on my computer.",
"msg_date": "Thu, 25 Feb 2021 12:24:57 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Thu, Feb 25, 2021 at 7:25 PM Benoit Lobréau <benoit.lobreau@gmail.com> wrote:\n>\n> Le mer. 24 févr. 2021 à 14:52, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n>>\n>> I thought that this behavior was documented, especially for the lack\n>> of update of pg_stat_archiver. If it's not the case then we should\n>> definitely fix that!\n>\n> I tried to do it in the attached patch.\n> Building the doc worked fine on my computer.\n\nGreat, thanks! Can you register it in the next commitfest to make\nsure it won't be forgotten?\n\n\n",
"msg_date": "Thu, 25 Feb 2021 22:35:06 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "Done here : https://commitfest.postgresql.org/32/3012/\n\nLe jeu. 25 févr. 2021 à 15:34, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> On Thu, Feb 25, 2021 at 7:25 PM Benoit Lobréau <benoit.lobreau@gmail.com>\n> wrote:\n> >\n> > Le mer. 24 févr. 2021 à 14:52, Julien Rouhaud <rjuju123@gmail.com> a\n> écrit :\n> >>\n> >> I thought that this behavior was documented, especially for the lack\n> >> of update of pg_stat_archiver. If it's not the case then we should\n> >> definitely fix that!\n> >\n> > I tried to do it in the attached patch.\n> > Building the doc worked fine on my computer.\n>\n> Great, thanks! Can you register it in the next commitfest to make\n> sure it won't be forgotten?\n>\n\nDone here : https://commitfest.postgresql.org/32/3012/Le jeu. 25 févr. 2021 à 15:34, Julien Rouhaud <rjuju123@gmail.com> a écrit :On Thu, Feb 25, 2021 at 7:25 PM Benoit Lobréau <benoit.lobreau@gmail.com> wrote:\n>\n> Le mer. 24 févr. 2021 à 14:52, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n>>\n>> I thought that this behavior was documented, especially for the lack\n>> of update of pg_stat_archiver. If it's not the case then we should\n>> definitely fix that!\n>\n> I tried to do it in the attached patch.\n> Building the doc worked fine on my computer.\n\nGreat, thanks! Can you register it in the next commitfest to make\nsure it won't be forgotten?",
"msg_date": "Fri, 26 Feb 2021 10:03:05 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n> Done here : https://commitfest.postgresql.org/32/3012/\n\nDocumenting that properly for the archive command, as already done for\nrestore_command, sounds good to me. I am not sure that there is much\npoint in doing a cross-reference to the archiving section for one\nspecific field of pg_stat_archiver.\n\nFor the second paragraph, I would recommend to move that to a\ndifferent <para> to outline this special case, leading to the\nattached.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Mon, 1 Mar 2021 16:36:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 3:36 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n> > Done here : https://commitfest.postgresql.org/32/3012/\n>\n> Documenting that properly for the archive command, as already done for\n> restore_command, sounds good to me. I am not sure that there is much\n> point in doing a cross-reference to the archiving section for one\n> specific field of pg_stat_archiver.\n\nAgreed.\n\n> For the second paragraph, I would recommend to move that to a\n> different <para> to outline this special case, leading to the\n> attached.\n\n+1\n\n> What do you think?\n\nLGTM!\n\n\n",
"msg_date": "Mon, 1 Mar 2021 15:55:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "Le lun. 1 mars 2021 à 08:36, Michael Paquier <michael@paquier.xyz> a écrit :\n\n> On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n> > Done here : https://commitfest.postgresql.org/32/3012/\n>\n> Documenting that properly for the archive command, as already done for\n> restore_command, sounds good to me. I am not sure that there is much\n> point in doing a cross-reference to the archiving section for one\n> specific field of pg_stat_archiver.\n>\n\nI wanted to add a warning that using pg_stat_archiver to monitor the good\nhealth of the\narchiver comes with a caveat in the view documentation itself. But couldn't\nfind a concise\nway to do it. So I added a link.\n\nIf you think it's unnecessary, that's ok.\n\n\n> For the second paragraph, I would recommend to move that to a\n> different <para> to outline this special case, leading to the\n> attached.\n>\n\nGood.\n\nLe lun. 1 mars 2021 à 08:36, Michael Paquier <michael@paquier.xyz> a écrit :On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n> Done here : https://commitfest.postgresql.org/32/3012/\n\nDocumenting that properly for the archive command, as already done for\nrestore_command, sounds good to me. I am not sure that there is much\npoint in doing a cross-reference to the archiving section for one\nspecific field of pg_stat_archiver.I wanted to add a warning that using pg_stat_archiver to monitor the good health of the archiver comes with a caveat in the view documentation itself. But couldn't find a conciseway to do it. So I added a link.If you think it's unnecessary, that's ok. \nFor the second paragraph, I would recommend to move that to a\ndifferent <para> to outline this special case, leading to the\nattached.Good.",
"msg_date": "Mon, 1 Mar 2021 09:33:48 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 4:33 PM Benoit Lobréau <benoit.lobreau@gmail.com> wrote:\n>\n> Le lun. 1 mars 2021 à 08:36, Michael Paquier <michael@paquier.xyz> a écrit :\n>>\n>> On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n>> > Done here : https://commitfest.postgresql.org/32/3012/\n>>\n>> Documenting that properly for the archive command, as already done for\n>> restore_command, sounds good to me. I am not sure that there is much\n>> point in doing a cross-reference to the archiving section for one\n>> specific field of pg_stat_archiver.\n>\n>\n> I wanted to add a warning that using pg_stat_archiver to monitor the good health of the\n> archiver comes with a caveat in the view documentation itself. But couldn't find a concise\n> way to do it. So I added a link.\n>\n> If you think it's unnecessary, that's ok.\n\nMaybe this can be better addressed than with a link in the\ndocumentation. The final outcome is that it can be difficult to\nmonitor the archiver state in such case. That's orthogonal to this\npatch but maybe we can add a new \"archiver_start\" timestamptz column\nin pg_stat_archiver, so monitoring tools can detect a problem if it's\ntoo far away from pg_postmaster_start_time() for instance?\n\n\n",
"msg_date": "Mon, 1 Mar 2021 17:17:06 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "I like the idea !\n\nIf it's not too complicated, I'd like to take a stab at it.\n\nLe lun. 1 mars 2021 à 10:16, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> On Mon, Mar 1, 2021 at 4:33 PM Benoit Lobréau <benoit.lobreau@gmail.com>\n> wrote:\n> >\n> > Le lun. 1 mars 2021 à 08:36, Michael Paquier <michael@paquier.xyz> a\n> écrit :\n> >>\n> >> On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n> >> > Done here : https://commitfest.postgresql.org/32/3012/\n> >>\n> >> Documenting that properly for the archive command, as already done for\n> >> restore_command, sounds good to me. I am not sure that there is much\n> >> point in doing a cross-reference to the archiving section for one\n> >> specific field of pg_stat_archiver.\n> >\n> >\n> > I wanted to add a warning that using pg_stat_archiver to monitor the\n> good health of the\n> > archiver comes with a caveat in the view documentation itself. But\n> couldn't find a concise\n> > way to do it. So I added a link.\n> >\n> > If you think it's unnecessary, that's ok.\n>\n> Maybe this can be better addressed than with a link in the\n> documentation. The final outcome is that it can be difficult to\n> monitor the archiver state in such case. That's orthogonal to this\n> patch but maybe we can add a new \"archiver_start\" timestamptz column\n> in pg_stat_archiver, so monitoring tools can detect a problem if it's\n> too far away from pg_postmaster_start_time() for instance?\n>\n\nI like the idea !If it's not too complicated, I'd like to take a stab at it.Le lun. 1 mars 2021 à 10:16, Julien Rouhaud <rjuju123@gmail.com> a écrit :On Mon, Mar 1, 2021 at 4:33 PM Benoit Lobréau <benoit.lobreau@gmail.com> wrote:\n>\n> Le lun. 
1 mars 2021 à 08:36, Michael Paquier <michael@paquier.xyz> a écrit :\n>>\n>> On Fri, Feb 26, 2021 at 10:03:05AM +0100, Benoit Lobréau wrote:\n>> > Done here : https://commitfest.postgresql.org/32/3012/\n>>\n>> Documenting that properly for the archive command, as already done for\n>> restore_command, sounds good to me. I am not sure that there is much\n>> point in doing a cross-reference to the archiving section for one\n>> specific field of pg_stat_archiver.\n>\n>\n> I wanted to add a warning that using pg_stat_archiver to monitor the good health of the\n> archiver comes with a caveat in the view documentation itself. But couldn't find a concise\n> way to do it. So I added a link.\n>\n> If you think it's unnecessary, that's ok.\n\nMaybe this can be better addressed than with a link in the\ndocumentation. The final outcome is that it can be difficult to\nmonitor the archiver state in such case. That's orthogonal to this\npatch but maybe we can add a new \"archiver_start\" timestamptz column\nin pg_stat_archiver, so monitoring tools can detect a problem if it's\ntoo far away from pg_postmaster_start_time() for instance?",
"msg_date": "Mon, 1 Mar 2021 10:24:06 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 5:24 PM Benoit Lobréau <benoit.lobreau@gmail.com> wrote:\n>\n> I like the idea !\n>\n> If it's not too complicated, I'd like to take a stab at it.\n\nGreat! And it shouldn't be too complicated. Note that unfortunately\nthis will likely not be included in pg14 as the last commitfest should\nbegin today.\n\n\n",
"msg_date": "Mon, 1 Mar 2021 17:33:24 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 05:17:06PM +0800, Julien Rouhaud wrote:\n> Maybe this can be better addressed than with a link in the\n> documentation. The final outcome is that it can be difficult to\n> monitor the archiver state in such case. That's orthogonal to this\n> patch but maybe we can add a new \"archiver_start\" timestamptz column\n> in pg_stat_archiver, so monitoring tools can detect a problem if it's\n> too far away from pg_postmaster_start_time() for instance?\n\nThere may be other solutions as well. I have applied the doc patch\nfor now.\n--\nMichael",
"msg_date": "Tue, 2 Mar 2021 10:29:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 9:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 01, 2021 at 05:17:06PM +0800, Julien Rouhaud wrote:\n> > Maybe this can be better addressed than with a link in the\n> > documentation. The final outcome is that it can be difficult to\n> > monitor the archiver state in such case. That's orthogonal to this\n> > patch but maybe we can add a new \"archiver_start\" timestamptz column\n> > in pg_stat_archiver, so monitoring tools can detect a problem if it's\n> > too far away from pg_postmaster_start_time() for instance?\n>\n> There may be other solutions as well. I have applied the doc patch\n> for now.\n\nThanks!\n\n\n",
"msg_date": "Tue, 2 Mar 2021 11:10:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "Thanks !\n\nLe mar. 2 mars 2021 à 04:10, Julien Rouhaud <rjuju123@gmail.com> a écrit :\n\n> On Tue, Mar 2, 2021 at 9:29 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > On Mon, Mar 01, 2021 at 05:17:06PM +0800, Julien Rouhaud wrote:\n> > > Maybe this can be better addressed than with a link in the\n> > > documentation. The final outcome is that it can be difficult to\n> > > monitor the archiver state in such case. That's orthogonal to this\n> > > patch but maybe we can add a new \"archiver_start\" timestamptz column\n> > > in pg_stat_archiver, so monitoring tools can detect a problem if it's\n> > > too far away from pg_postmaster_start_time() for instance?\n> >\n> > There may be other solutions as well. I have applied the doc patch\n> > for now.\n>\n> Thanks!\n>\n\nThanks !Le mar. 2 mars 2021 à 04:10, Julien Rouhaud <rjuju123@gmail.com> a écrit :On Tue, Mar 2, 2021 at 9:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 01, 2021 at 05:17:06PM +0800, Julien Rouhaud wrote:\n> > Maybe this can be better addressed than with a link in the\n> > documentation. The final outcome is that it can be difficult to\n> > monitor the archiver state in such case. That's orthogonal to this\n> > patch but maybe we can add a new \"archiver_start\" timestamptz column\n> > in pg_stat_archiver, so monitoring tools can detect a problem if it's\n> > too far away from pg_postmaster_start_time() for instance?\n>\n> There may be other solutions as well. I have applied the doc patch\n> for now.\n\nThanks!",
"msg_date": "Tue, 2 Mar 2021 09:07:46 +0100",
"msg_from": "=?UTF-8?Q?Benoit_Lobr=C3=A9au?= <benoit.lobreau@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On 3/1/21 8:29 PM, Michael Paquier wrote:\n> On Mon, Mar 01, 2021 at 05:17:06PM +0800, Julien Rouhaud wrote:\n>> Maybe this can be better addressed than with a link in the\n>> documentation. The final outcome is that it can be difficult to\n>> monitor the archiver state in such case. That's orthogonal to this\n>> patch but maybe we can add a new \"archiver_start\" timestamptz column\n>> in pg_stat_archiver, so monitoring tools can detect a problem if it's\n>> too far away from pg_postmaster_start_time() for instance?\n> \n> There may be other solutions as well. I have applied the doc patch\n> for now.\n\nThis was applied (except for a small part). Should we now consider this \ncommitted?\n\nIf not, can we get a new patch for the remaining changes?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 07:37:02 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 07:37:02AM -0500, David Steele wrote:\n> On 3/1/21 8:29 PM, Michael Paquier wrote:\n> > On Mon, Mar 01, 2021 at 05:17:06PM +0800, Julien Rouhaud wrote:\n> > > Maybe this can be better addressed than with a link in the\n> > > documentation. The final outcome is that it can be difficult to\n> > > monitor the archiver state in such case. That's orthogonal to this\n> > > patch but maybe we can add a new \"archiver_start\" timestamptz column\n> > > in pg_stat_archiver, so monitoring tools can detect a problem if it's\n> > > too far away from pg_postmaster_start_time() for instance?\n> > \n> > There may be other solutions as well. I have applied the doc patch\n> > for now.\n> \n> This was applied (except for a small part). Should we now consider this\n> committed?\n> \n\nI think that we should consider this as committed.\n\n\n",
"msg_date": "Wed, 3 Mar 2021 21:13:09 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 09:13:09PM +0800, Julien Rouhaud wrote:\n> I think that we should consider this as committed.\n\nIt should, so done now.\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 11:21:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: archive_command / pg_stat_archiver & documentation"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nRecently we found a table that was slowly, but consistently increasing in size. The table has a low fill-factor set and was updated very frequently. As expected, almost all updates are HOT updates, but for some of the non-HOT updates it always wanted to use a new page, rather than reuse an existing empty page. This led to a steady growth in table size (and a steady growth in the number of empty pages in the table).\n\nI've managed to create a very simple reproducing example that shows the problem (original problem occurred on 12.4, but I've tested this example on latest master). It only occurs for updates where the new tuple is larger than the size of what \"fillfactor\" would normally allow. In real life, this would only be a very small portion of the updates to a certain table of course, but in this example every update will be this large.\n\nCreate a table with a low fill-factor and insert one row into it. Note that, in this case, the row that we're inserting is by itself larger than the \"max fill factor space\".\n\ncreate table t1 (a int primary key, b text) with (fillfactor=10);\ninsert into t1 select 1, (select string_agg('1', '') from generate_series(1,1000)); -- 1000 byte text field\nvacuum t1;\n\npostgres=# select * from pg_freespace('t1');\nblkno | avail\n-------+-------\n 0 | 7104\n(1 row)\n\nThis looks alright - there's 1 page and the available space is indeed roughly 1000 bytes less, because of our tuple and page header.\n\nNow, in a different backend, initiate a longer query.\n\nselect pg_sleep(600); -- just sleep 600 seconds so that we have enough time to do some updates during this\n\nThen, in the original backend, update the tuple 7 times.\n\n-- execute this 7 times\nupdate t1 set b=(select string_agg((random()*9)::int::text, '') from generate_series(1,1000)) where a=1;\n\nCancel the pg_sleep call.\nThen execute\n\nvacuum t1; -- cleans rows and updates the fsm\n\npostgres=# select * from pg_freespace('t1');\nblkno | 
avail\n-------+-------\n 0 | 8128\n 1 | 7104\n(2 rows)\n\nThis still looks OK. There's an extra page, because a total of 8 tuples needed to be kept alive for the pg_sleep query. These didn't fit on one page, so a new page was created.\n\nNow, repeat it (the pg_sleep, update 7 times, cancel the pg_sleep and vacuum).\n\npostgres=# select * from pg_freespace('t1');\nblkno | avail\n-------+-------\n 0 | 8128\n 1 | 8128\n 2 | 7104\n(3 rows)\n\nThis does not look good anymore. The tuple was on page 1, so at first there were several HOT updates on page 1. Then, when page 1 was full, it needed to search for another page to put the tuple. It did not consider page 0, but instead decided to create a new page 2.\n\nRepeating this process would create a new page each time, never reusing the empty old pages.\n\nThe reason it does not consider page 0 is because of this piece of code in function RelationGetBufferForTuple in hio.c:\n\n /* Compute desired extra freespace due to fillfactor option */\n saveFreeSpace = RelationGetTargetPageFreeSpace(relation, HEAP_DEFAULT_FILLFACTOR);\n...\n if (len + saveFreeSpace > MaxHeapTupleSize)\n {\n /* can't fit, don't bother asking FSM */\n targetBlock = InvalidBlockNumber;\n use_fsm = false;\n }\n\nThe problem here is two-folded: for any non-HOT update of a tuple that's larger than the size of the fillfactor, the fsm will not be used, but instead a new page will be chosen.\nThis seems to rely on the false assumption that every existing page has at least one tuple on it.\nSecondly, and this is a bit trickier.. Even if we would ask the FSM to come up with a free page with a free size of \"MaxHeapTupleSize\", it wouldn't find anything... This is because the FSM tracks free space excluding any unused line pointers. 
In this example, if we look at block 0:\n\npostgres=# select * from page_header(get_raw_page('t1', 0));\n lsn | checksum | flags | lower | upper | special | pagesize | version | prune_xid\n-----------+----------+-------+-------+-------+---------+----------+---------+-----------\n0/16D75A0 | 0 | 5 | 52 | 8192 | 8192 | 8192 | 4 | 0\n(1 row)\n\npostgres=# select * from heap_page_items(get_raw_page('t1', 0));\nlp | lp_off | lp_flags | lp_len | t_xmin | t_xmax | t_field3 | t_ctid | t_infomask2 | t_infomask | t_hoff | t_bits | t_oid | t_data\n----+--------+----------+--------+--------+--------+----------+--------+-------------+------------+--------+--------+-------+--------\n 1 | 0 | 0 | 0 | | | | | | | | | |\n 2 | 0 | 0 | 0 | | | | | | | | | |\n 3 | 0 | 0 | 0 | | | | | | | | | |\n 4 | 0 | 0 | 0 | | | | | | | | | |\n 5 | 0 | 0 | 0 | | | | | | | | | |\n 6 | 0 | 0 | 0 | | | | | | | | | |\n 7 | 0 | 0 | 0 | | | | | | | | | |\n(7 rows)\n\nThere are 7 line pointers on this page, consuming 28 bytes. Plus the 24 byte header, that means that lower=52. However, all line pointers are unused, so the page really is empty. The FSM does not see the page as empty though, as it only looks at \"upper-lower\".\n\nWhen asking the FSM for slightly less space (MaxHeapTupleSize - 50 for example), it does find the free pages. I've confirmed that with such a hack the table is not growing indefinitely anymore. However, this number 50 is rather arbitrary obviously, as it depends on the number of unused line items on a page, so that's not a proper way to fix things.\n\nIn any case, the behavior feels like a bug to me, but I don't know what the best way would be to fix it. Thoughts?\n\n-Floris",
"msg_date": "Wed, 24 Feb 2021 14:44:30 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 10:44 AM Floris Van Nee <florisvannee@optiver.com>\nwrote:\n>\n> The problem here is two-folded: for any non-HOT update of a tuple that’s\nlarger than the size of the fillfactor, the fsm will not be used, but\ninstead a new page will be chosen.\n\nI confirmed this not only non-HOT updates, but regular inserts, which are\nthe same thing in this context.\n\n> This seems to rely on the false assumption that every existing page has\nat last one tuple on it.\n\nYep.\n\n> Secondly, and this is a bit trickier.. Even if we would ask the FSM to\ncome up with a free page with a free size of “MaxHeapTupleSize”, it\nwouldn’t find anything… This is, because the FSM tracks free space\nexcluding any unused line pointers.\n\n> There are 7 line pointers on this page, consuming 28 bytes. Plus the 24\nbyte header, that means that lower=52. However, all line pointers are\nunused, so the page really is empty. The FSM does not see the page as empty\nthough, as it only looks at “upper-lower”.\n>\n>\n>\n> When asking the FSM for slightly less space (MaxHeapTupleSize – 50 for\nexample), it does find the free pages. I’ve confirmed that with such a hack\nthe table is not growing indefinitely anymore. However, this number 50 is\nrather arbitrary obviously, as it depends on the number of unused line\nitems on a page, so that’s not a proper way to fix things.\n>\n>\n>\n> In any case, the behavior feels like a bug to me, but I don’t know what\nthe best way would be to fix it. Thoughts?\n\nOne idea is to take your -50 idea and make it more general and safe, by\nscaling the fudge factor based on fillfactor, such that if fillfactor is\nless than 100, the requested freespace is a bit smaller than the max. It's\nstill a bit of a hack, though. I've attached a draft of this idea.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Feb 2021 15:35:18 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "Hi John,\r\n\r\n\r\n\r\n> One idea is to take your -50 idea and make it more general and safe, by scaling the fudge factor based on fillfactor, such that if fillfactor is less than 100, the requested freespace is a bit smaller than the max. It's still a bit of a hack, though. I've attached a draft of this idea.\r\n\r\n\r\n\r\nYou’re right, that’d work better. Though, one thing I'd forgot to mention earlier is that in the \"wild\" where this occurred, the UPDATEs with these large tuple sizes are much rarer than UPDATEs with a much smaller tuple size. So this means that in reality, when this happens, the empty pages contain more unused line pointers and we’d need to subtract more bytes in order to find those pages in the fsm.\r\n\r\n\r\n\r\nThis is the (partial) output of pg_freespace function, bucketed by free space, for a real-life table with fillfactor=10 under the mixed load that I've described.\r\n\r\n│ free │ count │\r\n\r\n│ 7750 │ 2003 │\r\n\r\n│ 7800 │ 7113 │\r\n\r\n│ 7850 │ 1781 │\r\n\r\n│ 7900 │ 6803 │\r\n\r\n│ 7950 │ 13643 │\r\n\r\n│ 8000 │ 64779 │\r\n\r\n│ 8050 │ 2469665 │\r\n\r\n│ 8100 │ 61869 │\r\n\r\n└──────┴─────────┘\r\n\r\n(163 rows)\r\n\r\n\r\n\r\nThe ‘free’ column is the bucket where the value is the lower limit. So, free=7500 means between 7500-7549 bytes free, and count is the number of pages that have this amount free according to the fsm.\r\n\r\nIn this case, the vast majority has between 8050-8099 bytes free according to the FSM. That means that, for this particular case, for a fillfactor of 10, we’d need to subtract ~120 bytes or so in order to properly recycle the pages.\r\n\r\n\r\n\r\n-Floris\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nHi John,\n \n> One idea is to take your -50 idea and make it more general and safe, by scaling the fudge factor based on fillfactor, such that if fillfactor is less than 100, the requested freespace is a bit smaller than the max. It's still a bit\r\n of a hack, though. 
I've attached a draft of this idea.\n \nYou’re right, that’d work better. Though, one thing I'd forgot to mention earlier is that in the \"wild\" where this occurred, the UPDATEs with these large tuple sizes are much rarer than UPDATEs with a much smaller tuple size. So this\r\n means that in reality, when this happens, the empty pages contain more unused line pointers and we’d need to subtract more bytes in order to find those pages in the fsm.\n \nThis is the (partial) output of pg_freespace function, bucketed by free space, for a real-life table with fillfactor=10 under the mixed load that I've described.\n\n│ free │ count │\n│ 7750 │ 2003 │\n│ 7800 │ 7113 │\n│ 7850 │ 1781 │\n│ 7900 │ 6803 │\n│ 7950 │ 13643 │\n│ 8000 │ 64779 │\n│ 8050 │ 2469665 │\n│ 8100 │ 61869 │\n└──────┴─────────┘\n(163 rows)\n \nThe ‘free’ column is the bucket where the value is the lower limit. So, free=7500 means between 7500-7549 bytes free, and count is the number of pages that have this amount free according to the fsm.\nIn this case, the vast majority has between 8050-8099 bytes free according to the FSM. That means that, for this particular case, for a fillfactor of 10, we’d need to subtract ~120 bytes or so in order to properly recycle the pages.\n \n-Floris",
"msg_date": "Wed, 24 Feb 2021 20:10:23 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "> In this case, the vast majority has between 8050-8099 bytes free according to the FSM. That means that, for this particular case, for a fillfactor of 10, we’d need to subtract ~120 bytes or so in order to properly recycle the pages.\r\n\r\nAlso, I think this \"fudge\" factor would need to be defined as a percentage of the page size as well. 100 bytes on an 8kB page is quite different than 100 bytes on a 1kB page (although I have no idea if people ever actually compile PG with a different page size, but it is supposed to be supported?).\r\n\r\nI also understand the temptation to define it based on the relation's fill factor, as you did in the patch. However, upon some further thought I wonder if that's useful? A relation with a higher fill factor will have a lower 'saveFreeSpace' variable, so it's less likely to run into issues in finding a fresh page, except if the tuple you're inserting/updating is even larger. However, if that case happens, you'll still be wanting to look for a page that's completely empty (except for the line items). So the only proper metric is 'how many unused line items do we expect on empty pages' and the fillfactor doesn't say much about this. Since this is probably difficult to estimate at all, we may be better off just defining it off MaxHeapTupleSize completely?\r\nFor example, we expect 1.5% of the page could be line items, then:\r\n\r\ntargetFreeSpace = MaxHeapTupleSize * 0.985\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Wed, 24 Feb 2021 20:52:35 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 4:52 PM Floris Van Nee <florisvannee@optiver.com>\nwrote:\n\n> I also understand the temptation to define it based on the relation's\nfill factor, as you did in the patch. However, upon some further thought I\nwonder if that's useful? A relation with a higher fill factor will have a\nlower 'saveFreeSpace' variable, so it's less likely to run into issues in\nfinding a fresh page, except if the tuple you're inserting/updating is even\nlarger. However, if that case happens, you'll still be wanting to look for\na page that's completely empty (except for the line items). So the only\nproper metric is 'how many unused line items do we expect on empty pages'\nand the fillfactor doesn't say much about this. Since this is probably\ndifficult to estimate at all, we may be better off just defining it off\nMaxHeapTupleSize completely?\n> For example, we expect 1.5% of the page could be line items, then:\n>\n> targetFreeSpace = MaxHeapTupleSize * 0.985\n\nThat makes sense, although the exact number seems precisely tailored to\nyour use case. 2% gives 164 bytes of slack and doesn't seem too large.\nUpdated patch attached.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Feb 2021 17:19:47 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "\r\n> That makes sense, although the exact number seems precisely tailored to your use case. 2% gives 164 bytes of slack and doesn't seem too large. Updated patch attached.\r\n\r\nYeah, I tried picking it as conservative as I could, but 2% is obviously great too. :-) I can't think of any large drawbacks either of having a slightly larger value.\r\nThanks for posting the patch!\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Wed, 24 Feb 2021 22:29:21 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Wed, Feb 24, 2021 at 6:29 PM Floris Van Nee <florisvannee@optiver.com>\nwrote:\n>\n>\n> > That makes sense, although the exact number seems precisely tailored to\nyour use case. 2% gives 164 bytes of slack and doesn't seem too large.\nUpdated patch attached.\n>\n> Yeah, I tried picking it as conservative as I could, but 2% is obviously\ngreat too. :-) I can't think of any large drawbacks either of having a\nslightly larger value.\n> Thanks for posting the patch!\n\nI've added this to the commitfest as a bug fix and added you as an author.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Feb 24, 2021 at 6:29 PM Floris Van Nee <florisvannee@optiver.com> wrote:>>> > That makes sense, although the exact number seems precisely tailored to your use case. 2% gives 164 bytes of slack and doesn't seem too large. Updated patch attached.>> Yeah, I tried picking it as conservative as I could, but 2% is obviously great too. :-) I can't think of any large drawbacks either of having a slightly larger value.> Thanks for posting the patch!I've added this to the commitfest as a bug fix and added you as an author.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Feb 2021 07:14:18 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "> I've added this to the commitfest as a bug fix and added you as an author.\r\n\r\nThanks. Patch looks good to me, but I guess there needs to be someone else reviewing too?\r\nAlso, would this be a backpatchable bugfix?\r\n\r\n-Floris\r\n\r\n",
"msg_date": "Mon, 8 Mar 2021 15:25:11 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Mon, 8 Mar 2021 at 16:25, Floris Van Nee <florisvannee@optiver.com> wrote:\n>\n> > I've added this to the commitfest as a bug fix and added you as an author.\n>\n> Thanks. Patch looks good to me, but I guess there needs to be someone else reviewing too?\n> Also, would this be a backpatchable bugfix?\n>\n> -Floris\n>\n\nThis patch fails to consider that len may be bigger than\nMaxHeapTupleSize * 0.98, which in this case triggers a reproducable\nPANIC:\n\n=# CREATE TABLE t_failure (a int, b text) WITH (fillfactor = 10); --\nforce the new FSM calculation for large tuples\nCREATE TABLE\n=# ALTER TABLE t_failure ALTER COLUMN b SET STORAGE plain;\nALTER TABLE\n=# INSERT INTO t_failure (SELECT FROM generate_series(1, 32)); -- use\nup 32 line pointers on the first page.\nINSERT 0 32\n=# DELETE FROM t_failure;\nDELETE 32\n=# VACUUM (TRUNCATE OFF) t_failure; -- we now have a page that has\nMaxHeapTupleSize > free space > 98% MaxHeapTupleSize\nVACUUM\n=# INSERT INTO t_failure (select 1, string_agg('1', '') from\ngenerate_series(1, 8126));\nPANIC: failed to add tuple to page\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n\nA possible solution should always request at least the size of the\nrequested tuple, e.g.:\n- targetFreeSpace = MaxHeapTupleSize - (MaxHeapTupleSize * 2 / 100);\n+ targetFreeSpace = Max(len, MaxHeapTupleSize - (MaxHeapTupleSize * 2 / 100));\n\n\nOne different question I have, though, is why we can't \"just\" teach\nvacuum to clean up trailing unused line pointers. As in, can't we trim\nthe line pointer array when vacuum detects that the trailing line\npointers on the page are all unused?\n\nThe only documentation that I could find that this doesn't happen is\nin the comment on PageIndexTupleDelete and PageRepairFragmentation,\nboth not very descriptive on why we can't shrink the page->pd_linp\narray. 
One is \"Unlike heap pages, we compact out the line pointer for\nthe removed tuple.\" (Jan. 2002), and the other is \"It doesn't remove\nunused line pointers! Please don't change this.\" (Oct. 2000), but I\ncan't seem to find the documentation / conversations on the\nimplications that such shrinking would have.\n\nWith regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 00:14:19 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "Hi,\r\n\r\n> \r\n> This patch fails to consider that len may be bigger than MaxHeapTupleSize *\r\n> 0.98, which in this case triggers a reproducable\r\n> PANIC:\r\n\r\nGood catch! I've adapted the patch with your suggested fix.\r\n\r\n> \r\n> One different question I have, though, is why we can't \"just\" teach vacuum\r\n> to clean up trailing unused line pointers. As in, can't we trim the line pointer\r\n> array when vacuum detects that the trailing line pointers on the page are all\r\n> unused?\r\n> \r\n> The only documentation that I could find that this doesn't happen is in the\r\n> comment on PageIndexTupleDelete and PageRepairFragmentation, both not\r\n> very descriptive on why we can't shrink the page->pd_linp array. One is\r\n> \"Unlike heap pages, we compact out the line pointer for the removed tuple.\"\r\n> (Jan. 2002), and the other is \"It doesn't remove unused line pointers! Please\r\n> don't change this.\" (Oct. 2000), but I can't seem to find the documentation /\r\n> conversations on the implications that such shrinking would have.\r\n> \r\n\r\nThis is an interesting alternative indeed. I also can't find any documentation/conversation about this and the message is rather cryptic.\r\nHopefully someone on the list still remembers the reasoning behind this rather cryptic comment in PageRepairFragmentation.\r\n\r\n-Floris",
"msg_date": "Tue, 9 Mar 2021 13:40:29 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 9:40 AM Floris Van Nee <florisvannee@optiver.com>\nwrote:\n>\n> Hi,\n>\n> >\n> > This patch fails to consider that len may be bigger than\nMaxHeapTupleSize *\n> > 0.98, which in this case triggers a reproducable\n> > PANIC:\n>\n> Good catch! I've adapted the patch with your suggested fix.\n\nThank you both for testing and for the updated patch. It seems we should\nadd a regression test, but it's not clear which file it belongs in.\nPossibly insert.sql?\n\n> > One different question I have, though, is why we can't \"just\" teach\nvacuum\n> > to clean up trailing unused line pointers. As in, can't we trim the\nline pointer\n> > array when vacuum detects that the trailing line pointers on the page\nare all\n> > unused?\n\nThat seems like the proper fix, and I see you've started a thread for that.\nI don't think that change in behavior would be backpatchable, but patch\nhere might have a chance at that.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 9 Mar 2021 13:25:14 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "I wrote:\n\n> That seems like the proper fix, and I see you've started a thread for\nthat. I don't think that change in behavior would be backpatchable, but\npatch here might have a chance at that.\n\nI remembered after the fact that truncating line pointers would only allow\nfor omitting the 2% slack logic (and has other benefits), but the rest of\nthis patch would be needed regardless.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 9 Mar 2021 13:38:57 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Tue, 9 Mar 2021 at 18:39, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> I wrote:\n>\n> > That seems like the proper fix, and I see you've started a thread for that. I don't think that change in behavior would be backpatchable, but patch here might have a chance at that.\n>\n> I remembered after the fact that truncating line pointers would only allow for omitting the 2% slack logic (and has other benefits), but the rest of this patch would be needed regardless.\n\nRegarding the 2% slack logic, could we change it to use increments of\nline pointers instead? That makes it more clear what problem this\nsolution is trying to work around; that is to say line pointers not\n(all) being truncated away. The currently subtracted value accounts\nfor the size of 40 line pointers on 8k-pages (~ 13.7% of\nMaxHeapTuplesPerPage), and slightly higher fractions (up to 13.94%)\nfor larger page sizes, while the to-be-inserted tuple is already _at\nleast_ 10% of MaxHeapTupleSize when it hits this new code.\n\nAlso, even with this patch, we do FSM requests for sizes between\nMaxHeapTupleSize - 2% and MaxHeapTupleSize, if len+saveFreeSpace falls\nbetween those two numbers. I think we'd better clamp the fsm request\nbetween `len` and `MaxHeapTupleSize - PAGE_SIZE_DEPENDENT_FACTOR`.\n\nSo, I suggest the following incremental patch:\n\n bool needLock;\n+ const Size maxPaddedFsmRequest = MaxHeapTupleSize -\n(MaxHeapTuplesPerPage / 8 * sizeof(ItemIdData));\n...\n- if (len + saveFreeSpace > MaxHeapTupleSize)\n+ if (len + saveFreeSpace > maxPaddedFsmRequest)\n...\n- targetFreeSpace = Max(len, MaxHeapTupleSize - (MaxHeapTupleSize * 2 / 100));\n+ targetFreeSpace = Max(len, maxPaddedFsmRequest);\n...\n\nOther than this, I think this is a good fix.\n\n\nWith regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 14:45:54 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 9:46 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> Regarding the 2% slack logic, could we change it to use increments of\n> line pointers instead? That makes it more clear what problem this\n> solution is trying to work around; that is to say line pointers not\n> (all) being truncated away. The currently subtracted value accounts\n\nThat makes sense.\n\n> ...\n> - if (len + saveFreeSpace > MaxHeapTupleSize)\n> + if (len + saveFreeSpace > maxPaddedFsmRequest)\n> ...\n> - targetFreeSpace = Max(len, MaxHeapTupleSize - (MaxHeapTupleSize * 2 /\n100));\n> + targetFreeSpace = Max(len, maxPaddedFsmRequest);\n> ...\n\nIf we have that convenient constant, it seems equivalent (I think) and a\nbit more clear to write it this way, but I'm not wedded to it:\n\nif (len + saveFreeSpace > MaxHeapTupleSize &&\n len <= maxPaddedFsmRequest)\n{\n ...\n targetFreeSpace = maxPaddedFsmRequest;\n}\nelse\n targetFreeSpace = len + saveFreeSpace;\n\nAlso, should I write a regression test for it? The test case is already\navailable, just no obvious place to put it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 11 Mar 2021 11:16:04 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Thu, 11 Mar 2021 at 16:16, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Thu, Mar 11, 2021 at 9:46 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>\n> > Regarding the 2% slack logic, could we change it to use increments of\n> > line pointers instead? That makes it more clear what problem this\n> > solution is trying to work around; that is to say line pointers not\n> > (all) being truncated away. The currently subtracted value accounts\n>\n> That makes sense.\n>\n> > ...\n> > - if (len + saveFreeSpace > MaxHeapTupleSize)\n> > + if (len + saveFreeSpace > maxPaddedFsmRequest)\n> > ...\n> > - targetFreeSpace = Max(len, MaxHeapTupleSize - (MaxHeapTupleSize * 2 / 100));\n> > + targetFreeSpace = Max(len, maxPaddedFsmRequest);\n> > ...\n>\n> If we have that convenient constant, it seems equivalent (I think) and a bit more clear to write it this way, but I'm not wedded to it:\n>\n> if (len + saveFreeSpace > MaxHeapTupleSize &&\n> len <= maxPaddedFsmRequest)\n> {\n> ...\n> targetFreeSpace = maxPaddedFsmRequest;\n> }\n\n+ else if (len > maxPaddedFsmRequest)\n+ {\n+ /* request len amount of space; it might still fit on\nnot-quite-empty pages */\n+ targetFreeSpace = len;\n+ }\n\nIf this case isn't added, the lower else branch will fail to find\nfitting pages for len > maxPaddedFsmRequest tuples; potentially\nextending the relation when there is actually still enough space\navailable.\n\n> else\n> targetFreeSpace = len + saveFreeSpace;\n\n> Also, should I write a regression test for it? The test case is already available, just no obvious place to put it.\n\nI think it would be difficult to write tests that exhibit the correct\nbehaviour on BLCKSZ != 8192. On the other hand, I see there are some\ntests that explicitly call out that they expect BLCKSZ to be 8192, so\nthat has not really been a hard block before.\n\nThe previous code I sent had initial INSERT + DELETE + VACUUM. 
These\nstatements can be replaced with `INSERT INTO t_failure (b) VALUES\n(repeat('1', 95)); VACUUM;` for simplicity. The vacuum is still needed\nto populate the FSM for the new page.\n\nWith regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Fri, 12 Mar 2021 13:45:38 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 8:45 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n>\n> If this case isn't added, the lower else branch will fail to find\n> fitting pages for len > maxPaddedFsmRequest tuples; potentially\n> extending the relation when there is actually still enough space\n> available.\n\nOkay, with that it looks better to go back to using Max().\n\nAlso in v4:\n\n- With the separate constant you suggested, I split up the comment into two\nparts.\n- I've added a regression test to insert.sql similar to your earlier test,\nbut we cannot use vacuum, since in parallel tests there could still be\ntuples visible to other transactions. It's still possible to test\nalmost-all-free by inserting a small tuple.\n- Draft commit message\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 17 Mar 2021 16:51:48 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Wed, 17 Mar 2021 at 21:52, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Fri, Mar 12, 2021 at 8:45 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> >\n> > If this case isn't added, the lower else branch will fail to find\n> > fitting pages for len > maxPaddedFsmRequest tuples; potentially\n> > extending the relation when there is actually still enough space\n> > available.\n>\n> Okay, with that it looks better to go back to using Max().\n>\n> Also in v4:\n>\n> - With the separate constant you suggested, I split up the comment into two parts.\n\n> + * The minimum space on a page for it to be considered \"empty\" regardless\n> + * of fillfactor. We base this on MaxHeapTupleSize, minus a small amount\n> + * of slack. We allow slack equal to 1/8 the maximum space that could be\n> + * taken by line pointers, which is somewhat arbitrary.\n\n> + * We want to allow inserting a large tuple into an empty page even if\n> + * that would violate the fillfactor. Otherwise, we would unnecessarily\n> + * extend the relation. Instead, ask the FSM for maxPaddedFsmRequest\n> + * bytes. This will allow it to return a page that is not quite empty\n> + * because of unused line pointers\n\nHow about\n\n+ * Because pages that have no items left can still have space allocated\n+ * to item pointers, we consider pages \"empty\" for FSM requests when they\n+ * have at most 1/8 of their MaxHeapTuplesPerPage item pointers' space\n+ * allocated. 
This is a somewhat arbitrary number, but should prevent\n+ * most unnecessary relation extensions due to not finding \"empty\" pages\n+ * while inserting combinations of large tuples with low fillfactors.\n\n+ * When the free space to be requested from the FSM is greater than\n+ * maxPaddedFsmRequest, this is considered equivalent to requesting\n+ * insertion on an \"empty\" page, so instead of failing to find a page\n+ * with more empty space than an \"empty\" page and extending the relation,\n+ * we try to find a page which is considered \"empty\".\n\nThis is slightly more verbose, but I think this clarifies the\nreasoning why we need this a bit better. Feel free to reject or adapt\nas needed.\n\n> - I've added a regression test to insert.sql similar to your earlier test, but we cannot use vacuum, since in parallel tests there could still be tuples visible to other transactions. It's still possible to test almost-all-free by inserting a small tuple.\n> - Draft commit message\n\nApart from these mainly readability changes in comments, I think this is ready.\n\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Mar 2021 22:30:36 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 5:30 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n>\n> > + * The minimum space on a page for it to be considered \"empty\"\nregardless\n> > + * of fillfactor. We base this on MaxHeapTupleSize, minus a small\namount\n> > + * of slack. We allow slack equal to 1/8 the maximum space that\ncould be\n> > + * taken by line pointers, which is somewhat arbitrary.\n>\n> > + * We want to allow inserting a large tuple into an empty page\neven if\n> > + * that would violate the fillfactor. Otherwise, we would\nunnecessarily\n> > + * extend the relation. Instead, ask the FSM for\nmaxPaddedFsmRequest\n> > + * bytes. This will allow it to return a page that is not quite\nempty\n> > + * because of unused line pointers\n>\n> How about\n>\n> + * Because pages that have no items left can still have space\nallocated\n> + * to item pointers, we consider pages \"empty\" for FSM requests when\nthey\n> + * have at most 1/8 of their MaxHeapTuplesPerPage item pointers' space\n> + * allocated. This is a somewhat arbitrary number, but should prevent\n> + * most unnecessary relation extensions due to not finding \"empty\"\npages\n> + * while inserting combinations of large tuples with low fillfactors.\n>\n> + * When the free space to be requested from the FSM is greater than\n> + * maxPaddedFsmRequest, this is considered equivalent to requesting\n> + * insertion on an \"empty\" page, so instead of failing to find a page\n> + * with more empty space than an \"empty\" page and extend the relation,\n> + * we try to find a page which is considered \"emtpy\".\n>\n> This is slightly more verbose, but I think this clarifies the\n> reasoning why we need this a bit better. Feel free to reject or adapt\n> as needed.\n\nI like this in general, but still has some rough edges. I've made another\nattempt in v5 incorporating your suggestions. Let me know what you think.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 19 Mar 2021 14:16:22 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Fri, 19 Mar 2021 at 19:16, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Thu, Mar 18, 2021 at 5:30 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> >\n> > This is slightly more verbose, but I think this clarifies the\n> > reasoning why we need this a bit better. Feel free to reject or adapt\n> > as needed.\n>\n> I like this in general, but still has some rough edges. I've made another attempt in v5 incorporating your suggestions. Let me know what you think.\n\nThat is indeed better.\n\nI believe this is ready, so I've marked it as RFC in the commitfest application.\n\nWith regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Fri, 19 Mar 2021 19:42:08 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "I gather this is important when large_upd_rate=rate(cross-page update bytes\nfor tuples larger than fillfactor) exceeds small_ins_rate=rate(insert bytes\nfor tuples NOT larger than fillfactor). That is a plausible outcome when\ninserts are rare, and table bloat then accrues at\nlarge_upd_rate-small_ins_rate. I agree this patch improves behavior.\n\nDoes anyone have a strong opinion on whether to back-patch? I am weakly\ninclined not to back-patch, because today's behavior might happen to perform\nbetter when large_upd_rate-small_ins_rate<0. Besides the usual choices of\nback-patching or not back-patching, we could back-patch with a stricter\nthreshold. Suppose we accepted pages for larger-than-fillfactor tuples when\nthe pages have at least\nBLCKSZ-SizeOfPageHeaderData-sizeof(ItemIdData)-MAXALIGN(MAXALIGN(SizeofHeapTupleHeader)+1)+1\nbytes of free space. That wouldn't reuse a page containing a one-column\ntuple, but it would reuse a page having up to eight line pointers.\n\nOn Fri, Mar 19, 2021 at 02:16:22PM -0400, John Naylor wrote:\n> --- a/src/backend/access/heap/hio.c\n> +++ b/src/backend/access/heap/hio.c\n> @@ -335,11 +335,24 @@ RelationGetBufferForTuple(Relation relation, Size len,\n\n> +\tconst Size\tmaxPaddedFsmRequest = MaxHeapTupleSize -\n> +\t(MaxHeapTuplesPerPage / 8 * sizeof(ItemIdData));\n\nIn evaluating whether this is a good choice of value, I think about the\nexpected page lifecycle. A tuple barely larger than fillfactor (roughly\nlen=1+BLCKSZ*fillfactor/100) will start on a roughly-empty page. As long as\nthe tuple exists, the server will skip that page for inserts. Updates can\ncause up to floor(99/fillfactor) same-size versions of the tuple to occupy the\npage simultaneously, creating that many line pointers. At the fillfactor=10\nminimum, it's good to accept otherwise-empty pages having at least nine line\npointers, so a page can restart the aforementioned lifecycle. 
Tolerating even\nmore line pointers helps when updates reduce tuple size or when the page was\nused for smaller tuples before it last emptied. At the BLCKSZ=8192 default,\nthis maxPaddedFsmRequest allows 36 line pointers (good or somewhat high). At\nthe BLCKSZ=1024 minimum, it allows 4 line pointers (low). At the BLCKSZ=32768\nmaximum, 146 (likely excessive). I'm not concerned about optimizing\nnon-default block sizes, so let's keep your proposal.\n\nComments and the maxPaddedFsmRequest variable name use the term \"fsm\" for things\nnot specific to the FSM. For example, the patch's test case doesn't use the\nFSM. (That is fine. Ordinarily, RelationGetTargetBlock() furnishes its\nblock. Under CLOBBER_CACHE_ALWAYS, the \"try the last page\" logic does so. An\nFSM-using test would contain a VACUUM.) I plan to commit the attached\nversion; compared to v5, it updates comments and renames this variable.\n\nThanks,\nnm",
"msg_date": "Sat, 27 Mar 2021 00:00:31 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "Hi Noah,\n\nThanks for taking a look at this patch.\n\n> \n> In evaluating whether this is a good choice of value, I think about the\n> expected page lifecycle. A tuple barely larger than fillfactor (roughly\n> len=1+BLCKSZ*fillfactor/100) will start on a roughly-empty page. As long as\n> the tuple exists, the server will skip that page for inserts. Updates can cause\n> up to floor(99/fillfactor) same-size versions of the tuple to occupy the page\n> simultaneously, creating that many line pointers. At the fillfactor=10\n> minimum, it's good to accept otherwise-empty pages having at least nine line\n> pointers, so a page can restart the aforementioned lifecycle. Tolerating even\n> more line pointers helps when updates reduce tuple size or when the page\n> was used for smaller tuples before it last emptied. At the BLCKSZ=8192\n> default, this maxPaddedFsmRequest allows 36 line pointers (good or\n> somewhat high). At the BLCKSZ=1024 minimum, it allows 4 line pointers\n> (low). At the BLCKSZ=32768 maximum, 146 (likely excessive). I'm not\n> concerned about optimizing non-default block sizes, so let's keep your\n> proposal.\n> \n\nAgreed. You briefly mention this already, but the case that caused me to report this was exactly the one where under normal circumstances each UPDATE would be small. However, in rare cases, the tuple that is updated grows in size to 1k bytes (the specific case we encountered sometimes would under specific circumstances write extra info in a field, which would otherwise be NULL). Suppose that this 1k UPDATE does not fit into the current page (so no HOT update), then a new page would be created (HEAD behavior). However, it is very likely that the next updates to this same tuple will be the regular size again. This causes the higher number of line pointers on the page.\n\n-Floris\n\n\n\n",
"msg_date": "Sat, 27 Mar 2021 10:24:00 +0000",
"msg_from": "Floris Van Nee <florisvannee@Optiver.com>",
"msg_from_op": true,
"msg_subject": "RE: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Sat, Mar 27, 2021 at 3:00 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> Does anyone have a strong opinion on whether to back-patch? I am weakly\n> inclined not to back-patch, because today's behavior might happen to\nperform\n> better when large_upd_rate-small_ins_rate<0.\n\nIt's not a clear case. The present behavior is clearly a bug, but only\nmanifests in rare situations. The risk of the fix affecting other\nsituations is not zero, as you mention, but (thinking briefly about this\nand I could be wrong) the consequences don't seem as big as the reported\ncase of growing table size.\n\n> Besides the usual choices of\n> back-patching or not back-patching, we could back-patch with a stricter\n> threshold. Suppose we accepted pages for larger-than-fillfactor tuples\nwhen\n> the pages have at least\n>\nBLCKSZ-SizeOfPageHeaderData-sizeof(ItemIdData)-MAXALIGN(MAXALIGN(SizeofHeapTupleHeader)+1)+1\n> bytes of free space. That wouldn't reuse a page containing a one-column\n> tuple, but it would reuse a page having up to eight line pointers.\n\nI'm not sure how much that would help in the reported case that started\nthis thread.\n\n> Comments and the maxPaddedFsmRequest variable name use term \"fsm\" for\nthings\n> not specific to the FSM. For example, the patch's test case doesn't use\nthe\n> FSM. (That is fine. Ordinarily, RelationGetTargetBlock() furnishes its\n> block. Under CLOBBER_CACHE_ALWAYS, the \"try the last page\" logic does\nso. An\n> FSM-using test would contain a VACUUM.) I plan to commit the attached\n> version; compared to v5, it updates comments and renames this variable.\n\nLooks good to me, thanks!\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 27 Mar 2021 11:26:47 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
},
{
"msg_contents": "On Sat, Mar 27, 2021 at 11:26:47AM -0400, John Naylor wrote:\n> On Sat, Mar 27, 2021 at 3:00 AM Noah Misch <noah@leadboat.com> wrote:\n> > Does anyone have a strong opinion on whether to back-patch? I am weakly\n> > inclined not to back-patch, because today's behavior might happen to perform\n> > better when large_upd_rate-small_ins_rate<0.\n> \n> It's not a clear case. The present behavior is clearly a bug, but only\n> manifests in rare situations. The risk of the fix affecting other situations\n> is not zero, as you mention, but (thinking briefly about this and I could be\n> wrong) the consequences don't seem as big as the reported case of growing\n> table size.\n\nI agree sites that are hurting now will see a net benefit. I can call it a\nbug that we treat just-extended pages differently from existing\nzero-line-pointer pages (e.g. pages left by RelationAddExtraBlocks()).\nChanging how we treat pages having 100 bytes of data feels different to me.\nIt's more like a policy decision, not a clear bug fix.\n\nI'm open to back-patching, but I plan to do so only if a few people report\nbeing firmly in favor.\n\n> > Besides the usual choices of\n> > back-patching or not back-patching, we could back-patch with a stricter\n> > threshold. Suppose we accepted pages for larger-than-fillfactor tuples when\n> > the pages have at least\n> > BLCKSZ-SizeOfPageHeaderData-sizeof(ItemIdData)-MAXALIGN(MAXALIGN(SizeofHeapTupleHeader)+1)+1\n> > bytes of free space. That wouldn't reuse a page containing a one-column\n> > tuple, but it would reuse a page having up to eight line pointers.\n> \n> I'm not sure how much that would help in the reported case that started this thread.\n\nI'm not sure, either. The thread email just before yours (27 Mar 2021\n10:24:00 +0000) does suggest it would help less than the main proposal.\n\n\n",
"msg_date": "Sat, 27 Mar 2021 13:32:57 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: non-HOT update not looking at FSM for large tuple update"
}
]
[
{
"msg_contents": "We forgot to update the logical replication configuration settings\npage in commit ce0fdbfe97. After commit ce0fdbfe97, table\nsynchronization workers also started using replication origins to\ntrack the progress and the same should be reflected in docs.\n\nAttached patch for the same.",
"msg_date": "Fri, 26 Feb 2021 08:29:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Update docs of logical replication for commit ce0fdbfe97."
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 8:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> We forgot to update the logical replication configuration settings\n> page in commit ce0fdbfe97. After commit ce0fdbfe97, table\n> synchronization workers also started using replication origins to\n> track the progress and the same should be reflected in docs.\n>\n> Attached patch for the same.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 1 Mar 2021 09:25:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Update docs of logical replication for commit ce0fdbfe97."
}
]
[
{
"msg_contents": "Hi.\n\nI found a bug in pgbench's -d option and created a patch.\n\nThe bug is the following:\npgbench's option -d can display a debug log about the connection, which is \nlike \"pghost: foo pgport: 5432 nclients: 1 nxacts: 10 dbName: bar\".\nThis configuration is supplied by other options or environment variables \nlike PGUSER or PGPORT.\nWhen there is no PGDATABASE, pgbench will use PGUSER as the dbName.\nHowever, when there is a PGPORT environment variable, this debug logger \ndoesn't refer to PGUSER even if there is a PGUSER.\nIn other words, even if you are setting both PGPORT and PGUSER, \npgbench's option -d will display something like this: \"pghost: foo pgport: 5432 \nnclients: 1 nxacts: 10 dbName: \".\nI think this is a bug in that dbName is displayed as if it's not specified.\nNote that this bug is only related to this debug logger. The main unit \nof pgbench can establish a connection by complementing dbName with \nPGUSER despite this bug.\n\nSo I made a patch (only one line changed).\nAs shown in this patch file, I just changed the else if statement to an \nif statement.\nI'm suggesting this bug fix because I think it's a bug, but if there's \nany other intent to this else if statement, could you let me know?\n\nRegards\n---\nKota Miyake",
"msg_date": "Fri, 26 Feb 2021 13:18:20 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 01:18:20PM +0900, miyake_kouta wrote:\n> I'm suggesting this bug fix because I think it's a bug, but if there's any\n> other intent behind this else if statement, could you let me know?\n\nYes, the existing code could mess up the selection logic if\nPGPORT and PGUSER are both specified in an environment, masking the\nvalue of PGUSER, so let's fix that. This is as old as 412893b.\n--\nMichael",
"msg_date": "Fri, 26 Feb 2021 17:16:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 05:16:17PM +0900, Michael Paquier wrote:\n> Yes, the existing code could mess up the selection logic if\n> PGPORT and PGUSER are both specified in an environment, masking the\n> value of PGUSER, so let's fix that. This is as old as 412893b.\n\nBy the way, I can't help but wonder why pgbench has such a different\nhandling for the user name, fetching first PGUSER and then looking at\nthe options while most of the other binaries use get_user_name(). It\nseems to me that we could simplify the handling around \"login\" without\nreally impacting the usefulness of the tool, no?\n--\nMichael",
"msg_date": "Fri, 26 Feb 2021 20:30:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "2021-02-26 20:30, Michael Paquier wrote:\n> By the way, I can't help but wonder why pgbench has such a different\n> handling for the user name, fetching first PGUSER and then looking at\n> the options while most of the other binaries use get_user_name(). It\n> seems to me that we could simplify the handling around \"login\" without\n> really impacting the usefulness of the tool, no?\n\nHi.\n\nThank you for your comment.\nI modified the patch based on other binaries.\nIn this new patch, if there is a $PGUSER, then it's set to login.\nIf not, get_user_name_or_exit is executed.\nPlease let me know what you think about this change.\n--\nKota Miyake",
"msg_date": "Tue, 02 Mar 2021 11:52:33 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "On Tue, Mar 02, 2021 at 11:52:33AM +0900, miyake_kouta wrote:\n> I modified the patch based on other binaries.\n> In this new patch, if there is a $PGUSER, then it's set to login.\n> If not, get_user_name_or_exit is executed.\n> Please let me know what you think about this change.\n\nYour patch makes the database selection slightly better, but I think\nthat we can do better and simpler than that. So please see the\nattached.\n\nOne thing on HEAD that looks like a bug to me is that if one uses a\npgbench command without specifying user, port and/or name in the\ncommand for an environment without PGDATABASE, PGPORT and PGHOST set,\nthen the debug log just before doConnect() prints empty strings for\nall that, which is basically useless so one has no idea where the\nconnection happens. Like any other src/bin/ facilities, let's instead\nremove all the getenv() calls at the beginning of pgbench.c and let's\nlibpq handle those environment variables if the parameters are NULL\n(aka in the case of no values given directly in the options). This\nrequires moving the debug log after doConnect(), which is no big deal\nanyway as a failure results in an exit(1) immediately with a log\ntelling where the connection failed.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 21:11:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "2021-03-04 21:11, Michael Paquier wrote:\n> Like any other src/bin/ facilities, let's instead\n> remove all the getenv() calls at the beginning of pgbench.c and let's\n> libpq handle those environment variables if the parameters are NULL\n> (aka in the case of no values given directly in the options). This\n> requires to move the debug log after doConnect(), which is no big deal\n> anyway as a failure results in an exit(1) immediately with a log\n> telling where the connection failed.\n\nThank you for improving my patch.\nI agree that we should remove getenv() from pgbench.c and let libpq \ncomplement parameters with environment variables.\nAs you said, it's not a big problem that the debug log output comes \nafter doConnect(), I think.\nYour patch is simpler and more ideal.\n\nRegards.\n--\nKota Miyake\n\n\n",
"msg_date": "Fri, 05 Mar 2021 11:26:45 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "\n\nOn 2021/03/05 11:26, miyake_kouta wrote:\n> 2021-03-04 21:11, Michael Paquier wrote:\n>> Like any other src/bin/ facilities, let's instead\n>> remove all the getenv() calls at the beginning of pgbench.c and let's\n>> libpq handle those environment variables if the parameters are NULL\n>> (aka in the case of no values given directly in the options). This\n>> requires to move the debug log after doConnect(), which is no big deal\n>> anyway as a failure results in an exit(1) immediately with a log\n>> telling where the connection failed.\n\n \t\tif ((env = getenv(\"PGDATABASE\")) != NULL && *env != '\\0')\n \t\t\tdbName = env;\n-\t\telse if (login != NULL && *login != '\\0')\n-\t\t\tdbName = login;\n+\t\telse if ((env = getenv(\"PGUSER\")) != NULL && *env != '\\0')\n+\t\t\tdbName = env;\n \t\telse\n-\t\t\tdbName = \"\";\n+\t\t\tdbName = get_user_name_or_exit(progname);\n\nIf dbName is set by libpq, the above also is not necessary?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 5 Mar 2021 13:30:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 01:30:11PM +0900, Fujii Masao wrote:\n> \t\tif ((env = getenv(\"PGDATABASE\")) != NULL && *env != '\\0')\n> \t\t\tdbName = env;\n> -\t\telse if (login != NULL && *login != '\\0')\n> -\t\t\tdbName = login;\n> +\t\telse if ((env = getenv(\"PGUSER\")) != NULL && *env != '\\0')\n> +\t\t\tdbName = env;\n> \t\telse\n> -\t\t\tdbName = \"\";\n> +\t\t\tdbName = get_user_name_or_exit(progname);\n> \n> If dbName is set by libpq, the above also is not necessary?\n\nIf you remove this part, pgbench loses some log information if\nPQconnectdbParams() returns NULL, like on OOM if the database name is\nNULL. Perhaps that's not worth caring about here for a single log\nfailure, but that's the reason why I left this part around.\n\nNow, simplifying the code is one goal of this patch, so I would not\nmind shaving a bit more of it :)\n--\nMichael",
"msg_date": "Fri, 5 Mar 2021 16:33:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "\n\nOn 2021/03/05 16:33, Michael Paquier wrote:\n> On Fri, Mar 05, 2021 at 01:30:11PM +0900, Fujii Masao wrote:\n>> \t\tif ((env = getenv(\"PGDATABASE\")) != NULL && *env != '\\0')\n>> \t\t\tdbName = env;\n>> -\t\telse if (login != NULL && *login != '\\0')\n>> -\t\t\tdbName = login;\n>> +\t\telse if ((env = getenv(\"PGUSER\")) != NULL && *env != '\\0')\n>> +\t\t\tdbName = env;\n>> \t\telse\n>> -\t\t\tdbName = \"\";\n>> +\t\t\tdbName = get_user_name_or_exit(progname);\n>>\n>> If dbName is set by libpq, the above also is not necessary?\n> \n> If you remove this part, pgbench loses some log information if\n> PQconnectdbParams() returns NULL, like on OOM if the database name is\n> NULL. Perhaps that's not worth caring about here for a single log\n> failure, but that's the reason why I left this part around.\n\nUnderstood. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 5 Mar 2021 18:35:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 06:35:47PM +0900, Fujii Masao wrote:\n> Understood. Thanks!\n\nOkay, so I have gone through this stuff today, and applied the\nsimplification. Thanks.\n--\nMichael",
"msg_date": "Sat, 6 Mar 2021 22:00:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Mar 05, 2021 at 06:35:47PM +0900, Fujii Masao wrote:\n>> Understood. Thanks!\n\n> Okay, so I have gone through this stuff today, and applied the\n> simplification. Thanks.\n\nThis item is still open according to the commitfest app ---\nshould that entry be closed?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Mar 2021 13:19:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 01:19:44PM -0500, Tom Lane wrote:\n> This item is still open according to the commitfest app ---\n> should that entry be closed?\n\nThanks. Done.\n--\nMichael",
"msg_date": "Sun, 7 Mar 2021 08:45:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Bug fix for the -d option"
}
]
[
{
"msg_contents": "Hello hackers,\n\nBack in 2016, Robert Haas proposed to replace I/O locks with condition\nvariables[1]. Condition variables went in and have found lots of\nuses, but this patch to replace a bunch of LWLocks and some busy\nlooping did not. Since then, it has been tested quite a lot as part\nof the AIO project[2], which currently depends on it. That's why I'm\ninterested in following up now. I asked Robert if he planned to\nre-propose it and he said I should go for it, so... here I go.\n\nAt the time, Tom Lane said:\n\n> Hmm. I fear the only reason you see an advantage there is that you don't\n> (yet) have any general-purpose mechanism for an aborting transaction to\n> satisfy its responsibilities vis-a-vis waiters on condition variables.\n> Instead, this wins specifically because you stuck some bespoke logic into\n> AbortBufferIO. OK ... but that sounds like we're going to end up with\n> every single condition variable that ever exists in the system needing to\n> be catered for separately and explicitly during transaction abort cleanup.\n> Which does not sound promising from a reliability standpoint. On the\n> other hand, I don't know what the equivalent rule to \"release all LWLocks\n> during abort\" might look like for condition variables, so I don't know\n> if it's even possible to avoid that.\n\nIt's true that cases like this one need bespoke logic, but that was\nalready the case: you have to make sure you call TerminateBufferIO()\nas before, it's just that BM_IO_IN_PROGRESS-clearing is now a\nCV-broadcastable event. That seems reasonable to me. As for the more\ngeneral point about the danger of waiting on CVs when potential\nbroadcasters might abort, and with the considerable benefit of a few\nyears of hindsight: I think the existing users of CVs mostly fall\ninto the category of waiters that will be shut down by a higher\nauthority if the expected broadcaster aborts. Examples: Parallel\nquery's interrupt-based error system will abort every back end waiting\nat a parallel hash join barrier if any process involved in the query\naborts, and the whole cluster will be shut down if you're waiting for\na checkpoint when the checkpointer dies.\n\nIt looks like there may be a nearby opportunity to improve another\n(rare?) busy loop, when InvalidateBuffer() encounters a pinned buffer,\nbased on this comment:\n\n * ... Note that if the other guy has pinned the buffer but not\n * yet done StartBufferIO, WaitIO will fall through and we'll effectively\n * be busy-looping here.)\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmoaj2aPti0yho7FeEf2qt-JgQPRWb0gci_o1Hfr%3DC56Xng%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/20210223100344.llw5an2aklengrmn%40alap3.anarazel.de",
"msg_date": "Fri, 26 Feb 2021 19:08:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Replace buffer I/O locks with condition variables (reviving an old\n patch)"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 7:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Back in 2016, Robert Haas proposed to replace I/O locks with condition\n> variables[1]. Condition variables went in and have found lots of\n> uses, but this patch to replace a bunch of LWLocks and some busy\n> looping did not. Since then, it has been tested quite a lot as part\n> of the AIO project[2], which currently depends on it. That's why I'm\n> interested in following up now. I asked Robert if he planned to\n> re-propose it and he said I should go for it, so... here I go.\n\nI removed a redundant (Size) cast, fixed the wait event name and\ncategory (WAIT_EVENT_BUFFILE_XXX is for buffile.c stuff, not bufmgr.c\nstuff, and this is really an IPC wait, not an IO wait despite the\nname), updated documentation and pgindented.",
"msg_date": "Fri, 5 Mar 2021 12:12:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 12:12 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Feb 26, 2021 at 7:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Back in 2016, Robert Haas proposed to replace I/O locks with condition\n> > variables[1]. Condition variables went in and have found lots of\n> > uses, but this patch to replace a bunch of LWLocks and some busy\n> > looping did not. Since then, it has been tested quite a lot as part\n> > of the AIO project[2], which currently depends on it. That's why I'm\n> > interested in following up now. I asked Robert if he planned to\n> > re-propose it and he said I should go for it, so... here I go.\n>\n> I removed a redundant (Size) cast, fixed the wait event name and\n> category (WAIT_EVENT_BUFFILE_XXX is for buffile.c stuff, not bufmgr.c\n> stuff, and this is really an IPC wait, not an IO wait despite the\n> name), updated documentation and pgindented.\n\nMore review and some proposed changes:\n\nThe old I/O lock array was the only user of struct\nLWLockMinimallyPadded, added in commit 6150a1b08a9, and it seems kinda\nstrange to leave it in the tree with no user. Of course it's remotely\npossible there are extensions using it (know of any?). In the\nattached, I've ripped that + associated commentary out, because it's\nfun to delete dead code. Objections?\n\nSince the whole reason for that out-of-line array in the first place\nwas to keep BufferDesc inside one cache line, and since it is in fact\npossible to put a new condition variable into BufferDesc without\nexceeding 64 bytes on a 64 bit x86 box, perhaps we should just do that\ninstead? I haven't yet considered other architectures or potential\nmember orders. It's also possible that some other project already had\ndesigns on those BufferDesc bytes. This drops quite a few lines from\nthe tree, including the comment about how nice it'd be to be able to\nput the lock in BufferDesc.\n\nI wonder if we should try to preserve user experience a little harder,\nfor the benefit of people who have monitoring queries that look for\nthis condition. Instead of inventing a new wait_event value, let's\njust keep showing \"BufferIO\" in that column. In other words, the\nchange is that wait_event_type changes from \"LWLock\" to \"IPC\", which\nis a pretty good summary of this patch. Done in the attached. Does\nthis make sense?\n\nPlease see attached, which gets us to: 8 files changed, 30\ninsertions(+), 113 deletions(-)\n\nPS: An idea I thought about while studying this patch is that we\nshould be able to make signaling an empty condition variable\nfree/cheap (no spinlock acquisition or other extra memory\nbarrier-containing operation); I'll write about that separately.",
"msg_date": "Mon, 8 Mar 2021 18:10:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
},
{
"msg_contents": "On Mon, Mar 08, 2021 at 06:10:36PM +1300, Thomas Munro wrote:\n> On Fri, Mar 5, 2021 at 12:12 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Feb 26, 2021 at 7:08 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > Back in 2016, Robert Haas proposed to replace I/O locks with condition\n> > > variables[1]. Condition variables went in and have found lots of\n> > > uses, but this patch to replace a bunch of LWLocks and some busy\n> > > looping did not. Since then, it has been tested quite a lot as part\n> > > of the AIO project[2], which currently depends on it. That's why I'm\n> > > interested in following up now. I asked Robert if he planned to\n> > > re-propose it and he said I should go for it, so... here I go.\n> >\n> > I removed a redundant (Size) cast, fixed the wait event name and\n> > category (WAIT_EVENT_BUFFILE_XXX is for buffile.c stuff, not bufmgr.c\n> > stuff, and this is really an IPC wait, not an IO wait despite the\n> > name), updated documentation and pgindented.\n> \n> More review and some proposed changes:\n> \n> The old I/O lock array was the only user of struct\n> LWLockMinimallyPadded, added in commit 6150a1b08a9, and it seems kinda\n> strange to leave it in the tree with no user. Of course it's remotely\n> possible there are extensions using it (know of any?). In the\n> attached, I've ripped that + associated commentary out, because it's\n> fun to delete dead code. Objections?\n\nNone from me. I don't know of any extension relying on it, and neither does\ncodesearch.debian.net. I would be surprised to see any extension actually\nrelying on that anyway.\n\n> Since the whole reason for that out-of-line array in the first place\n> was to keep BufferDesc inside one cache line, and since it is in fact\n> possible to put a new condition variable into BufferDesc without\n> exceeding 64 bytes on a 64 bit x86 box, perhaps we should just do that\n> instead? I haven't yet considered other architectures or potential\n> member orders.\n\n+1 for adding the cv into BufferDesc. That brings the struct size to exactly\n64 bytes on x86 64 bits architecture. This won't add any extra overhead to\nLOCK_DEBUG cases, as it was already exceeding the 64B threshold, if that even\nwas a concern.\n\n> I wonder if we should try to preserve user experience a little harder,\n> for the benefit of people who have monitoring queries that look for\n> this condition. Instead of inventing a new wait_event value, let's\n> just keep showing \"BufferIO\" in that column. In other words, the\n> change is that wait_event_type changes from \"LWLock\" to \"IPC\", which\n> is a pretty good summary of this patch. Done in the attached. Does\n> this make sense?\n\nI think it does make sense, and it's good to preserve this value.\n\nLooking at the patch itself, I don't have much to add; it all looks sensible and\nI agree with the arguments in the first mail. All regression tests pass and\ndocumentation builds.\n\nI'm marking this patch as RFC.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 13:25:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 6:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > The old I/O lock array was the only user of struct\n> > LWLockMinimallyPadded, added in commit 6150a1b08a9, and it seems kinda\n> > strange to leave it in the tree with no user. Of course it's remotely\n> > possible there are extensions using it (know of any?). In the\n> > attached, I've ripped that + associated commentary out, because it's\n> > fun to delete dead code. Objections?\n>\n> None from me. I don't know of any extension relying on it, and neither does\n> codesearch.debian.net. I would be surprised to see any extension actually\n> relying on that anyway.\n\nThanks for checking!\n\n> > Since the whole reason for that out-of-line array in the first place\n> > was to keep BufferDesc inside one cache line, and since it is in fact\n> > possible to put a new condition variable into BufferDesc without\n> > exceeding 64 bytes on a 64 bit x86 box, perhaps we should just do that\n> > instead? I haven't yet considered other architectures or potential\n> > member orders.\n>\n> +1 for adding the cv into BufferDesc. That brings the struct size to exactly\n> 64 bytes on x86 64 bits architecture. This won't add any extra overhead to\n> LOCK_DEBUG cases, as it was already exceeding the 64B threshold, if that even\n> was a concern.\n\nI also checked that it's 64B on an Arm box. Not sure about POWER.\nBut... despite the fact that it looks like a good change in isolation,\nI decided to go back to the separate array in this initial commit,\nbecause the AIO branch also wants to add a new BufferDesc member[1].\nI may come back to that change, if we can make some more space (seems\nentirely doable, but I'd like to look into that separately).\n\n> > I wonder if we should try to preserve user experience a little harder,\n> > for the benefit of people who have monitoring queries that look for\n> > this condition. Instead of inventing a new wait_event value, let's\n> > just keep showing \"BufferIO\" in that column. In other words, the\n> > change is that wait_event_type changes from \"LWLock\" to \"IPC\", which\n> > is a pretty good summary of this patch. Done in the attached. Does\n> > this make sense?\n>\n> I think it does make sense, and it's good to preserve this value.\n>\n> Looking at the patch itself, I don't have much to add; it all looks sensible and\n> I agree with the arguments in the first mail. All regression tests pass and\n> documentation builds.\n\nI found one more thing to tweak: a reference in the README.\n\n> I'm marking this patch as RFC.\n\nThanks for the review. And of course to Robert for writing the patch. Pushed.\n\n[1] https://github.com/anarazel/postgres/blob/aio/src/include/storage/buf_internals.h#L190\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:48:40 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 10:48:40AM +1300, Thomas Munro wrote:\n> On Tue, Mar 9, 2021 at 6:24 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > +1 for adding the cv into BufferDesc. That brings the struct size to exactly\n> > 64 bytes on x86 64 bits architecture. This won't add any extra overhead to\n> > LOCK_DEBUG cases, as it was already exceeding the 64B threshold, if that even\n> > was a concern.\n> \n> I also checked that it's 64B on an Arm box. Not sure about POWER.\n> But... despite the fact that it looks like a good change in isolation,\n> I decided to go back to the separate array in this initial commit,\n> because the AIO branch also wants to add a new BufferDesc member[1].\n\nOk!\n\n> I may come back to that change, if we can make some more space (seems\n> entirely doable, but I'd like to look into that separately).\n\n- /*\n- * It would be nice to include the I/O locks in the BufferDesc, but that\n- * would increase the size of a BufferDesc to more than one cache line,\n- * and benchmarking has shown that keeping every BufferDesc aligned on a\n- * cache line boundary is important for performance. So, instead, the\n- * array of I/O locks is allocated in a separate tranche. Because those\n- * locks are not highly contended, we lay out the array with minimal\n- * padding.\n- */\n- size = add_size(size, mul_size(NBuffers, sizeof(LWLockMinimallyPadded)));\n+ /* size of I/O condition variables */\n+ size = add_size(size, mul_size(NBuffers,\n+ sizeof(ConditionVariableMinimallyPadded)));\n\nShould we keep for now some similar comment mentioning why we don't put the cv\nin the BufferDesc even though it would currently fit the 64B target size?\n\n> Thanks for the review. And of course to Robert for writing the patch. Pushed.\n\nGreat!\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:27:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 3:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> - /*\n> - * It would be nice to include the I/O locks in the BufferDesc, but that\n> - * would increase the size of a BufferDesc to more than one cache line,\n> - * and benchmarking has shown that keeping every BufferDesc aligned on a\n> - * cache line boundary is important for performance. So, instead, the\n> - * array of I/O locks is allocated in a separate tranche. Because those\n> - * locks are not highly contended, we lay out the array with minimal\n> - * padding.\n> - */\n> - size = add_size(size, mul_size(NBuffers, sizeof(LWLockMinimallyPadded)));\n> + /* size of I/O condition variables */\n> + size = add_size(size, mul_size(NBuffers,\n> + sizeof(ConditionVariableMinimallyPadded)));\n>\n> Should we keep for now some similar comment mentioning why we don't put the cv\n> in the BufferDesc even though it would currently fit the 64B target size?\n\nI tried to write some words along those lines, but it seemed hard to\ncome up with a replacement message about a thing we're not doing\nbecause of other currently proposed patches. The situation could\nchange, and it seemed to be a strange place to put this comment\nanyway, far away from the relevant struct. Ok, let me try that\nagain... what do you think of this, as a new comment for BufferDesc,\nnext to the existing discussion of the 64 byte rule?\n\n--- a/src/include/storage/buf_internals.h\n+++ b/src/include/storage/buf_internals.h\n@@ -174,6 +174,10 @@ typedef struct buftag\n * Be careful to avoid increasing the size of the struct when adding or\n * reordering members. Keeping it below 64 bytes (the most common CPU\n * cache line size) is fairly important for performance.\n+ *\n+ * Per-buffer I/O condition variables are kept outside this struct in a\n+ * separate array. They could be moved in here and still fit under that\n+ * limit on common systems, but for now that is not done.\n */\n typedef struct BufferDesc\n {\n\n\n",
"msg_date": "Thu, 11 Mar 2021 15:54:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 03:54:06PM +1300, Thomas Munro wrote:\n> On Thu, Mar 11, 2021 at 3:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > - /*\n> > - * It would be nice to include the I/O locks in the BufferDesc, but that\n> > - * would increase the size of a BufferDesc to more than one cache line,\n> > - * and benchmarking has shown that keeping every BufferDesc aligned on a\n> > - * cache line boundary is important for performance. So, instead, the\n> > - * array of I/O locks is allocated in a separate tranche. Because those\n> > - * locks are not highly contended, we lay out the array with minimal\n> > - * padding.\n> > - */\n> > - size = add_size(size, mul_size(NBuffers, sizeof(LWLockMinimallyPadded)));\n> > + /* size of I/O condition variables */\n> > + size = add_size(size, mul_size(NBuffers,\n> > + sizeof(ConditionVariableMinimallyPadded)));\n> >\n> > Should we keep for now some similar comment mentioning why we don't put the cv\n> > in the BufferDesc even though it would currently fit the 64B target size?\n> \n> I tried to write some words along those lines, but it seemed hard to\n> come up with a replacement message about a thing we're not doing\n> because of other currently proposed patches. The situation could\n> change, and it seemed to be a strange place to put this comment\n> anyway, far away from the relevant struct.\n\nYeah, I agree that it's not the best place to document the size consideration.\n\n> Ok, let me try that\n> again... what do you think of this, as a new comment for BufferDesc,\n> next to the existing discussion of the 64 byte rule?\n> \n> --- a/src/include/storage/buf_internals.h\n> +++ b/src/include/storage/buf_internals.h\n> @@ -174,6 +174,10 @@ typedef struct buftag\n> * Be careful to avoid increasing the size of the struct when adding or\n> * reordering members. Keeping it below 64 bytes (the most common CPU\n> * cache line size) is fairly important for performance.\n> + *\n> + * Per-buffer I/O condition variables are kept outside this struct in a\n> + * separate array. They could be moved in here and still fit under that\n> + * limit on common systems, but for now that is not done.\n> */\n> typedef struct BufferDesc\n> {\n\nI was mostly thinking about something like \"leave room for now as another feature\ncould make better use of that space\", but I'm definitely fine with this\ncomment!\n\n",
"msg_date": "Thu, 11 Mar 2021 11:11:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Replace buffer I/O locks with condition variables (reviving an\n old patch)"
}
]
[
{
"msg_contents": "Hi all,\n\nSince c5b2860, it is possible to specify a tablespace for a REINDEX,\nbut the equivalent option has not been added to reindexdb. Attached\nis a patch to take care of that.\n\nThis includes documentation and tests.\n\nWhile on it, I have added tests for toast tables and indexes with a\ntablespace move during a REINDEX. Those operations fail, but it is\nnot possible to get that into the main regression test suite because\nthe error messages include the relation names so that's unstable.\nWell, it would be possible to do that for the non-concurrent case\nusing a TRY/CATCH block in a custom function but that would not work\nwith CONCURRENTLY. Anyway, I would rather group the whole set of\ntests together, and using the --tablespace option introduced here\nwithin a TAP test does the job.\n\nThis is added to the next commit fest.\n\nThanks,\n--\nMichael",
"msg_date": "Fri, 26 Feb 2021 15:49:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Add --tablespace option to reindexdb"
},
{
"msg_contents": "> On 26 Feb 2021, at 07:49, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Since c5b2860, it is possible to specify a tablespace for a REINDEX,\n> but the equivalent option has not been added to reindexdb. Attached\n> is a patch to take care of that.\n> \n> This includes documentation and tests.\n\nMakes sense.\n\n> While on it, I have added tests for toast tables and indexes with a\n> tablespace move during a REINDEX. Those operations fail, but it is\n> not possible to get that into the main regression test suite because\n> the error messages include the relation names so that's unstable.\n> Well, it would be possible to do that for the non-concurrent case\n> using a TRY/CATCH block in a custom function but that would not work\n> with CONCURRENTLY. Anyway, I would rather group the whole set of\n> tests together, and using the --tablespace option introduced here\n> within a TAP test does the job.\n\nAgreed, doing it with a TAP test removes the complication.\n\nSome other small comments:\n\n+\t\tAssert(PQserverVersion(conn) >= 140000);\nAre these assertions really buying us much when we already check the server\nversion in reindex_one_database()?\n\n+\tprintf(_(\" --tablespace=TABLESPACE reindex on specified tablespace\\n\"));\nWhile not introduced by this patch, I realized that we have a mix of\nconventions for how to indent long options which don't have a short option.\nUnder \"Connection options\" all options are left-justified but under \"Options\"\nall long-options are aligned with space indentation for missing shorts. Some\ntools do it like this, where others like createuser left justifies under\nOptions as well. Maybe not the most pressing issue, but consistency is always\ngood in user interfaces so maybe it's worth addressing (in a separate patch)?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 26 Feb 2021 11:00:13 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 11:00:13AM +0100, Daniel Gustafsson wrote:\n> Some other small comments:\n> \n> +\t\tAssert(PQserverVersion(conn) >= 140000);\n> Are these assertions really buying us much when we already check the server\n> version in reindex_one_database()?\n\nI found these helpful when working on vacuumdb and refactoring the\ncode, though I'd agree this code may not justify going down to that.\n\n> +\tprintf(_(\" --tablespace=TABLESPACE reindex on specified tablespace\\n\"));\n> While not introduced by this patch, I realized that we have a mix of\n> conventions for how to indent long options which don't have a short option.\n> Under \"Connection options\" all options are left-justified but under \"Options\"\n> all long-options are aligned with space indentation for missing shorts. Some\n> tools do it like this, where others like createuser left justifies under\n> Options as well. Maybe not the most pressing issue, but consistency is always\n> good in user interfaces so maybe it's worth addressing (in a separate patch)?\n\nYeah, consistency is good, though I am not sure which one would be\nconsistent here actually as there is no defined rule. For this one,\nI think that I would keep what I have to be consistent with the\nexisting inconsistency (?). In short, I would just add --tablespace\nthe same way there is --concurrently, without touching the connection\noption part.\n--\nMichael",
"msg_date": "Fri, 26 Feb 2021 20:14:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "\n\n> On Feb 25, 2021, at 10:49 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> Hi all,\n\nHi Michael,\n\n> Since c5b2860, it is possible to specify a tablespace for a REINDEX,\n> but the equivalent option has not been added to reindexdb. Attached\n> is a patch to take care of that.\n> \n> This includes documentation and tests.\n\nThe documentation of the TABLESPACE option for REINDEX says:\n\n+ Specifies that indexes will be rebuilt on a new tablespace.\n\nbut in your patch for reindexdb, it is less clear:\n\n+ Specifies the tablespace to reindex on. (This name is processed as\n+ a double-quoted identifier.)\n\nI think the language \"to reindex on\" could lead users to think that indexes already existent in the given tablespace will be reindexed. In other words, the option might be misconstrued as a way to specify all the indexes you want reindexed. Whatever you do here, beware that you are using similar language in the --help, so consider changing that, too.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 09:26:03 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "\n\n> On Feb 25, 2021, at 10:49 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> While on it, I have added tests for toast tables and indexes with a\n> tablespace move during a REINDEX. \n\n\nIn t/090_reindexdb.pl, you add:\n\n+# Create a tablespace for testing.\n+my $ts = $node->basedir . '/regress_reindex_tbspace';\n+mkdir $ts or die \"cannot create directory $ts\";\n+# this takes care of WIN-specific path issues\n+my $ets = TestLib::perl2host($ts);\n+my $tbspace = 'reindex_tbspace';\n+$node->safe_psql('postgres', \"CREATE TABLESPACE $tbspace LOCATION '$ets';\");\n\nI recognize that you are borrowing some of that from src/bin/pgbench/t/001_pgbench_with_server.pl, but I don't find anything intuitive about the name \"ets\" and would rather not see that repeated. There is nothing in TestLib::perl2host to explain this name choice, nor in pgbench's test, nor yours.\n\nYou also change a test:\n\n $node->issues_sql_like(\n- [ 'reindexdb', '-v', '-t', 'test1', 'postgres' ],\n- qr/statement: REINDEX \\(VERBOSE\\) TABLE public\\.test1;/,\n- 'reindex with verbose output');\n+ [ 'reindexdb', '--concurrently', '-v', '-t', 'test1', 'postgres' ],\n+ qr/statement: REINDEX \\(VERBOSE\\) TABLE CONCURRENTLY public\\.test1;/,\n+ 'reindex concurrently with verbose output');\n\nbut I don't see what tests of the --concurrently option have to do with this patch. I'm not saying there is anything wrong with this change, but it seems out of place. Am I missing something?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 09:47:57 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "\n\n> On Feb 25, 2021, at 10:49 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> Anyway, I would rather group the whole set of\n> tests together, and using the --tablespace option introduced here\n> within a TAP test does the job.\n\nYour check verifies that reindexing a system table on a new tablespace fails, but does not verify what the failure was. I wonder if you might want to make it more robust, something like:\n\ndiff --git a/src/bin/scripts/t/090_reindexdb.pl b/src/bin/scripts/t/090_reindexdb.pl\nindex 6946268209..8453acc817 100644\n--- a/src/bin/scripts/t/090_reindexdb.pl\n+++ b/src/bin/scripts/t/090_reindexdb.pl\n@@ -3,7 +3,7 @@ use warnings;\n \n use PostgresNode;\n use TestLib;\n-use Test::More tests => 54;\n+use Test::More tests => 58;\n \n program_help_ok('reindexdb');\n program_version_ok('reindexdb');\n@@ -108,23 +108,35 @@ $node->issues_sql_like(\n # names, and CONCURRENTLY cannot be used in transaction blocks, preventing\n # the use of TRY/CATCH blocks in a custom function to filter error\n # messages.\n-$node->command_fails(\n+$node->command_checks_all(\n [ 'reindexdb', '-t', $toast_table, '--tablespace', $tbspace, 'postgres' ],\n+ 1,\n+ [ ],\n+ [ qr/cannot move system relation/ ],\n 'reindex toast table with tablespace');\n-$node->command_fails(\n+$node->command_checks_all(\n [\n 'reindexdb', '--concurrently', '-t', $toast_table,\n '--tablespace', $tbspace, 'postgres'\n ],\n+ 1,\n+ [ ],\n+ [ qr/cannot move system relation/ ],\n 'reindex toast table concurrently with tablespace');\n-$node->command_fails(\n+$node->command_checks_all(\n [ 'reindexdb', '-i', $toast_index, '--tablespace', $tbspace, 'postgres' ],\n+ 1,\n+ [ ],\n+ [ qr/cannot move system relation/ ],\n 'reindex toast index with tablespace');\n-$node->command_fails(\n+$node->command_checks_all(\n [\n 'reindexdb', '--concurrently', '-i', $toast_index,\n '--tablespace', $tbspace, 'postgres'\n ],\n+ 1,\n+ [ ],\n+ [ qr/cannot move system relation/ ],\n 'reindex toast 
index concurrently with tablespace');\n \n # connection strings\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 10:12:54 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 10:12:54AM -0800, Mark Dilger wrote:\n> Your check verifies that reindexing a system table on a new\n> tablespace fails, but does not verify what the failure was. I\n> wonder if you might want to make it more robust, something like:\n\nI was not sure if that was worth adding or not, but no objections to\ndo so. So updated the patch to do that.\n\nOn Mon, Mar 01, 2021 at 09:47:57AM -0800, Mark Dilger wrote:\n> I recognize that you are borrowing some of that from\n> src/bin/pgbench/t/001_pgbench_with_server.pl, but I don't find\n> anything intuitive about the name \"ets\" and would rather not see\n> that repeated. There is nothing in TestLib::perl2host to explain\n> this name choice, nor in pgbench's test, nor yours.\n\nOkay, I have made the variable names more explicit.\n\n> but I don't see what tests of the --concurrently option have to do\n> with this patch. I'm not saying there is anything wrong with this\n> change, but it seems out of place. Am I missing something?\n\nWhile hacking on this feature I have just bumped into this separate\nissue, where the same test just gets repeated twice. I have just\napplied an independent patch to take care of this problem separately,\nand backpatched it down to 12 where this got introduced.\n\nOn Mon, Mar 01, 2021 at 09:26:03AM -0800, Mark Dilger wrote:\n> I think the language \"to reindex on\" could lead users to think that\n> indexes already existent in the given tablespace will be reindexed.\n> In other words, the option might be misconstrued as a way to specify\n> all the indexes you want reindexed. Whatever you do here, beware\n> that you are using similar language in the --help, so consider\n> changing that, too.\n\nOK. 
I have switched the docs to \"Specifies the tablespace where\nindexes are rebuilt\" and --help to \"tablespace where indexes are\nrebuilt\".\n\nI have also removed the assertions based on the version number of the\nbackend, based on Daniel's input sent upthread.\n\nWhat do you think?\n--\nMichael",
"msg_date": "Tue, 2 Mar 2021 15:04:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "> On 2 Mar 2021, at 07:04, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I have also removed the assertions based on the version number of the\n> backend, based on Daniel's input sent upthread.\n> \n> What do you think?\n\n+1, no objections from me after a readthrough of this version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 10:01:43 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "\n\n> On Mar 1, 2021, at 10:04 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Mar 01, 2021 at 10:12:54AM -0800, Mark Dilger wrote:\n>> Your check verifies that reindexing a system table on a new\n>> tablespace fails, but does not verify what the failure was. I\n>> wonder if you might want to make it more robust, something like:\n> \n> I was not sure if that was worth adding or not, but no objections to\n> do so. So updated the patch to do that.\n> \n> On Mon, Mar 01, 2021 at 09:47:57AM -0800, Mark Dilger wrote:\n>> I recognize that you are borrowing some of that from\n>> src/bin/pgbench/t/001_pgbench_with_server.pl, but I don't find\n>> anything intuitive about the name \"ets\" and would rather not see\n>> that repeated. There is nothing in TestLib::perl2host to explain\n>> this name choice, nor in pgbench's test, nor yours.\n> \n> Okay, I have made the variable names more explicit.\n> \n>> but I don't see what tests of the --concurrently option have to do\n>> with this patch. I'm not saying there is anything wrong with this\n>> change, but it seems out of place. Am I missing something?\n> \n> While hacking on this feature I have just bumped into this separate\n> issue, where the same test just gets repeated twice. I have just\n> applied an independent patch to take care of this problem separately,\n> and backpatched it down to 12 where this got introduced.\n> \n> On Mon, Mar 01, 2021 at 09:26:03AM -0800, Mark Dilger wrote:\n>> I think the language \"to reindex on\" could lead users to think that\n>> indexes already existent in the given tablespace will be reindexed.\n>> In other words, the option might be misconstrued as a way to specify\n>> all the indexes you want reindexed. Whatever you do here, beware\n>> that you are using similar language in the --help, so consider\n>> changing that, too.\n> \n> OK. 
I have switched the docs to \"Specifies the tablespace where\n> indexes are rebuilt\" and --help to \"tablespace where indexes are\n> rebuilt\".\n> \n> I have also removed the assertions based on the version number of the\n> backend, based on Daniel's input sent upthread.\n> \n> What do you think?\n\nLooks good.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 06:49:47 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add --tablespace option to reindexdb"
},
{
"msg_contents": "On Tue, Mar 02, 2021 at 10:01:43AM +0100, Daniel Gustafsson wrote:\n> +1, no objections from me after a readthrough of this version.\n\nThanks, Daniel and Mark. I have applied v2 from upthread, then.\n--\nMichael",
"msg_date": "Wed, 3 Mar 2021 10:49:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add --tablespace option to reindexdb"
}
] |
[
{
"msg_contents": "Hi,\n\n\n\nI am Yixin Shi, a junior student majoring in Computer Science at University\nof Michigan. I notice that the project ideas for GSoC 2021 have been posted\non your webpage and I am quite interested in two of them. I really wish to\ntake part in the project *Make plsample a more complete procedural language\nhandler example (2021)* considering both my interest and ability. The\npotential mentor for this Project is *Mark Wong*. I am also interested in\nthe project *Improve PostgreSQL Regression Test Coverage (2021)* and the\npotential mentor is *Andreas Scherbaum* and *Stephen Frost*.\n\n\n\nI would like to learn more about these two projects but failed to contact\nthe mentors. How can I contact them? Also, I really hope to join the\nproject. Are there any suggestions on application?\n\n\n\nBest wishes,\n\n\n\nYixin Shi\n\nHi,\n \nI am Yixin Shi, a junior student majoring\nin Computer Science at University of Michigan. I notice that the project ideas\nfor GSoC 2021 have been posted on your webpage and I am quite interested in two\nof them. I really wish to take part in the project Make plsample a more\ncomplete procedural language handler example (2021) considering both my\ninterest and ability. The potential mentor for this Project is Mark Wong.\nI am also interested in the project Improve PostgreSQL Regression Test\nCoverage (2021) and the potential mentor is Andreas Scherbaum and Stephen\nFrost. \n \nI would like to learn more about these\ntwo projects but failed to contact the mentors. How can I contact them? Also, I really hope to join the project. Are there any suggestions on application?\n \nBest wishes,\n \nYixin Shi",
"msg_date": "Fri, 26 Feb 2021 15:56:17 +0800",
"msg_from": "Yixin Shi <esing@umich.edu>",
"msg_from_op": true,
"msg_subject": "Interest in GSoC 2021 Projects"
},
{
"msg_contents": "Greetings!\n\n* Yixin Shi (esing@umich.edu) wrote:\n> I am Yixin Shi, a junior student majoring in Computer Science at University\n> of Michigan. I notice that the project ideas for GSoC 2021 have been posted\n> on your webpage and I am quite interested in two of them. I really wish to\n> take part in the project *Make plsample a more complete procedural language\n> handler example (2021)* considering both my interest and ability. The\n> potential mentor for this Project is *Mark Wong*. I am also interested in\n> the project *Improve PostgreSQL Regression Test Coverage (2021)* and the\n> potential mentor is *Andreas Scherbaum* and *Stephen Frost*.\n> \n> I would like to learn more about these two projects but failed to contact\n> the mentors. How can I contact them? Also, I really hope to join the\n> project. Are there any suggestions on application?\n\nGlad to hear you're interested- I'm one of the mentors you mention.\n\nYou've found me, but I do want to point out that I'm also listed with my\nemail address at: https://wiki.postgresql.org/wiki/GSoC ; the top-level\nPostgreSQL GSoC page (I've also added a link to that from the GSoC 2021\nproject ideas page).\n\nNote that PostgreSQL hasn't yet been formally accepted into the\nGSoC 2021 program. Google has said they intend to announce the chosen\norganizations on March 9th. Of course, you're welcome to start working\non your application even ahead of that date if you wish to do so.\n\nGoogle provides a lot of useful information for prospective students\nhere: https://google.github.io/gsocguides/student/ including a lot of\ninformation about how to write a solid proposal. 
For the regression\ntest coverage, the first step would be to review, as suggested in the\nproject idea, the current state of coverage which you can see here:\nhttps://coverage.postgresql.org/ You'd also want to pull PG down and\nmake sure that you have an environment where you can build PG and run\nthe regression tests and build the coverage report yourself. With that\ndone, you can consider what areas you'd like to tackle, perhaps starting\nout with some of the simpler cases such as extensions that maybe don't\nhave any test coverage today, or the various tools which have zero or\nvery little.\n\nI've also CC'd Mark so he can provide some comments regarding plsample.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 26 Feb 2021 13:11:46 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Interest in GSoC 2021 Projects"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 26, 2021 at 03:56:17PM +0800, Yixin Shi wrote:\n> Hi,\n> \n> \n> \n> I am Yixin Shi, a junior student majoring in Computer Science at University\n> of Michigan. I notice that the project ideas for GSoC 2021 have been posted\n> on your webpage and I am quite interested in two of them. I really wish to\n> take part in the project *Make plsample a more complete procedural language\n> handler example (2021)* considering both my interest and ability. The\n> potential mentor for this Project is *Mark Wong*. I am also interested in\n> the project *Improve PostgreSQL Regression Test Coverage (2021)* and the\n> potential mentor is *Andreas Scherbaum* and *Stephen Frost*.\n> \n> \n> \n> I would like to learn more about these two projects but failed to contact\n> the mentors. How can I contact them? Also, I really hope to join the\n> project. Are there any suggestions on application?\n\nThe idea for plsample is to continue providing working code and\ndocumentation examples so that plsample can be used as a template for\ncreating additional language handlers.\n\nDepending on the individual's comfort level, some effort may need to be\nput into studying the current handlers for ideas on how to implement the\nlacking functionality in plsample.\n\nDoes that help?\n\nRegards,\nMark\n\n\n",
"msg_date": "Sat, 27 Feb 2021 00:58:42 +0000",
"msg_from": "Mark Wong <markwkm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Interest in GSoC 2021 Projects"
}
] |
[
{
"msg_contents": "Hi.\n\nI created a patch to remove ecnt which is a member variable of CState.\nThis variable is incremented in some places, but it's not used for any \npurpose.\nAlso, the current pgbench's client abandons processing after hitting \nerror, so this variable is no need, I think.\n\nRegards\n--\nKota Miyake",
"msg_date": "Fri, 26 Feb 2021 17:39:31 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] pgbench: Remove ecnt, a member variable of CState"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 05:39:31PM +0900, miyake_kouta wrote:\n> Also, the current pgbench's client abandons processing after hitting error,\n> so this variable is no need, I think.\n\nAgreed. Its last use was in 12788ae, as far as I can see. So let's\njust cleanup that.\n--\nMichael",
"msg_date": "Fri, 26 Feb 2021 20:39:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Remove ecnt, a member variable of CState"
},
{
"msg_contents": "On 2021-Feb-26, Michael Paquier wrote:\n\n> On Fri, Feb 26, 2021 at 05:39:31PM +0900, miyake_kouta wrote:\n> > Also, the current pgbench's client abandons processing after hitting error,\n> > so this variable is no need, I think.\n> \n> Agreed. Its last use was in 12788ae, as far as I can see. So let's\n> just cleanup that.\n\n+1\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Fri, 26 Feb 2021 16:36:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Remove ecnt, a member variable of CState"
},
{
"msg_contents": "On Fri, Feb 26, 2021 at 04:36:41PM -0300, Alvaro Herrera wrote:\n> +1\n\nThanks, done.\n--\nMichael",
"msg_date": "Sun, 28 Feb 2021 08:06:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: Remove ecnt, a member variable of CState"
},
{
"msg_contents": "2021-02-28 08:06, Michael Paquier wrote:\n> On Fri, Feb 26, 2021 at 04:36:41PM -0300, Alvaro Herrera wrote:\n>> +1\n> \n> Thanks, done.\n> --\n> Michael\n\nThanks for reviewing and committing this patch!\n\nRegards\n--\nKota Miyake\n\n\n",
"msg_date": "Mon, 01 Mar 2021 11:32:11 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pgbench: Remove ecnt, a member variable of CState"
}
] |
[
{
"msg_contents": "We have highly evolved systems such as SQL, HTTP, HTML, file formats or\nhigh level programming languages such as Java or PHP that allow us to\nprogram many things with little code. Even so a lot of effort is \ninvested in the integration of these systems. To try to reduce this\nproblem libraries and frameworks that help in some ways are created but\nthe integration is not complete.\n\nIt is well known that most of the time when you try to create something\nto integrate several incompatible systems what you get in the end is to\nhave another incompatible system :). Still I think the integration\nbetween the systems mentioned above is something very important that\ncan mean a great step in the evolution of computing and worth a try.\n\nTo explore how this integration can be I have created a framework that\nI have called NextTypes and that its main objective is the integration\nof data types. For me it is something very illogical that something as\nbasic as a 16 bits integer receives a different name in each of the\nsystems (\"smallint\", \"short\", \"number\") and can also be signed or\nunsigned. In any moment, due to a mistake from the programmer, the\nnumber of the programming language does not fit in the database column.\nBesides these names are little indicative of its characteristics, it\nwould be clearer for example to use \"int16\". Whatever names are chosen\nthe most important thing is to use in all systems the same names for\ntypes of the same characteristics.\n\nAlso there is no standard system for defining composite data types of\nprimitive types and other composite types. From an HTML form to a SQL\ntable or from one application to another are required multiple\ntransformations of the data. Lack of integration also lowers the level\nof abstraction, making it necessary to do lots of low level stuff for\nsystems to fit.\n\nNextTypes at this moment is nothing more than another incompatible\nsystem with the others. 
It simply integrates them quite a bit and\nraises the level of abstraction. But what I would like is that the\nthings that compose NextTypes were included in the systems it\nintegrates.\n\nFinally I would like to list some examples of improvements in database\nmanagers, SQL, HTTP, HTML, browsers and programming languages that\nwould help the integration and elevation of the level of abstraction.\nSome of these enhancements are already included in NextTypes and other\nframeworks.\n\n\nSQL\n---\n\n- Custom metadata in tables and columns.\n- Date of creation and modification of the rows.\n- Date of creation and modification of the definition of the tables.\n- Use of table and column names in prepared statements.\n\n Example: select # from # where # = ?;\n\n- Use of arrays in prepared statements.\n\n Example: select # from article where id in (?);\n\n # = author,title\n ? = 10,24,45\n\n- Standardization of ranges of valid values and resolution for date,\ntime and datetime types in database managers an HTML time element.\n\n\nPostgreSQL\n----------\n\n- Facilitate access to the definition of full text search indexes with\na function to parse \"pg_index.indexprs\" column.\n\n\nOther database managers\n-----------------------\n\n- Allow transactional DDL, deferrable constraints and composite types.\n\n\nJDBC\n----\n\n- High level methods that allow queries with the execution of a\nsingle method.\n\n Example: Tuple [] tuples = query(\"select author,title from article\nwhere id in (?), ids);\n\n- Integration with java.time data types.\n\n\nHTTP - Servers\n--------------\n\n- Processing of arrays of elements composed of several parameters.\n\n fields:0:type = string\n fields:0:name = title\n fields:0:parameters = 250\n fields:0:not_null = true\n \n fields:1:type = string\n fields:1:name = author\n fields:1:parameters = 250\n fields:1:not_null = true\n\n Another possibility is to generate in the browser arrays of JSON\nobjects from the forms. 
\n \n \"fields\": [\n {\n \"type\": \"string\",\n \"name\": \"title\",\n \"parameters\": 250,\n \"not_null\": true\n }, {\n \"type\": \"string\",\n \"name\": \"author\",\n \"parameters\": 250,\n \"not_null\": true\n }\n ]\n\n\nXML/HTML - BROWSER\n------------------\n\n- Input elements for different types of numbers with min and max\nvalues: 16 bits integer, 32 bits integer, 32 bits float and 64 bits\nfloat.\n\n- Input elements for images, audios and videos with preview.\n\n- Timezone input element.\n\n- Boolean input element with \"true\" and \"false\" values.\n\n- Null value in file inputs.\n\n- Clear button in file inputs like in date and time inputs.\n\n- Show size in file inputs.\n\n- Extension of the DOM API with high level and chainable methods.\n\n Example: paragraph.appendElement(\"a\").setAttribute(\"href\",\n\"/article\");\n\n- Change of the \"action\" parameter of the forms to \"target\" to indicate\nthe URL where to execute the action. The \"action\" parameter is moved to\nthe different buttons on the form and allows executing a different\naction with each of the the buttons.\n\n Example:\n\n <form target=\"/article\">\n <button action=\"delete\">Delete</button>\n <button action=\"export\">Export</button>\n </form>\n\n- \"select\" elements that change a parameter of the current URL.\n\n Example:\n <select url-parameter=\"lang\"/>\n <option>en</option>\n <option>es</option>\n\n URL = https://demo.nexttypes.com/?lang=en\n\n- \"content-type\" attribute in links and context menu in the browser to\nopen links with external applications using WEBDAV, similar to a file\nmanager.\n\n Example: <a href=\"\" content-\ntype=\"application/vnd.oasis.opendocument.text\">\n\n ------------------------------\n | Open link with ... 
|\n | Open link with LibreOffice |\n\n- Background submission of forms without using XMLHttpRequest,\ndisplay of result in dialog window or file download, and execution of a\nJavascript function for subsequent actions.\n\n Example: <form background show-progress callback=function()>\n\n- Dialog with progress indicator of form submission. Must show total\nsize, transmitted data and average speed. Possibility to pause or\ncancel the submission.\n\n- Dynamic datalist with searchable JSON source. Over data source URL is\nadded a \"search\" parameter with input value.\n\n Example:\n <input list=\"article-list\" name=\"article\" type=\"text\" />\n <datalist id=\"article-list\"\nsrc=\"/article?lang=en&view=json&names\" />\n\n Example query URL: \"/article?lang=en&view=json&names&search=Ne\"\n\n- Same appearance and operation of inputs with dynamic datalist in all\nbrowsers.\n\n- Option tags with icons.\n\n Example: <option icon=\"/icons/save.svg\">Save</option>\n\n- Tabs, Tree, etc widgets\n\n- Mechanism to close HTTPS sessions initiated with client certificate\nauthentication.\n\n\nJAVA\n----\n\n- Subclasses of String or some system of variants of String that allows\nassigning a regular expression or class that limits its valid values\nto avoid code injection or values that will crash the system.\n\n Example: String:Type = \"[a-z0-9 _] +\"; Or String:Type = TypeChecks;\n\n String:Type type = \"article_language\"; -> correct\n String:Type type = \"Article-Language\"; -> error\n\n\nWe can talk about the characteristics of each system in its mailing\nlist or group. 
For the general topic of system integration I have\ncreated a discussion in the github project.\n\nhttps://github.com/alejsanc/nexttypes/discussions/6\n\nThis email has been sent to the following mailing lists and groups:\n\npgsql-hackers@lists.postgresql.org\npublic-html@w3.org\njdbc-spec-discuss@openjdk.java.net\njdk-dev@openjdk.java.net\nmaria-developers@lists.launchpad.net\nhttps://mysqlcommunity.slack.com/archives/C8R1336M7\ndev-platform@lists.mozilla.org\n\nBest regards.\nAlejandro Sánchez.\n\n\n\n",
"msg_date": "Fri, 26 Feb 2021 20:09:20 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Systems Integration and Raising of the Abstraction Level"
}
] |
[
{
"msg_contents": "When looking at disallowing SSL compression I found the parameter \"authtype\"\nwhich was deprecated in commit d5bbe2aca55bc8 on January 26 1998. While I do\nthink there is a case to be made for the backwards compatibility having run its\ncourse on this one, shouldn't we at least remove the environment variable and\ndefault compiled fallback for it to save us a getenv call when filling in the\noption defaults?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 26 Feb 2021 21:02:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "authtype parameter in libpq"
},
{
"msg_contents": "On 26.02.21 21:02, Daniel Gustafsson wrote:\n> When looking at disallowing SSL compression I found the parameter \"authtype\"\n> which was deprecated in commit d5bbe2aca55bc8 on January 26 1998. While I do\n> think there is a case to be made for the backwards compatibility having run its\n> course on this one, shouldn't we at least remove the environment variable and\n> default compiled fallback for it to save us a getenv call when filling in the\n> option defaults?\n\nThe argument of avoiding unnecessary getenv() calls is sensible. PGTTY \nshould get the same treatment.\n\nBut I tend to think we should remove them both altogether (modulo ABI \nand API preservation).\n\n\n",
"msg_date": "Wed, 3 Mar 2021 14:47:38 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: authtype parameter in libpq"
},
{
"msg_contents": "> On 3 Mar 2021, at 14:47, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 26.02.21 21:02, Daniel Gustafsson wrote:\n>> When looking at disallowing SSL compression I found the parameter \"authtype\"\n>> which was deprecated in commit d5bbe2aca55bc8 on January 26 1998. While I do\n>> think there is a case to be made for the backwards compatibility having run its\n>> course on this one, shouldn't we at least remove the environment variable and\n>> default compiled fallback for it to save us a getenv call when filling in the\n>> option defaults?\n> \n> The argument of avoiding unnecessary getenv() calls is sensible. PGTTY should get the same treatment.\n\nThe reason I left PGTTY alone is that we still have a way to extract the value\nset via PQtty(), so removing one or two ways of setting it while at the same\ntime allowing the value to be read back seemed inconsistent regardless of its\nobsolescence.\n\nauthtype is completely dead in terms of reading back the value, to the point of\nit being a memleak if it indeed was found in as an environment variable.\n\n> But I tend to think we should remove them both altogether (modulo ABI and API preservation).\n\nNo disagreement from me, the attached takes a stab at that to get an idea what\nit would look like. PQtty is left to maintain API stability but the parameters\nare removed from the conn object as thats internal to libpq.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 4 Mar 2021 16:06:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: authtype parameter in libpq"
},
{
"msg_contents": "On 04.03.21 16:06, Daniel Gustafsson wrote:\n> authtype is completely dead in terms of reading back the value, to the point of\n> it being a memleak if it indeed was found in as an environment variable.\n> \n>> But I tend to think we should remove them both altogether (modulo ABI and API preservation).\n> \n> No disagreement from me, the attached takes a stab at that to get an idea what\n> it would look like. PQtty is left to maintain API stability but the parameters\n> are removed from the conn object as thats internal to libpq.\n\nThis looks like this right idea to me.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 10:57:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: authtype parameter in libpq"
},
{
"msg_contents": "On 08.03.21 10:57, Peter Eisentraut wrote:\n> On 04.03.21 16:06, Daniel Gustafsson wrote:\n>> authtype is completely dead in terms of reading back the value, to the \n>> point of\n>> it being a memleak if it indeed was found in as an environment variable.\n>>\n>>> But I tend to think we should remove them both altogether (modulo ABI \n>>> and API preservation).\n>>\n>> No disagreement from me, the attached takes a stab at that to get an \n>> idea what\n>> it would look like. PQtty is left to maintain API stability but the \n>> parameters\n>> are removed from the conn object as thats internal to libpq.\n> \n> This looks like this right idea to me.\n\ncommitted, with some tweaks\n\n\n",
"msg_date": "Tue, 9 Mar 2021 15:51:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: authtype parameter in libpq"
}
] |
[
{
"msg_contents": "Hello,\n\nCommit 82ebbeb0 added a workaround for (already in 2017) ancient Linux\nkernels with no EPOLL_CLOEXEC. I don't see any such systems in the\nbuild farm today (and if there is one hiding in there somewhere, it's\nwell past time to upgrade). I'd like to rip that code out, because\nI'm about to commit some new code that uses another 2.6.17+\nXXX_CLOEXEC flag, and it'd be silly to have to write new workaround\ncode for that too, and a contradiction to have fallback code in one\nplace but not another. Any objections?",
"msg_date": "Sat, 27 Feb 2021 12:10:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove latch.c workaround for Linux < 2.6.27"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Commit 82ebbeb0 added a workaround for (already in 2017) ancient Linux\n> kernels with no EPOLL_CLOEXEC. I don't see any such systems in the\n> build farm today (and if there is one hiding in there somewhere, it's\n> well past time to upgrade). I'd like to rip that code out, because\n> I'm about to commit some new code that uses another 2.6.17+\n> XXX_CLOEXEC flag, and it'd be silly to have to write new workaround\n> code for that too, and a contradiction to have fallback code in one\n> place but not another. Any objections?\n\nI believe we've dropped support for RHEL5, so no objection here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Feb 2021 22:45:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove latch.c workaround for Linux < 2.6.27"
},
{
"msg_contents": "On 27 February 2021 01:10:23 EET, Thomas Munro <thomas.munro@gmail.com> wrote:\n>Hello,\n>\n>Commit 82ebbeb0 added a workaround for (already in 2017) ancient Linux\n>kernels with no EPOLL_CLOEXEC. I don't see any such systems in the\n>build farm today (and if there is one hiding in there somewhere, it's\n>well past time to upgrade). I'd like to rip that code out, because\n>I'm about to commit some new code that uses another 2.6.17+\n>XXX_CLOEXEC flag, and it'd be silly to have to write new workaround\n>code for that too, and a contradiction to have fallback code in one\n>place but not another. Any objections?\n\nWhat happens if you try to try to compile or run on such an ancient kernel? Does it fall back to something else? Can you still make it work with different configure options?\n\nI'm just curious, not objecting.\n\n- Heikki\n\n\n",
"msg_date": "Sat, 27 Feb 2021 10:01:27 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Remove latch.c workaround for Linux < 2.6.27"
},
{
"msg_contents": "On Sat, Feb 27, 2021 at 9:01 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 27 February 2021 01:10:23 EET, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >Commit 82ebbeb0 added a workaround for (already in 2017) ancient Linux\n> >kernels with no EPOLL_CLOEXEC. I don't see any such systems in the\n> >build farm today (and if there is one hiding in there somewhere, it's\n> >well past time to upgrade). I'd like to rip that code out, because\n> >I'm about to commit some new code that uses another 2.6.17+\n> >XXX_CLOEXEC flag, and it'd be silly to have to write new workaround\n> >code for that too, and a contradiction to have fallback code in one\n> >place but not another. Any objections?\n>\n> What happens if you try to try to compile or run on such an ancient kernel? Does it fall back to something else? Can you still make it work with different configure options?\n>\n> I'm just curious, not objecting.\n\nWith this patch, I guess you'd have to define WAIT_USE_POLL. I\nsuppose we could fall back to that automatically if EPOLL_CLOEXEC\nisn't defined, if anyone thinks that's worth bothering with.\n\nEven though Linux < 2.6.17 is not relevant, one thing I have wondered\nabout is what other current OSes might have copied Linux's epoll API\nand get me into trouble by being incomplete. So far I have found only\nillumos, based on googling about OSes that are in our build farm (my\nidea of what OSes we support in some sense), and BF animal damselfly's\nconfigure output seems to confirm that it does have it. Googling\ntells me that it does seem to have the full modern version of the API,\nso fingers crossed (looks like it also has signalfd() too, which is\ninteresting for my latch optimisation patch which assumes that if you\nhave epoll you also have signalfd).\n\n\n",
"msg_date": "Sat, 27 Feb 2021 21:30:09 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove latch.c workaround for Linux < 2.6.27"
},
{
"msg_contents": "On Sat, Feb 27, 2021 at 9:30 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Feb 27, 2021 at 9:01 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > What happens if you try to try to compile or run on such an ancient kernel? Does it fall back to something else? Can you still make it work with different configure options?\n> >\n> > I'm just curious, not objecting.\n>\n> With this patch, I guess you'd have to define WAIT_USE_POLL. I\n> suppose we could fall back to that automatically if EPOLL_CLOEXEC\n> isn't defined, if anyone thinks that's worth bothering with.\n\nI thought about doing:\n\n /* don't overwrite manual choice */\n-#elif defined(HAVE_SYS_EPOLL_H)\n+#elif defined(HAVE_SYS_EPOLL_H) && defined(EPOLL_CLOEXEC)\n #define WAIT_USE_EPOLL\n\n... but on reflection, I don't think we should expend energy on a\ndesupported OS vintage that isn't present in our build farm, at least\nnot without a reasonable field complaint; I wouldn't even know if it\nworked.\n\nSo, pushed without that.\n\n\n",
"msg_date": "Mon, 1 Mar 2021 11:30:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove latch.c workaround for Linux < 2.6.27"
}
] |
[
{
"msg_contents": "Hi!\n\nThere is a lot of different compression threads nearby. And that's great!\nEvery few bytes going to IO still deserve to be compressed.\n\nCurrently, we have a pglz compression for WAL full page images. As shown in [0] this leads to high CPU usage in pglz when wal_compression is on. Swapping pglz with lz4 increases pgbench tps by 21% on my laptop (if wal_compression is enabled).\nSo I think it worth to propose a patch to make wal_compression_method = {\"pglz\", \"lz4\", \"zlib\"}. Probably, \"zstd\" can be added to the list.\n\nEven better option would be to teach WAL compression to compress everything, not only FPIs. But this is a big orthogonal chunk of work.\n\nAttached is a draft taking CompressionId from \"custom compression methods\" patch and adding zlib to it.\nI'm not sure where to add tests that check recovery with different methods. It seems to me that only TAP tests are suitable for this.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/323B1B01-DA42-419F-A99C-23E2C162D53B%40yandex-team.ru",
"msg_date": "Sat, 27 Feb 2021 12:43:52 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Different compression methods for FPI"
},
{
"msg_contents": "On Sat, Feb 27, 2021 at 12:43:52PM +0500, Andrey Borodin wrote:\n> So I think it worth to propose a patch to make wal_compression_method = {\"pglz\", \"lz4\", \"zlib\"}. Probably, \"zstd\" can be added to the list.\n> Attached is a draft taking CompressionId from \"custom compression methods\" patch and adding zlib to it.\n\nThanks for submitting it.\n\nDoes this need to patch ./configure{,.ac} and Solution.pm for HAVE_LIBLZ4 ?\nI suggest to also include an 0002 patch (not for commit) which changes to use a\nnon-default compression, to exercise this on the CIs - linux and bsd\nenvironments now have liblz4 installed, and for windows you can keep \"undef\".\n\nDaniil had a patch to add src/common/z_stream.c:\nhttps://github.com/postgrespro/libpq_compression/blob/0a9c70d582cd4b1ef60ff39f8d535f6e800bd7d4/src/common/z_stream.c\nhttps://www.postgresql.org/message-id/470E411E-681D-46A2-A1E9-6DE11B5F59F3@yandex-team.ru\n\nYour patch looks fine, but I wonder if we should first implement a generic\ncompression API. Then, the callers don't need to have a bunch of #ifdef.\nIf someone calls zs_create() for an algorithm which isn't enabled at compile\ntime, it throws the error at a lower level.\n\nThat also allows a central place to do things like handle options (compression\nlevel, and things like zstd --long, --rsyncable, etc).\n\nIn some cases there's an argument that the compression ID should be globally\ndefined constant, not like a dynamic \"plugable\" OID. That may be true for the\nlibpq compression, WAL compression, and pg_dump, since there's separate\n\"producer\" and \"consumer\". I think if there's \"pluggable\" compression (like\nthe TOAST patch), then it may have to map between the static or dynamic OIDs\nand the global compression ID.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 28 Feb 2021 17:08:17 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 05:08:17PM -0600, Justin Pryzby wrote:\n> Does this need to patch ./configure{,.ac} and Solution.pm for HAVE_LIBLZ4 ?\n> I suggest to also include an 0002 patch (not for commit) which changes to use a\n> non-default compression, to exercise this on the CIs - linux and bsd\n> environments now have liblz4 installed, and for windows you can keep \"undef\".\n\nGood idea. But are you sure that lz4 is available in the CF bot\nenvironments?\n\n> Your patch looks fine, but I wonder if we should first implement a generic\n> compression API. Then, the callers don't need to have a bunch of #ifdef.\n> If someone calls zs_create() for an algorithm which isn't enabled at compile\n> time, it throws the error at a lower level.\n\nYeah, the patch feels incomplete with its footprint in xloginsert.c\nfor something that could be available in src/common/ like pglz,\nparticularly knowing that you will need to have this information \navailable to frontend tools, no?\n\n> In some cases there's an argument that the compression ID should be globally\n> defined constant, not like a dynamic \"plugable\" OID. That may be true for the\n> libpq compression, WAL compression, and pg_dump, since there's separate\n> \"producer\" and \"consumer\". I think if there's \"pluggable\" compression (like\n> the TOAST patch), then it may have to map between the static or dynamic OIDs\n> and the global compression ID.\n\nThis declaration should be frontend-aware. As presented, the patch\nwould enforce zlib or lz4 on top of pglz, but I guess that it would be\nmore user-friendly to complain when attempting to set up\nwal_compression_method (why not just using wal_compression?) to an\nunsupported option rather than doing it after-the-fact for the first\nfull page generated once the new parameter value is loaded.\n\nThis stuff could just add tests in src/test/recovery/. See for\nexample src/test/ssl and with_ssl to see how conditional tests happen\ndepending on the configure options.\n\nNot much a fan either of assuming that it is just fine to add one byte\nto XLogRecordBlockImageHeader for the compression_method field.\n--\nMichael",
"msg_date": "Mon, 1 Mar 2021 13:57:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 01:57:12PM +0900, Michael Paquier wrote:\n> On Sun, Feb 28, 2021 at 05:08:17PM -0600, Justin Pryzby wrote:\n> > Does this need to patch ./configure{,.ac} and Solution.pm for HAVE_LIBLZ4 ?\n> > I suggest to also include an 0002 patch (not for commit) which changes to use a\n> > non-default compression, to exercise this on the CIs - linux and bsd\n> > environments now have liblz4 installed, and for windows you can keep \"undef\".\n> \n> Good idea. But are you sure that lz4 is available in the CF bot\n> environments?\n\nYes, see here\nhttps://www.postgresql.org/message-id/flat/20210119205643.GA1941%40telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 28 Feb 2021 23:03:27 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> On 1 Mar 2021, at 10:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\nJustin, Michael, thanks for comments!\n\nAs far as I understood TODO list for the patch looks as follows:\n\n1. Reuse compression API of other patches. But which one? Or should I invent new one? \"compressamapi.h\" from \"custom compression methods\" bothers only with varlena <-> varlena conversions, and only for lz4. And it is \"access method\" after all, residing in backend...\nZStream looks good, but it lacks OID identification for compression methods and lz4.\n2. Store OID in FPIs instead of 1-byte CompressionId. Make sure frontend is able to recognize builtin compression OIDs.\n3. Complain if wal_compression_method is set to lib which is not compiled in.\n4. Add tests for different WAL compression methods similar to src/test/ssl\n\nDid I miss something?\nI would appreciate a help with item 1, I do not know how to choose starting point.\n\n> wal_compression_method (why not just using wal_compression?)\n\nI hope one day we will compress all WAL, not just FPIs. Advanced archive management tools already do so, why not compress it in walwriter?\nWhen this will be implemented, we could have wal_compression = {off, fpi, all}.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 6 Mar 2021 12:29:14 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 12:29:14PM +0500, Andrey Borodin wrote:\n> > On 1 Mar 2021, at 10:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> Justin, Michael, thanks for comments!\n> \n> As far as I understood TODO list for the patch looks as follows:\n\nYour patch can be simplified some, and then the only ifdef are in two short\nfunctions. Moving the compression calls to another function/file is hardly\nworth it, and anyone that implements a generic compression API could refactor\neasily, if it's a win. So I don't want to impose the burden on your small\npatch of setting up the compression API for everyone else's patches. Since\nthis is non-streaming compression, the calls are trivial.\n\nOne advantage of a generic API is that it's a good place to handle things like\ncompression options, like zstd:9 or zstd:3,rsyncable (I am not suggesting this\nsyntax).\n\nToday, I re-sent an Dillip's patch with a change to use pkg-config for liblz4,\nand it now also compiles on mac, so I used those changes to configure.ac (using\npkg-config) and src/tools/msvc/Solution.pm, and changed HAVE_LIBLZ4 to USE_LZ4.\n\nThis also resolves conflict with 32fd2b57d7f64948e649fc205c43f007762ecaac.\n\nwal_compression_method is PGC_SIGHUP, not SUSET.\n\nI think that the COMPRESSION_ID should have a prefix like XLOG_* - but didn't\ndo it here.\n\nDoes this patch need to bump XLOG_PAGE_MAGIC ?\n\nwal_compression_options: I made this conditional compilation, so the GUC\nmachinery rejects methods which aren't supported. That means that xloginsert\ndoesn't need to check for unsupported methods. sync_method_options also uses\nconditional compilation, but a GUC \"check\" hook would be more friendly, since\nit could distinguish between \"not supported\" and \"valid\":\n|ERROR: invalid value for parameter \"wal_compression_method\": \"lz4\"\n\n-- \nJustin",
"msg_date": "Fri, 12 Mar 2021 01:45:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 01:45:47AM -0600, Justin Pryzby wrote:\n> On Sat, Mar 06, 2021 at 12:29:14PM +0500, Andrey Borodin wrote:\n> > > On 1 Mar 2021, at 10:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > \n> > Justin, Michael, thanks for comments!\n> > \n> > As far as I understood TODO list for the patch looks as follows:\n> \n> Your patch can be simplified some, and then the only ifdef are in two short\n> functions. Moving the compression calls to another function/file is hardly\n> worth it, and anyone that implements a generic compression API could refactor\n> easily, if it's a win. So I don't want to impose the burden on your small\n> patch of setting up the compression API for everyone else's patches. Since\n> this is non-streaming compression, the calls are trivial.\n> \n> One advantage of a generic API is that it's a good place to handle things like\n> compression options, like zstd:9 or zstd:3,rsyncable (I am not suggesting this\n> syntax).\n> \n> Today, I re-sent an Dillip's patch with a change to use pkg-config for liblz4,\n> and it now also compiles on mac, so I used those changes to configure.ac (using\n> pkg-config) and src/tools/msvc/Solution.pm, and changed HAVE_LIBLZ4 to USE_LZ4.\n\nUpdated patch with a minor fix to configure.ac to avoid warnings on OSX.\nAnd 2ndary patches from another thread to allow passing recovery tests.\nRenamed to WAL_COMPRESSION_*\nSplit LZ4 support to a separate patch and support zstd. These show that\nchanges needed for a new compression method have been minimized, although not\nyet moved to a separate, abstracted compression/decompression function.\n\nOn Mon, Mar 01, 2021 at 01:57:12PM +0900, Michael Paquier wrote:\n> > Your patch looks fine, but I wonder if we should first implement a generic\n> > compression API. Then, the callers don't need to have a bunch of #ifdef.\n> > If someone calls zs_create() for an algorithm which isn't enabled at compile\n> > time, it throws the error at a lower level.\n> \n> Yeah, the patch feels incomplete with its footprint in xloginsert.c\n> for something that could be available in src/common/ like pglz,\n> particularly knowing that you will need to have this information \n> available to frontend tools, no?\n\nMichael: what frontend tools did you mean ?\npg_rewind? This may actually be okay as-is, since it uses symlinks:\n\n$ ls -l src/bin/pg_rewind/xlogreader.c\nlrwxrwxrwx 1 pryzbyj pryzbyj 48 Mar 12 17:48 src/bin/pg_rewind/xlogreader.c -> ../../../src/backend/access/transam/xlogreader.c\n\n> Not much a fan either of assuming that it is just fine to add one byte\n> to XLogRecordBlockImageHeader for the compression_method field.\n\nWhat do you mean? Are you concerned about alignment, or the extra width, or??\n\nThese two patches are a prerequisite for this patch to progress:\n * Run 011_crash_recovery.pl with wal_level=minimal\n * Make sure published XIDs are persistent\n\nI don't know if anyone will consider this patch for v14 - if not, it should be\nset to v15 and revisit in a month. \n\n-- \nJustin",
"msg_date": "Fri, 12 Mar 2021 19:28:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> On 13 Mar 2021, at 06:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> \n> Updated patch with a minor fix to configure.ac to avoid warnings on OSX.\n> And 2ndary patches from another thread to allow passing recovery tests.\n> Renamed to WAL_COMPRESSION_*\n> Split LZ4 support to a separate patch and support zstd. These show that\n> changes needed for a new compression method have been minimized, although not\n> yet moved to a separate, abstracted compression/decompression function.\n\nThanks! Awesome work!\n\n> These two patches are a prerequisite for this patch to progress:\n> * Run 011_crash_recovery.pl with wal_level=minimal\n> * Make sure published XIDs are persistent\n> \n> I don't know if anyone will consider this patch for v14 - if not, it should be\n> set to v15 and revisit in a month. \n\nI want to note, that fixes for 011_crash_recovery.pl are not strictly necessary for this patch set.\nThe problem in tests arises if we turn on wal_compression, absolutely independently from wal compression method.\nWe turn on wal_compression in this test only for CI purposes.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 13 Mar 2021 20:48:33 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 08:48:33PM +0500, Andrey Borodin wrote:\n> > On 13 Mar 2021, at 06:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Updated patch with a minor fix to configure.ac to avoid warnings on OSX.\n> > And 2ndary patches from another thread to allow passing recovery tests.\n> > Renamed to WAL_COMPRESSION_*\n> > Split LZ4 support to a separate patch and support zstd. These show that\n> > changes needed for a new compression method have been minimized, although not\n> > yet moved to a separate, abstracted compression/decompression function.\n> \n> Thanks! Awesome work!\n> \n> > These two patches are a prerequisite for this patch to progress:\n> > * Run 011_crash_recovery.pl with wal_level=minimal\n> > * Make sure published XIDs are persistent\n> > \n> > I don't know if anyone will consider this patch for v14 - if not, it should be\n> > set to v15 and revisit in a month. \n> \n> I want to note, that fixes for 011_crash_recovery.pl are not strictly necessary for this patch set.\n> The problem in tests arises if we turn on wal_compression, absolutely independently from wal compression method.\n> We turn on wal_compression in this test only for CI purposes.\n\nI rearranged the patches to reflect this.\nChange to zlib and zstd to level=1.\nAdd support for negative \"zstd fast\" levels.\nUse correct length accounting for \"hole\" in LZ4 and ZSTD.\nFixed Solution.pm for zstd on windows.\nSwitch to zstd by default (for CI).\nAdd docs.\n\nIt occurred to me that the generic \"compression API\" might be of more\nsignificance if we supported compression of all WAL (not just FPI). I imagine\nthat might use streaming compression.\n\n-- \nJustin",
"msg_date": "Sun, 14 Mar 2021 19:31:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "@cfbot: Resending without duplicate patches",
"msg_date": "Sun, 14 Mar 2021 20:12:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Mar 14, 2021 at 07:31:35PM -0500, Justin Pryzby wrote:\n> On Sat, Mar 13, 2021 at 08:48:33PM +0500, Andrey Borodin wrote:\n> > > On 13 Mar 2021, at 06:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > Updated patch with a minor fix to configure.ac to avoid warnings on OSX.\n> > > And 2ndary patches from another thread to allow passing recovery tests.\n> > > Renamed to WAL_COMPRESSION_*\n> > > Split LZ4 support to a separate patch and support zstd. These show that\n> > > changes needed for a new compression method have been minimized, although not\n> > > yet moved to a separate, abstracted compression/decompression function.\n> > \n> > Thanks! Awesome work!\n> > \n> > > These two patches are a prerequisite for this patch to progress:\n> > > * Run 011_crash_recovery.pl with wal_level=minimal\n> > > * Make sure published XIDs are persistent\n> > > \n> > > I don't know if anyone will consider this patch for v14 - if not, it should be\n> > > set to v15 and revisit in a month. \n> > \n> > I want to note, that fixes for 011_crash_recovery.pl are not strictly necessary for this patch set.\n> > The problem in tests arises if we turn on wal_compression, absolutely independently from wal compression method.\n> > We turn on wal_compression in this test only for CI purposes.\n> \n> I rearranged the patches to reflect this.\n> Change to zlib and zstd to level=1.\n> Add support for negative \"zstd fast\" levels.\n> Use correct length accounting for \"hole\" in LZ4 and ZSTD.\n> Fixed Solution.pm for zstd on windows.\n> Switch to zstd by default (for CI).\n> Add docs.\n\nChanges:\n- Allocate buffer sufficient to accommodate any supported compression method;\n- Use existing info flags argument rather than adding another byte for storing\n the compression method; this seems to be what was anticipated by commit\n 57aa5b2bb and what Michael objected to.\n\nI think the existing structures are ugly, so maybe this suggests using a GUC\nassign hook to support arbitrary compression level, and maybe other options.\n\n-- \nJustin",
"msg_date": "Mon, 15 Mar 2021 13:09:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "rebased to keep cfbot happy. This will run with default=zlib.\n\nOn Mon, Mar 15, 2021 at 01:09:18PM -0500, Justin Pryzby wrote:\n> On Sun, Mar 14, 2021 at 07:31:35PM -0500, Justin Pryzby wrote:\n> > On Sat, Mar 13, 2021 at 08:48:33PM +0500, Andrey Borodin wrote:\n> > > > On 13 Mar 2021, at 06:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > > Updated patch with a minor fix to configure.ac to avoid warnings on OSX.\n> > > > And 2ndary patches from another thread to allow passing recovery tests.\n> > > > Renamed to WAL_COMPRESSION_*\n> > > > Split LZ4 support to a separate patch and support zstd. These show that\n> > > > changes needed for a new compression method have been minimized, although not\n> > > > yet moved to a separate, abstracted compression/decompression function.\n> > > \n> > > Thanks! Awesome work!\n> > > \n> > > > These two patches are a prerequisite for this patch to progress:\n> > > > * Run 011_crash_recovery.pl with wal_level=minimal\n> > > > * Make sure published XIDs are persistent\n> > > > \n> > > > I don't know if anyone will consider this patch for v14 - if not, it should be\n> > > > set to v15 and revisit in a month. \n> > > \n> > > I want to note, that fixes for 011_crash_recovery.pl are not strictly necessary for this patch set.\n> > > The problem in tests arises if we turn on wal_compression, absolutely independently from wal compression method.\n> > > We turn on wal_compression in this test only for CI purposes.\n> > \n> > I rearranged the patches to reflect this.\n> > Change to zlib and zstd to level=1.\n> > Add support for negative \"zstd fast\" levels.\n> > Use correct length accounting for \"hole\" in LZ4 and ZSTD.\n> > Fixed Solution.pm for zstd on windows.\n> > Switch to zstd by default (for CI).\n> > Add docs.\n> \n> Changes:\n> - Allocate buffer sufficient to accommodate any supported compression method;\n> - Use existing info flags argument rather than adding another byte for storing\n> the compression method; this seems to be what was anticipated by commit\n> 57aa5b2bb and what Michael objected to.\n> \n> I think the existing structures are ugly, so maybe this suggests using a GUC\n> assign hook to support arbitrary compression level, and maybe other options.",
"msg_date": "Sun, 21 Mar 2021 14:30:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Mar 21, 2021 at 02:30:04PM -0500, Justin Pryzby wrote:\n> rebased to keep cfbot happy. This will run with default=zlib.\n\nI have been looking at bit at this patch set, and I think that we\nshould do something here for 15~.\n\nFirst, I think that we should make more tests and pick up one\ncompression method that could be used as an alternative to pglz, not\nthree among zlib, LZ4 or zstd. Looking at the past archives, we could\ndo more tests like this one: \nhttps://www.postgresql.org/message-id/CAB7nPqSc97o-UE5paxfMUKWcxE_JioyxO1M4A0pMnmYqAnec2g@mail.gmail.com\n\nThe I/O should not be the bottleneck here, so on top of disabling\nfsync we could put the whole data folder on a ramdisk and compare the\nwhole. The patch set does not apply, by the way, so it needs a\nrebase.\n\nBased on my past experiences, I'd see LZ4 as a good choice in terms of\nperformance, speed and compression ratio, and on top of that we now\nhave the possibility to build PG with --with-lz4 for TOAST\ncompression so ./configure is already tweaked for it. For this patch,\nthis is going to require a bit more in terms of library linking as the\nblock decompression is done in xlogreader.c, so that's one thing to\nworry about.\n\n #define BKPIMAGE_IS_COMPRESSED 0x02 /* page image is compressed */\n #define BKPIMAGE_APPLY 0x04 /* page image should be restored during\n * replay */\t\t\t\t \n+#define BKPIMAGE_COMPRESS_METHOD1 0x08 /* bits to encode\ncompression method */\n+#define BKPIMAGE_COMPRESS_METHOD2 0x10 /* 0=pglz; 1=zlib; */\n\nThe interface used in xlogrecord.h is confusing to me with\nBKPIMAGE_IS_COMPRESSED, followed by extra bits set for the compression\nmethod. Wouldn't it be cleaner to have a set of BKPIMAGE_COMPRESS_XXX\n(XXX={lz4,zlib,etc.})? 
There is no even need to steal one bit for\nsome kind of BKPIMAGE_COMPRESS_NONE as we know that the page is\ncompressed if we know it has a method assigned, so with pglz, the\ndefault, and one extra method we need only two bits here.\n\n+ {\n+ {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n+ gettext_noop(\"Set the method used to compress full page\nimages in the WAL.\"),\n+ NULL\n+ },\n+ &wal_compression_method,\n+ WAL_COMPRESSION_PGLZ, wal_compression_options,\n+ NULL, NULL, NULL\n+ },\nThe interface is not quite right to me. I think that we should just\nchange wal_compression to become an enum, with extra values for pglz\nand the new method. \"on\" would be a synonym for \"pglz\".\n\n+/* This is a mapping indexed by wal_compression */\n+// XXX: maybe this is better done as a GUC hook to assign the 1)\nmethod; and 2) level\n+struct walcompression walmethods[] = {\n+ {\"pglz\", WAL_COMPRESSION_PGLZ},\n+ {\"zlib\", WAL_COMPRESSION_ZLIB},\n+};\nDon't think you need a hook here, but zlib, or any other method which\nis not supported by the build, had better not be listed instead. This\nensures that the value cannot be assigned if the binaries don't\nsupport that.\n\n+ {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n+ gettext_noop(\"Set the method used to compress full page\nimages in the WAL.\"),\n+ NULL\n+ },\n+ &wal_compression_method,\n+ WAL_COMPRESSION_PGLZ, wal_compression_options,\n+ NULL, NULL, NULL\nAny reason to not make that user-settable? 
If you merge that with\nwal_compression, that's not an issue.\n\nThe patch set is a gathering of various things, and not only things\nassociated to the compression method used for FPIs.\n\n my $node = get_new_node('primary');\n-$node->init(allows_streaming => 1);\n+$node->init();\n $node->start;\nWhat is the point of that in patch 0002?\n\n> Subject: [PATCH 03/12] Make sure published XIDs are persistent\n\nPatch 0003 looks unrelated to this thread.\n\n> Subject: [PATCH 04/12] wal_compression_method: default to zlib..\n\nPatch 0004 could happen, however there are no reasons given why this\nis adapted. Changing the default is not going to happen for the time\nrelease where this feature is added, anyway.\n\n+ default:\n+ report_invalid_record(record, \"image at %X/%X is compressed with unsupported codec, block %d (%d/%s)\",\n+ (uint32) (record->ReadRecPtr >> 32),\n+ (uint32) record->ReadRecPtr,\n+ block_id,\n+ compression_method,\n+\nwal_compression_name(compression_method));\n+ return false;\nIn xlogreader.c, the error message is helpful this way. However, we\nwould not know which compression method failed if there is a\ndecompression failure for a method supported by the build restoring\nthis block. That would be good to add.\n\nI think that what we actually need for this thread are patches 0001,\n0005 and 0006 merged together to study first the performance we have\nwith each one of the compression methods proposed, and then let's just\npick one. Reading around, zstd and zlib compress more but take\nlonger. LZ4 is faster than the others, but can compress less.\nWith limited bandwidth, less data makes sense, and my guess is that\nmost users care most about the speed of recovery if we can afford\nspeed with an acceptable compression ratio.\n--\nMichael",
"msg_date": "Mon, 17 May 2021 16:44:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, May 17, 2021 at 04:44:11PM +0900, Michael Paquier wrote:\n> On Sun, Mar 21, 2021 at 02:30:04PM -0500, Justin Pryzby wrote:\n> \n> For this patch, this is going to require a bit more in terms of library\n> linking as the block decompression is done in xlogreader.c, so that's one\n> thing to worry about.\n\nI'm not sure what you mean here ?\n\n> + {\n> + {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n> + gettext_noop(\"Set the method used to compress full page images in the WAL.\"),\n> + NULL\n> + },\n> + &wal_compression_method,\n> + WAL_COMPRESSION_PGLZ, wal_compression_options,\n> + NULL, NULL, NULL\n> + },\n> The interface is not quite right to me. I think that we should just\n> change wal_compression to become an enum, with extra values for pglz\n> and the new method. \"on\" would be a synonym for \"pglz\".\n\nAndrey gave a reason in March:\n\n| I hope one day we will compress all WAL, not just FPIs. Advanced archive management tools already do so, why not compress it in walwriter?\n| When this will be implemented, we could have wal_compression = {off, fpi, all}.\n\n> +/* This is a mapping indexed by wal_compression */\n> +// XXX: maybe this is better done as a GUC hook to assign the 1)\n> method; and 2) level\n> +struct walcompression walmethods[] = {\n> + {\"pglz\", WAL_COMPRESSION_PGLZ},\n> + {\"zlib\", WAL_COMPRESSION_ZLIB},\n> +};\n> Don't think you need a hook here, but zlib, or any other method which\n> is not supported by the build, had better not be listed instead. 
This\n> ensures that the value cannot be assigned if the binaries don't\n> support that.\n\nI think you're confusing the walmethods struct (which is unconditional) with\nwal_compression_options, which is conditional.\n\n> The patch set is a gathering of various things, and not only things\n> associated to the compression method used for FPIs.\n> What is the point of that in patch 0002?\n> > Subject: [PATCH 03/12] Make sure published XIDs are persistent\n> Patch 0003 looks unrelated to this thread.\n\n..for the reason that I gave:\n\n| And 2ndary patches from another thread to allow passing recovery tests.\n|These two patches are a prerequisite for this patch to progress:\n| * Run 011_crash_recovery.pl with wal_level=minimal\n| * Make sure published XIDs are persistent\n\n> > Subject: [PATCH 04/12] wal_compression_method: default to zlib..\n> \n> Patch 0004 could happen, however there are no reasons given why this\n> is adapted. Changing the default is not going to happen for the time\n> release where this feature is added, anyway.\n\n From the commit message:\n| this is meant to exercise the CIs, and not meant to be merged\n\n> + default:\n> + report_invalid_record(record, \"image at %X/%X is compressed with unsupported codec, block %d (%d/%s)\",\n> + (uint32) (record->ReadRecPtr >> 32),\n> + (uint32) record->ReadRecPtr,\n> + block_id,\n> + compression_method,\n> + wal_compression_name(compression_method));\n> + return false;\n> In xlogreader.c, the error message is helpful this way. However, we\n> would not know which compression method failed if there is a\n> decompression failure for a method supported by the build restoring\n> this block. 
That would be good to add.\n\nI don't understand you here - that's what wal_compression_name is for ?\n2021-05-18 21:38:04.324 CDT [26984] FATAL: unknown compression method requested: 2(lz4)\n\n> I think that what we actually need for this thread are patches 0001,\n> 0005 and 0006 merged together to study first the performance we have\n> with each one of the compression methods proposed, and then let's just\n> pick one. Reading around, zstd and zlib compress more but take\n> longer. LZ4 is faster than the others, but can compress less.\n> With limited bandwidth, less data makes sense, and my guess is that\n> most users care most about the speed of recovery if we can afford\n> speed with an acceptable compression ratio.\n\nI don't see why we'd add a guc for configuration compression but not include\nthe 30 lines of code needed to support a 3rd method that we already used by the\ncore server.\n\n-- \nJustin",
"msg_date": "Tue, 18 May 2021 22:06:18 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, May 18, 2021 at 10:06:18PM -0500, Justin Pryzby wrote:\n> On Mon, May 17, 2021 at 04:44:11PM +0900, Michael Paquier wrote:\n>> On Sun, Mar 21, 2021 at 02:30:04PM -0500, Justin Pryzby wrote:\n>> \n>> For this patch, this is going to require a bit more in terms of library\n>> linking as the block decompression is done in xlogreader.c, so that's one\n>> thing to worry about.\n> \n> I'm not sure what you mean here ?\n\nI was wondering if anything else was needed in terms of linking here.\n\n>> Don't think you need a hook here, but zlib, or any other method which\n>> is not supported by the build, had better not be listed instead. This\n>> ensures that the value cannot be assigned if the binaries don't\n>> support that.\n> \n> I think you're confusing the walmethods struct (which is unconditional) with\n> wal_compression_options, which is conditional.\n\nIndeed I was. For the note, walmethods is not necessary either as a\nnew structure. You could just store a list of strings in xlogreader.c\nand make a note to keep the entries in a given order. That's simpler\nas well.\n\n>> The patch set is a gathering of various things, and not only things\n>> associated to the compression method used for FPIs.\n>> What is the point of that in patch 0002?\n>>> Subject: [PATCH 03/12] Make sure published XIDs are persistent\n>> Patch 0003 looks unrelated to this thread.\n> \n> ..for the reason that I gave:\n> \n> | And 2ndary patches from another thread to allow passing recovery tests.\n> |These two patches are a prerequisite for this patch to progress:\n> | * Run 011_crash_recovery.pl with wal_level=minimal\n> | * Make sure published XIDs are persistent\n\nI still don't understand why XID consistency has anything to do with\nthe compression of FPIs. 
There is nothing preventing the testing of\ncompression of FPIs, and please note this argument:\nhttps://www.postgresql.org/message-id/BEF3B1E0-0B31-4F05-8E0A-F681CB918626@yandex-team.ru\n\nFor example, I can just revert from my tree 0002 and 0003, and still\nperform tests of the various compression methods. I do agree that we\nare going to need to do something about this problem, but let's drop\nthis stuff from the set of patches of this thread and just discuss\nthem where they are needed.\n\n\n>> + {\n>> + {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n>> + gettext_noop(\"Set the method used to compress full page images in the WAL.\"),\n>> + NULL\n>> + },\n>> + &wal_compression_method,\n>> + WAL_COMPRESSION_PGLZ, wal_compression_options,\n>> + NULL, NULL, NULL\n>> + },\n>> The interface is not quite right to me. I think that we should just\n>> change wal_compression to become an enum, with extra values for pglz\n>> and the new method. \"on\" would be a synonym for \"pglz\".\n> \n> Andrey gave a reason in March:\n> \n> | I hope one day we will compress all WAL, not just FPIs. Advanced archive management tools already do so, why not compress it in walwriter?\n> | When this will be implemented, we could have wal_compression = {off, fpi, all}.\n>\n> [...]\n\n> I don't see why we'd add a guc for configuration compression but not include\n> the 30 lines of code needed to support a 3rd method that we already used by the\n> core server.\n\nBecause that makes things confusing. We have no idea if we'll ever\nreach a point or even if it makes sense to have compression applied to\nmultiple parts of WAL. So, for now, let's just use one single GUC and\nbe done with it. Your argument is not tied to what's proposed on this\nthread either, and could go the other way around. If we were to\ninvent more compression concepts in WAL records, we could as well just\ngo with a new GUC that lists the parts of the WAL where compression\nneeds to be applied. 
I'd say to keep it to a minimum for now, that's\nan interface less confusing than what's proposed here.\n\nAnd you have not replaced BKPIMAGE_IS_COMPRESSED by a PGLZ-equivalent,\nso your patch set is eating more bits for BKPIMAGE_* than it needs\nto.\n\nBy the way, it would be really useful for the user to print in\npg_waldump -b the type of compression used :)\n\nA last point, and I think that this should be part of a study of the\nchoice to made for an extra compression method: there is no discussion\nyet about the level of compression applied, which is something that\ncan be applied to zstd, lz4 or even zlib. Perhaps there is an\nargument for a different GUC controlling that, so more benchmarking\nis a first step needed for this thread to move on. Benchmarking can\nhappen with what's currently posted, of course.\n--\nMichael",
"msg_date": "Wed, 19 May 2021 18:31:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, May 19, 2021 at 06:31:15PM +0900, Michael Paquier wrote:\n> I still don't understand why XID consistency has anything to do with\n> the compression of FPIs. There is nothing preventing the testing of\n> compression of FPIs, and plese note this argument:\n> https://www.postgresql.org/message-id/BEF3B1E0-0B31-4F05-8E0A-F681CB918626@yandex-team.ru\n> \n> For example, I can just revert from my tree 0002 and 0003, and still\n> perform tests of the various compression methods. I do agree that we\n> are going to need to do something about this problem, but let's drop\n> this stuff from the set of patches of this thread and just discuss\n> them where they are needed.\n\nThey are needed here - that they're included is deliberate. Revert this and\nthen the tests fail. \"Make sure published XIDs are persistent\"\n\ntime make -C src/test/recovery check\n# Failed test 'new xid after restart is greater'\n\n> And you have not replaced BKPIMAGE_IS_COMPRESSED by a PGLZ-equivalent,\n> so your patch set is eating more bits for BKPIMAGE_* than it needs\n\nThe goal is to support 2+ \"methods\" (including \"none\"), which takes 4 bits, so\nmay as well support 3 methods.\n\n- uncompressed\n- pglz\n- lz4\n- zlib or zstd or ??\n\nThis version:\n0) repurposes the pre-existing GUC as an enum;\n1) saves a bit (until zstd is included);\n2) shows the compression in pg_waldump;\n\nTo support different compression levels, I think I'd change from an enum to\nstring and an assign hook, which sets a pair of ints.\n\n-- \nJustin",
"msg_date": "Mon, 24 May 2021 23:44:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, May 25, 2021 at 10:14 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n Some comment.\n\n+#define BKPIMAGE_COMPRESS_METHOD1 0x04 /* bits to encode compression method */\n+#define BKPIMAGE_COMPRESS_METHOD2 0x08 /* 0=none, 1=pglz, 2=zlib */\n\nInstead of using METHOD1, METHOD2, can we use the direct method name,\nthat will look cleaner?\n\n+ unsigned long len_l = COMPRESS_BUFSIZE;\n+ int ret;\n+ ret = compress2((Bytef*)dest, &len_l, (Bytef*)source, orig_len, 1);\n\ncompress2((Bytef*)dest -> compress2((Bytef *) dest\n\ndiff --git a/src/test/recovery/t/011_crash_recovery.pl\nb/src/test/recovery/t/011_crash_recovery.pl\nindex a26e99500b..2e7e3db639 100644\n--- a/src/test/recovery/t/011_crash_recovery.pl\n+++ b/src/test/recovery/t/011_crash_recovery.pl\n@@ -14,7 +14,7 @@ use Config;\n plan tests => 3;\n\n my $node = get_new_node('primary');\n-$node->init(allows_streaming => 1);\n+$node->init();\n $node->start;\n\nHow this change is relevant?\n\n+#ifdef USE_LZ4\n+ case WAL_COMPRESSION_LZ4:\n+ len = LZ4_compress_fast(source, dest, orig_len, COMPRESS_BUFSIZE, 1);\n+ if (len == 0)\n+ len = -1;\n+ break;\n+#endif\n\nIf we are passing acceleration as 1, then we can directly use\nLZ4_compress_default, it is the same as LZ4_compress_fast with\nacceleration=1.\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 12:05:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, May 24, 2021 at 11:44:45PM -0500, Justin Pryzby wrote:\n> The goal is to support 2+ \"methods\" (including \"none\"), which takes 4 bits, so\n> may as well support 3 methods.\n> \n> - uncompressed\n> - pglz\n> - lz4\n> - zlib or zstd or ??\n\nLet's make a proper study of all that and make a choice, the only\nthing I am rather sure of is that pglz is bad compared to all the\nothers. There is no point to argue as long as we don't know if any of\nthose candidates are suited for the job.\n\n> This version:\n> 0) repurposes the pre-existing GUC as an enum;\n> 1) saves a bit (until zstd is included);\n> 2) shows the compression in pg_waldump;\n> \n> To support different compression levels, I think I'd change from an enum to\n> string and an assign hook, which sets a pair of ints.\n\nHmm. I am not really sure what you mean here, but let's keep that\nin mind until we get more performance numbers.\n--\nMichael",
"msg_date": "Tue, 25 May 2021 16:26:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 25 мая 2021 г., в 12:26, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> On Mon, May 24, 2021 at 11:44:45PM -0500, Justin Pryzby wrote:\n>> The goal is to support 2+ \"methods\" (including \"none\"), which takes 4 bits, so\n>> may as well support 3 methods.\n>> \n>> - uncompressed\n>> - pglz\n>> - lz4\n>> - zlib or zstd or ??\n> \n> Let's make a proper study of all that and make a choice, the only\n> thing I am rather sure of is that pglz is bad compared to all the\n> others. There is no point to argue as long as we don't know if any of\n> those candidates are suited for the job.\n\nThere's a lot of open studies like [0,1].\nIn short, Lz4 is fastest codec. Zstd gives better compression at cost of more CPU usage[2]. Zlib is not tied to Facebook, however, it's slower and have smaller compression ratio than Zstd. Zstd is actively developed.\n\nThere is Google's Brotli codec. It is comparable to Zstd.\n\nWhat kind of deeper study do we want to do?\nWould it make sense to run our own benchmarks?\nOr, perhaps, review other codecs?\n\nIn my opinion, anything that is sent over network or written to block storage deserves Zstd-5 compression. But milage may vary.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n[0] https://indico.cern.ch/event/695984/contributions/2872933/attachments/1590457/2516802/ZSTD_and_ZLIB_Updates_-_January_20186.pdf\n[1] https://facebook.github.io/zstd/\n[2] Zstd gives significantly better compression at cost of little more CPU usage. But they both have points on Pareto frontier.\n\n",
"msg_date": "Mon, 31 May 2021 12:33:44 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, May 31, 2021 at 12:33:44PM +0500, Andrey Borodin wrote:\n> Would it make sense to run our own benchmarks?\n\nYes, I think that it could be a good idea to run some custom-made\nbenchmarks as that could mean different bottlenecks found when it\ncomes to PG.\n\nThere are a couple of factors that matter here:\n- Is the algo available across a maximum of platforms? ZLIB and LZ4\nare everywhere and popular, for one. And we already plug with them in\nthe builds. No idea about the others but I can see quickly that Zstd\nhas support across many systems, and has a compatible license.\n- Speed and CPU usage. We should worry about that for CPU-bounded\nenvironments.\n- Compression ratio, which is just monitoring the difference in WAL.\n- Effect of the level of compression perhaps?\n- Use a fixed amount of WAL generated, meaning a set of repeatable SQL\nqueries, with one backend, no benchmarks like pgbench.\n- Avoid any I/O bottleneck, so run tests on a tmpfs or ramfs.\n- Avoid any extra WAL interference, like checkpoints, no autovacuum\nrunning in parallel.\n\nIt is not easy to draw a straight line here, but one could easily say\nthat an algo that reduces a FPI by 90% costing two times more CPU\ncycles is worse than something doing only a 70%~75% compression for\ntwo times less CPU cycles if environments are easily constrained on\nCPU.\n\nAs mentioned upthread, I'd recomment to design tests like this one, or\njust reuse this one:\nhttps://www.postgresql.org/message-id/CAB7nPqSc97o-UE5paxfMUKWcxE_JioyxO1M4A0pMnmYqAnec2g@mail.gmail.com\n\nIn terms of CPU usage, we should also monitor the user and system\ntimes of the backend, and compare the various situations. See patch\n0003 posted here that we used for wal_compression:\nhttps://www.postgresql.org/message-idCAB7nPqRC20=mKgu6d2st-e11_QqqbreZg-=SF+_UYsmvwNu42g@mail.gmail.com\n\nThis just uses getrusage() to get more stats.\n--\nMichael",
"msg_date": "Tue, 1 Jun 2021 11:06:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 01, 2021 at 11:06:53AM +0900, Michael Paquier wrote:\n> - Speed and CPU usage. We should worry about that for CPU-bounded\n> environments.\n> - Compression ratio, which is just monitoring the difference in WAL.\n> - Effect of the level of compression perhaps?\n> - Use a fixed amount of WAL generated, meaning a set of repeatable SQL\n> queries, with one backend, no benchmarks like pgbench.\n> - Avoid any I/O bottleneck, so run tests on a tmpfs or ramfs.\n> - Avoid any extra WAL interference, like checkpoints, no autovacuum\n> running in parallel.\n\nI think it's more nuanced than just finding the algorithm with the least CPU\nuse. The GUC is PGC_USERSET, and it's possible that a data-loading process\nmight want to use zlib for better compress ratio, but an interactive OLTP\nprocess might want to use lz4 or no compression for better responsiveness.\n\nReducing WAL volume during loading can be important - at one site, their SAN\nwas too slow to keep up during their period of heaviest loading, the\ncheckpointer fell behind, WAL couldn't be recycled as normal, and the (local)\nWAL filesystem overflowed, and then the oversized WAL then needed to be\nreplayed, to the slow SAN. A large fraction of their WAL is FPI, and\ncompression now made this a non-issue. We'd happily incur 2x more CPU cost if\nWAL were 25% smaller.\n\nWe're not proposing to enable it by default, so the threshold doesn't have to\nbe \"no performance regression\" relative to no compression. The feature should\nprovide a faster alternative to PGLZ, and also a method with better compression\nratio to improve the case of heavy WAL writes, by reducing I/O from FPI.\n\nIn a CPU-bound environment, one would just disable WAL compression, or use LZ4\nif it's cheap enough. 
In the IO bound case, someone might enable zlib or zstd\ncompression.\n\nI found this old thread about btree performance with wal compression (+Peter,\n+Andres).\nhttps://www.postgresql.org/message-id/flat/540584F2-A554-40C1-8F59-87AF8D623BB7%40yandex-team.ru#94c0dcaa34e3170992749f6fdc8db35c\n\nAnd the differences are pretty dramatic, so I ran a single test on my PC:\n\nCREATE TABLE t AS SELECT generate_series(1,999999)a; VACUUM t;\nSET wal_compression= off;\n\\set QUIET \\\\ \\timing on \\\\ SET max_parallel_maintenance_workers=0; SELECT pg_stat_reset_shared('wal'); begin; CREATE INDEX ON t(a); rollback; SELECT * FROM pg_stat_wal;\nTime: 1639.375 ms (00:01.639)\nwal_bytes | 20357193\n\npglz writes ~half as much, but takes twice as long as uncompressed:\n|Time: 3362.912 ms (00:03.363)\n|wal_bytes | 11644224\n\nzlib writes ~4x less than uncompressed, and still much faster than pglz\n|Time: 2167.474 ms (00:02.167)\n|wal_bytes | 5611653\n\nlz4 is as fast as uncompressed, and writes a bit more than pglz:\n|Time: 1612.874 ms (00:01.613)\n|wal_bytes | 12397123\n\nzstd(6) is slower than lz4, but compresses better than anything but zlib.\n|Time: 1808.881 ms (00:01.809)\n|wal_bytes | 6395993\n\nIn this patch series, I added compression information to the errcontext from\nxlog_block_info(), and allow specifying compression levels like zlib-2. I'll\nrearrange that commit earlier if we decide that's desirable to include.",
"msg_date": "Sun, 13 Jun 2021 20:24:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "Thanks for benchmarks, Justin!\n\n\n> 14 июня 2021 г., в 06:24, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> \n> The GUC is PGC_USERSET\nOh, wow, that's neat. I did not realize that we can tune this for each individual client connection. Cool!\n\n> pglz writes ~half as much, but takes twice as long as uncompressed:\n> |Time: 3362.912 ms (00:03.363)\n> |wal_bytes | 11644224\n> \n> zlib writes ~4x less than ncompressed, and still much faster than pglz\n> |Time: 2167.474 ms (00:02.167)\n> |wal_bytes | 5611653\n> \n> lz4 is as fast as uncompressed, and writes a bit more than pglz:\n> |Time: 1612.874 ms (00:01.613)\n> |wal_bytes | 12397123\n> \n> zstd(6) is slower than lz4, but compresses better than anything but zlib.\n> |Time: 1808.881 ms (00:01.809)\n> |wal_bytes | 6395993\n\nI was wrong about zlib: it has its point on Pareto frontier. At least for this test.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 14 Jun 2021 14:47:08 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 08:24:12PM -0500, Justin Pryzby wrote:\n> I think it's more nuanced than just finding the algorithm with the least CPU\n> use. The GUC is PGC_USERSET, and it's possible that a data-loading process\n> might want to use zlib for better compress ratio, but an interactive OLTP\n> process might want to use lz4 or no compression for better responsiveness.\n\nIt seems to me that this should be a PGC_SUSET, at least? We've had\nour share of problems with assumptions behind data leaks depending on\ndata compressibility (see ssl_compression and the kind).\n\n> In this patch series, I added compression information to the errcontext from\n> xlog_block_info(), and allow specifying compression levels like zlib-2. I'll\n> rearrange that commit earlier if we decide that's desirable to include.\n\nThe compression level may be better if specified with a different\nGUC. That's less parsing to have within the GUC machinery.\n\nSo, how does the compression level influence those numbers? The level\nof compression used by LZ4 here is the fastest-CPU/least-compression,\nsame for zlib and zstd? Could we get some data with getrusage()? It\nseems to me that if we can get the same amount of compression and CPU\nusage just by tweaking the compression level, there is no need to\nsupport more than one extra compression algorithm, easing the life of\npackagers and users.\n--\nMichael",
"msg_date": "Tue, 15 Jun 2021 09:50:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "Hi,\n\nOn 2021-06-15 09:50:41 +0900, Michael Paquier wrote:\n> On Sun, Jun 13, 2021 at 08:24:12PM -0500, Justin Pryzby wrote:\n> > I think it's more nuanced than just finding the algorithm with the least CPU\n> > use. The GUC is PGC_USERSET, and it's possible that a data-loading process\n> > might want to use zlib for better compress ratio, but an interactive OLTP\n> > process might want to use lz4 or no compression for better responsiveness.\n> \n> It seems to me that this should be a PGC_SUSET, at least? We've had\n> our share of problems with assumptions behind data leaks depending on\n> data compressibility (see ssl_compression and the kind).\n\n-1.\n\nCurrently wal_compression has too much overhead for some workloads, but\nnot for others. Disallowing normal users to set it would break cases\nwhere it's set for users that can tolerate the perf impact (which I have\ndone at least). And the scenarios where it can leak information that\ncouldn't otherwise be leaked already don't seem all that realistic?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Jun 2021 18:07:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 09:50:41AM +0900, Michael Paquier wrote:\n> On Sun, Jun 13, 2021 at 08:24:12PM -0500, Justin Pryzby wrote:\n> > I think it's more nuanced than just finding the algorithm with the least CPU\n> > use. The GUC is PGC_USERSET, and it's possible that a data-loading process\n> > might want to use zlib for better compress ratio, but an interactive OLTP\n> > process might want to use lz4 or no compression for better responsiveness.\n> \n> It seems to me that this should be a PGC_SUSET, at least? We've had\n> our share of problems with assumptions behind data leaks depending on\n> data compressibility (see ssl_compression and the kind).\n\nIt's USERSET following your own suggestion (which is a good suggestion):\n\nOn Mon, May 17, 2021 at 04:44:11PM +0900, Michael Paquier wrote:\n> + {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n> + gettext_noop(\"Set the method used to compress full page images in the WAL.\"),\n> + NULL\n> + },\n> + &wal_compression_method,\n> + WAL_COMPRESSION_PGLZ, wal_compression_options,\n> + NULL, NULL, NULL\n> Any reason to not make that user-settable? If you merge that with\n> wal_compression, that's not an issue.\n\nI don't see how restricting it to superusers would mitigate the hazard at all:\nIf the local admin enables wal compression, then every user's data will be\ncompressed, and the degree of compression indicatates a bit about their data,\nno matter whether it's pglz or lz4.\n\nIt's probably true without compression, too - the fraction of FPW might reveal\ntheir usage patterns.\n\n> > In this patch series, I added compression information to the errcontext from\n> > xlog_block_info(), and allow specifying compression levels like zlib-2. I'll\n> > rearrange that commit earlier if we decide that's desirable to include.\n> \n> The compression level may be better if specified with a different\n> GUC. 
That's less parsing to have within the GUC machinery.\n\nI'm not sure about that - then there's an interdependency between GUCs.\nIf zlib range is 1..9, and zstd is -50..10, then you may have to set the\ncompression level first, to avoid an error. I believe there's a previous\ndiscussion about inter-dependent GUCs, and maybe a commit fixing a problem they\ncaused. But I cannot find it offhand.\n\n> seems to me that if we can get the same amount of compression and CPU\n> usage just by tweaking the compression level, there is no need to\n> support more than one extra compression algorithm, easing the life of\n> packagers and users.\n\nI don't think it eases it for packagers, since I anticipate the initial patch\nwould support {none/pglz/lz4/zlib}. I anticipate that zstd may not be in pg15.\n\nThe goal of the patch is to give options, and the overhead of adding both zlib\nand lz4 is low. zlib gives good compression at some CPU cost and may be\npreferable for (some) DWs, and lz4 is almost certainly better (than pglz) for\nOLTPs.\n\n-- \nJustin",
"msg_date": "Mon, 14 Jun 2021 20:42:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, Jun 14, 2021 at 08:42:08PM -0500, Justin Pryzby wrote:\n> On Tue, Jun 15, 2021 at 09:50:41AM +0900, Michael Paquier wrote:\n>> + {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n>> + gettext_noop(\"Set the method used to compress full page images in the WAL.\"),\n>> + NULL\n>> + },\n>> + &wal_compression_method,\n>> + WAL_COMPRESSION_PGLZ, wal_compression_options,\n>> + NULL, NULL, NULL\n>> Any reason to not make that user-settable? If you merge that with\n>> wal_compression, that's not an issue.\n\nHmm, yeah. This can be read as using PGC_USERSET. With the second\npart of my sentence, I think that I imply to use PGC_SUSET and be\nconsistent with wal_compression, but I don't recall my mood from one\nmonth ago :) Sorry for the confusion.\n\n> I don't see how restricting it to superusers would mitigate the hazard at all:\n> If the local admin enables wal compression, then every user's data will be\n> compressed, and the degree of compression indicatates a bit about their data,\n> no matter whether it's pglz or lz4.\n\nI would vote for having some consistency with wal_compression.\nPerhaps we could even revisit c2e5f4d, but I'd rather not do that.\n\n>> The compression level may be better if specified with a different\n>> GUC. That's less parsing to have within the GUC machinery.\n> \n> I'm not sure about that - then there's an interdependency between GUCs.\n> If zlib range is 1..9, and zstd is -50..10, then you may have to set the\n> compression level first, to avoid an error. I believe there's a previous\n> discussion about inter-dependent GUCs, and maybe a commit fixing a problem they\n> caused. But I cannot find it offhand.\n\nYou cannot do cross-checks for GUCs in their assign hooks or even rely\nin the order of those parameters, but you can do that in some backend\ncode paths. 
A recent discussion on the matter is for example what led\nto 79dfa8a for the GUCs controlling the min/max SSL protocols\nallowed.\n\n>> seems to me that if we can get the same amount of compression and CPU\n>> usage just by tweaking the compression level, there is no need to\n>> support more than one extra compression algorithm, easing the life of\n>> packagers and users.\n> \n> I don't think it eases it for packagers, since I anticipate the initial patch\n> would support {none/pglz/lz4/zlib}. I anticipate that zstd may not be in pg15.\n\nYes, without zstd we have all the infra to track the dependencies.\n\n> The goal of the patch is to give options, and the overhead of adding both zlib\n> and lz4 is low. zlib gives good compression at some CPU cost and may be\n> preferable for (some) DWs, and lz4 is almost certainly better (than pglz) for\n> OLTPs.\n\nAnything will be better than pglz. I am rather confident in that.\n\nWhat I am wondering is if we need to eat more bits than necessary for\nthe WAL record format, because we will end up supporting it until the\nend of times. We may have twenty years from now a better solution\nthan what has been integrated, and we may not care much about 1 extra\nbyte for a WAL record at this point, or perhaps we will. From what I\nhear here, there are many cases that we may care about depending on\nhow much CPU one is ready to pay in order to get more compression,\nknowing that there are no magic solutions for something that's cheap\nin CPU with a very good compression ratio or we could just go with\nthat. So it seems to me that there is still an argument for adding\nonly one new compression method with a good range of levels, able to\nsupport the range of cases we'd care about:\n- High compression ratio but high CPU cost.\n- Low compression ratio but low CPU cost.\n\nSo we could also take a decision here based on the range of\n(compression,CPU) an algorithm is able to cover.\n--\nMichael",
"msg_date": "Tue, 15 Jun 2021 11:39:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 11:39:24AM +0900, Michael Paquier wrote:\n> On Mon, Jun 14, 2021 at 08:42:08PM -0500, Justin Pryzby wrote:\n>>>> It's USERSET following your own suggestion (which is a good suggestion):\n> >> + {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n> >> + gettext_noop(\"Set the method used to compress full page images in the WAL.\"),\n> >> + NULL\n> >> + },\n> >> + &wal_compression_method,\n> >> + WAL_COMPRESSION_PGLZ, wal_compression_options,\n> >> + NULL, NULL, NULL\n> >> Any reason to not make that user-settable? If you merge that with\n> >> wal_compression, that's not an issue.\n> \n> Hmm, yeah. This can be read as using PGC_USERSET. With the second\n> part of my sentence, I think that I imply to use PGC_SUSET and be\n> consistent with wal_compression, but I don't recall my mood from one\n> month ago :) Sorry for the confusion.\n\nHold on - we're all confused (and I'm to blame). The patch is changing the\nexisting wal_compression GUC, rather than adding wal_compression_method.\nIt's still SUSET, but in earlier messages, I called it USERSET, twice.\n\n> You cannot do cross-checks for GUCs in their assign hooks or even rely\n> in the order of those parameters, but you can do that in some backend\n> code paths. A recent discussion on the matter is for example what led\n> to 79dfa8a for the GUCs controlling the min/max SSL protocols\n> allowed.\n\nThank you - this is what I was remembering.\n\n> > The goal of the patch is to give options, and the overhead of adding both zlib\n> > and lz4 is low. zlib gives good compression at some CPUcost and may be\n> > preferable for (some) DWs, and lz4 is almost certainly better (than pglz) for\n> > OLTPs.\n> \n> Anything will be better than pglz. I am rather confident in that.\n> \n> What I am wondering is if we need to eat more bits than necessary for\n> the WAL record format, because we will end up supporting it until the\n> end of times.\n\nWhy ? This is WAL, not table data. 
WAL depends on the major version, so\nI think wal_compression could provide a different set of compression methods at\nevery major release?\n\nActually, I was just thinking that default yes/no/on/off stuff maybe should be\ndefined to mean \"lz4\" rather than meaning pglz for \"backwards compatibility\".\n\n> hear here, there are many cases that we may care about depending on\n> how much CPU one is ready to pay in order to get more compression,\n> knowing that there are no magic solutions for something that's cheap\n> in CPU with a very good compression ratio or we could just go with\n> that. So it seems to me that there is still an argument for adding\n> only one new compression method with a good range of levels, able to\n> support the range of cases we'd care about:\n> - High compression ratio but high CPU cost.\n> - Low compression ratio but low CPU cost.\n\nI think zlib is too expensive and lz4 doesn't get enough compression,\nso neither supports both cases. In a sample of 1, zlib-1 is ~35% slower than\nlz4 and writes half as much.\n\nI think zstd could support both cases; however, I still see it as this patch's\njob to provide options, rather to definitively conclude which compression\nalgorithm is going to work best for everyone's use data and application.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 14 Jun 2021 22:42:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On 15/06/2021 06:42, Justin Pryzby wrote:\n> On Tue, Jun 15, 2021 at 11:39:24AM +0900, Michael Paquier wrote:\n>> On Mon, Jun 14, 2021 at 08:42:08PM -0500, Justin Pryzby wrote:\n>>>>> It's USERSET following your own suggestion (which is a good suggestion):\n>>>> + {\"wal_compression_method\", PGC_SIGHUP, WAL_SETTINGS,\n>>>> + gettext_noop(\"Set the method used to compress full page images in the WAL.\"),\n>>>> + NULL\n>>>> + },\n>>>> + &wal_compression_method,\n>>>> + WAL_COMPRESSION_PGLZ, wal_compression_options,\n>>>> + NULL, NULL, NULL\n>>>> Any reason to not make that user-settable? If you merge that with\n>>>> wal_compression, that's not an issue.\n>>\n>> Hmm, yeah. This can be read as using PGC_USERSET. With the second\n>> part of my sentence, I think that I imply to use PGC_SUSET and be\n>> consistent with wal_compression, but I don't recall my mood from one\n>> month ago :) Sorry for the confusion.\n> \n> Hold on - we're all confused (and I'm to blame). The patch is changing the\n> existing wal_compression GUC, rather than adding wal_compression_method.\n> It's still SUSET, but in earlier messages, I called it USERSET, twice.\n\nSee prior discussion on the security aspect: \nhttps://www.postgresql.org/message-id/55269915.1000309%40iki.fi. Adding \ndifferent compression algorithms doesn't change anything from a security \npoint of view AFAICS.\n\n>>> The goal of the patch is to give options, and the overhead of adding both zlib\n>>> and lz4 is low. zlib gives good compression at some CPUcost and may be\n>>> preferable for (some) DWs, and lz4 is almost certainly better (than pglz) for\n>>> OLTPs.\n>>\n>> Anything will be better than pglz. I am rather confident in that.\n>> What I am wondering is if we need to eat more bits than necessary for\n>> the WAL record format, because we will end up supporting it until the\n>> end of times.\n> \n> Why ? This is WAL, not table data. 
WAL depends on the major version, so\n> I think wal_compression could provide a different set of compression methods at\n> every major release?\n> \n> Actually, I was just thinking that default yes/no/on/off stuff maybe should be\n> defined to mean \"lz4\" rather than meaning pglz for \"backwards compatibility\".\n\n+1\n\n- Heikki\n\n\n",
"msg_date": "Tue, 15 Jun 2021 08:08:54 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 08:08:54AM +0300, Heikki Linnakangas wrote:\n> On 15/06/2021 06:42, Justin Pryzby wrote:\n>> Why ? This is WAL, not table data. WAL depends on the major version, so\n>> I think wal_compression could provide a different set of compression methods at\n>> every major release?\n\nThat may finish by being annoying to the user, but perhaps that you\nare right that we could have more flexibility here. That does not\nchange the fact that we'd better choose something wisely, able to\nstick around for a couple of years at least, rather than revisiting\nthis choice every year.\n\n>> Actually, I was just thinking that default yes/no/on/off stuff maybe should be\n>> defined to mean \"lz4\" rather than meaning pglz for \"backwards compatibility\".\n> \n> +1\n\nI am not sure that we have any reasons to be that aggressive about\nthis one either, and this would mean that wal_compression=on implies a\ndifferent method depending on the build options. I would just stick\nwith the past, careful practice that we have to use a default\nbackward-compatible value as default, while being able to use a new\noption.\n--\nMichael",
"msg_date": "Tue, 15 Jun 2021 14:37:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On 15.06.21 07:37, Michael Paquier wrote:\n>>> Actually, I was just thinking that default yes/no/on/off stuff maybe should be\n>>> defined to mean \"lz4\" rather than meaning pglz for \"backwards compatibility\".\n>> +1\n> I am not sure that we have any reasons to be that aggressive about\n> this one either, and this would mean that wal_compression=on implies a\n> different method depending on the build options. I would just stick\n> with the past, careful practice that we have to use a default\n> backward-compatible value as default, while being able to use a new\n> option.\n\nIf we think this new thing is strictly better than the old thing, then \nwhy not make it the default. What would be the gain from sticking to \nthe old default?\n\nThe point that the default would depend on build options is a valid one. \n I'm not sure whether it's important enough by itself.\n\n\n",
"msg_date": "Tue, 15 Jun 2021 07:53:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 07:53:26AM +0200, Peter Eisentraut wrote:\n> On 15.06.21 07:37, Michael Paquier wrote:\n> > > > Actually, I was just thinking that default yes/no/on/off stuff maybe should be\n> > > > defined to mean \"lz4\" rather than meaning pglz for \"backwards compatibility\".\n> > > +1\n> > I am not sure that we have any reasons to be that aggressive about\n> > this one either, and this would mean that wal_compression=on implies a\n> > different method depending on the build options. I would just stick\n> > with the past, careful practice that we have to use a default\n> > backward-compatible value as default, while being able to use a new\n> > option.\n\nYou're right, I hadn't though this through all the way.\nThere's precedent if the default is non-static (wal_sync_method).\n\nBut I think yes/on/true/1 should be a compatibility alias for a static thing,\nand then the only option is pglz.\n\nOf course, changing the default to LZ4 is still a possibility.\n\n> If we think this new thing is strictly better than the old thing, then why\n> not make it the default. What would be the gain from sticking to the old\n> default?\n> \n> The point that the default would depend on build options is a valid one.\n> I'm not sure whether it's important enough by itself.\n\n\n",
"msg_date": "Tue, 15 Jun 2021 11:14:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 11:14:56AM -0500, Justin Pryzby wrote:\n> You're right, I hadn't though this through all the way.\n> There's precedent if the default is non-static (wal_sync_method).\n> \n> But I think yes/on/true/1 should be a compatibility alias for a static thing,\n> and then the only option is pglz.\n> \n> Of course, changing the default to LZ4 is still a possibility.\n\nWe have not reached yet a conclusion with the way we are going to\nparameterize all that, so let's adapt depending on the feedback. For\nnow, I am really interested in this patch set, so I am going to run\nsome tests of my own and test more around the compression levels we\nhave at our disposals with the proposed algos.\n\nFrom I'd like us to finish with here is one new algorithm method, able\nto cover a large range of cases as mentioned upthread, from\nlow-CPU/low-compression to high-CPU/high-compression. It does not\nseem like a good idea to be stuck with an algo that only specializes\nin one or the other, for example.\n--\nMichael",
"msg_date": "Wed, 16 Jun 2021 09:39:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 09:39:57AM +0900, Michael Paquier wrote:\n> From I'd like us to finish with here is one new algorithm method, able\n> to cover a large range of cases as mentioned upthread, from\n> low-CPU/low-compression to high-CPU/high-compression. It does not\n> seem like a good idea to be stuck with an algo that only specializes\n> in one or the other, for example.\n\nSo, I have been playing with that. And the first thing I have done\nbefore running any benchmark was checking the logic of the patch, that\nI have finished to heavily clean up. This is still WIP (see the\nvarious XXX), and it still includes all the compression methods we are\ndiscussing here, but it allows to control the level of the\ncompression and it is in a much better shape. So that will help.\n\nAttached are two patches, the WIP version I have simplified (there\nwere many things I found confusing, from the set of header\ndependencies added across the code to unnecessary code, the set of\npatches in the series as mentioned upthread, etc.) that I have used\nfor the benchmarks. The second patch is a tweak to grab getrusage()\nstats for the lifetime of a backend.\n\nThe benchmark I have used is rather simple, as follows, with a\nvalue of shared_buffers that allows to fit all the pages of the\nrelation in. I then just mounted the instance on a tmpfs while\nadapting wal_compression* for each test. 
This gives a fixed amount of\nFPWs generated, large enough to reduce any noise and to still allow to\nany difference:\n#!/bin/bash\npsql <<EOF\n-- Change your conf here\nSET wal_compression = zstd;\nSET wal_compression_level = 20;\nSELECT pg_backend_pid();\nDROP TABLE IF EXISTS aa, results;\nCREATE TABLE aa (a int);\nCREATE TABLE results (phase text, position pg_lsn);\nCREATE EXTENSION IF NOT EXISTS pg_prewarm;\nALTER TABLE aa SET (FILLFACTOR = 50);\nINSERT INTO results VALUES ('pre-insert', pg_current_wal_lsn());\nINSERT INTO aa VALUES (generate_series(1,7000000)); -- 484MB\nSELECT pg_size_pretty(pg_relation_size('aa'::regclass));\nSELECT pg_prewarm('aa'::regclass);\nCHECKPOINT;\nINSERT INTO results VALUES ('pre-update', pg_current_wal_lsn());\nUPDATE aa SET a = 7000000 + a;\nCHECKPOINT;\nINSERT INTO results VALUES ('post-update', pg_current_wal_lsn());\nSELECT * FROM results;\nEOF\n\nThe set of results, with various compression levels used gives me the\nfollowing (see also compression_results.sql attached):\n wal_compression | user_diff | sys_diff | rel_size | fpi_size\n------------------------------+------------+----------+----------+----------\n lz4 level=1 | 24.219464 | 0.427996 | 429 MB | 574 MB\n lz4 level=65535 (speed mode) | 24.154747 | 0.524067 | 429 MB | 727 MB\n off | 24.069323 | 0.635622 | 429 MB | 727 MB\n pglz | 36.123642 | 0.451949 | 429 MB | 566 MB\n zlib level=1 (default) | 27.454397 | 2.25989 | 429 MB | 527 MB\n zlib level=9 | 31.962234 | 2.160444 | 429 MB | 527 MB\n zstd level=0 | 24.766077 | 0.67174 | 429 MB | 523 MB\n zstd level=20 | 114.429589 | 0.495938 | 429 MB | 520 MB\n zstd level=3 (default) | 25.218323 | 0.475974 | 429 MB | 523 MB\n(9 rows)\n\nThere are a couple of things that stand out here:\n- zlib has a much higher user CPU time than zstd and lz4, so we could\njust let this one go.\n- Everything is better than pglz, that does not sound as a surprise.\n- The level does not really influence the compression reached\n-- lz4 aims at being 
fast, so its default is actually the best\ncompression it can do. Using a much high acceleration level reduces\nthe effects of compression to zero.\n-- zstd has a high CPU consumption at high level (level > 20 is\nclassified as ultra, I have not tested that), without helping much\nwith the amount of data compressed.\n\nIt seems to me that this would leave LZ4 or zstd as obvious choices,\nand that we don't really need to care about the compression level, so\nlet's just stick with the defaults without any extra GUCs. Among the\nremaining two I would be tempted to choose LZ4. That's consistent\nwith what toast can use now. And, even if it is a bit worse than pglz\nin terms of compression in this case, it shows a CPU usage close to\nthe \"off\" case, which is nice (sys_diff for lz4 with level=1 is a \nbit suspicious by the way). zstd has merits as well at default\nlevel.\n\nAt the end I am not surprised by this result: LZ4 is designed to be\nfaster, while zstd compresses more and eats more CPU. Modern\ncompression algos are nice.\n--\nMichael",
"msg_date": "Wed, 16 Jun 2021 16:18:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 16 июня 2021 г., в 12:18, Michael Paquier <michael@paquier.xyz> написал(а):\n> Among the\n> remaining two I would be tempted to choose LZ4. That's consistent\n> with what toast can use now.\n\nI agree that allowing just lz4 - is already a huge step ahead.\nBut I'd suggest supporting zstd as well. Currently we only compress 8Kb chunks and zstd had no runaway to fully unwrap it's potential.\nIn WAL-G we observed ~3x improvement in network utilisation when switched from lz4 to zstd in WAL archive compression.\n\nBTW we could get rid of whole hole-in-a-page thing if we would set lz4 as default. This could simplify FPI code.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 16 Jun 2021 13:17:26 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On 16/06/2021 11:17, Andrey Borodin wrote:\n>> 16 июня 2021 г., в 12:18, Michael Paquier <michael@paquier.xyz> написал(а):\n>> Among the\n>> remaining two I would be tempted to choose LZ4. That's consistent\n>> with what toast can use now.\n> \n> I agree that allowing just lz4 - is already a huge step ahead.\n> But I'd suggest supporting zstd as well. Currently we only compress 8Kb chunks and zstd had no runaway to fully unwrap it's potential.\n> In WAL-G we observed ~3x improvement in network utilisation when switched from lz4 to zstd in WAL archive compression.\n\nHmm, do we currently compress each block in a WAL record separately, for \nrecords that contain multiple full-page images? That could make a big \ndifference e.g. for GiST index build that WAL-logs 32 pages in each \nrecord. If it helps the compression, we should probably start \nWAL-logging b-tree index build in larger batches, too.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 16 Jun 2021 11:49:51 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 16 июня 2021 г., в 13:49, Heikki Linnakangas <hlinnaka@iki.fi> написал(а):\n> \n> Hmm, do we currently compress each block in a WAL record separately, for records that contain multiple full-page images? That could make a big difference e.g. for GiST index build that WAL-logs 32 pages in each record. If it helps the compression, we should probably start WAL-logging b-tree index build in larger batches, too.\n\nHere's PoC for this [0]. But benchmark results are HW-dependent.\n\nBest regards, Andrey Borodin.\n\nhttps://www.postgresql.org/message-id/flat/41822E78-48EE-41AE-A89B-3CB76FF53980%40yandex-team.ru\n\n",
"msg_date": "Wed, 16 Jun 2021 13:52:50 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 11:49:51AM +0300, Heikki Linnakangas wrote:\n> Hmm, do we currently compress each block in a WAL record separately, for\n> records that contain multiple full-page images? That could make a big\n> difference e.g. for GiST index build that WAL-logs 32 pages in each record.\n> If it helps the compression, we should probably start WAL-logging b-tree\n> index build in larger batches, too.\n\nEach block is compressed alone, see XLogCompressBackupBlock() in\nXLogRecordAssemble() where we loop through each block. Compressing a\ngroup of blocks would not be difficult (the refactoring may be\ntrickier than it looks) but I am wondering how we should treat the\ncase where we finish by not compressing a group of blocks as there is\na safety fallback to not enforce a failure if a block cannot be\ncompressed. Should we move back to the compression of individual\nblocks or just log all those pages uncompressed without their holes?\nI really don't expect a group of blocks to not be compressed, just\nbeing a bit paranoid here about the fallback we'd better have.\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 10:12:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 01:17:26PM +0500, Andrey Borodin wrote:\n> I agree that allowing just lz4 - is already a huge step ahead.\n\nYeah, I am tempted to just add LZ4 as a first step as the patch\nfootprint would be minimal, and we could come back to zstd once we\nhave more feedback from the field, if that's necessary. As said\nupthread, we have more flexibility with WAL than for the relation\ndata.\n\n> But I'd suggest supporting zstd as well. Currently we only compress\n> 8Kb chunks and zstd had no runaway to fully unwrap it's potential.\n> In WAL-G we observed ~3x improvement in network utilisation when\n> switched from lz4 to zstd in WAL archive compression.\n\nYou mean full segments here, right? This has no need to be in core,\nexcept if we want to add more compression options to pg_receivewal and\nits friends? That would be a nice addition, saying that.\n\n> BTW we could get rid of whole hole-in-a-page thing if we would set\n> lz4 as default. This could simplify FPI code.\n\nWhy would we do that? We still need to support pglz as fallback if a\nplatform does not have LZ4, no?\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 10:19:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 10:19:47AM +0900, Michael Paquier wrote:\n> Yeah, I am tempted to just add LZ4 as a first step as the patch\n> footprint would be minimal, and we could come back to zstd once we\n> have more feedback from the field, if that's necessary. As said\n> upthread, we have more flexibility with WAL than for the relation\n> data.\n\nI have worked more on that today and finished with two patches:\n- 0001 is the mininal patch to add support for LZ4. This is in a\nrather committable shape. I noticed that we checked for an incorrect\nerror code in the compression and decompression paths as LZ4 APIs can\nreturn a negative result. There were also some extra bugs I spotted.\nIts size is satisfying for what it does, and there is MSVC support\nout-of-the-box:\n 12 files changed, 176 insertions(+), 48 deletions(-)\n- 0002 is the extra code need to add ZSTD and do the same. This still\nrequires support for MSVC and I have not checked the internals of ZSTD\nto see if we do the compress/decompress calls the right way.\n\nWhile on it, I am going to switch my buildfarm animal to use LZ4 for\ntoast.. Just saying.\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 15:44:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 17 июня 2021 г., в 06:19, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> On Wed, Jun 16, 2021 at 01:17:26PM +0500, Andrey Borodin wrote:\n>> I agree that allowing just lz4 - is already a huge step ahead.\n> \n> Yeah, I am tempted to just add LZ4 as a first step as the patch\n> footprint would be minimal, and we could come back to zstd once we\n> have more feedback from the field, if that's necessary. As said\n> upthread, we have more flexibility with WAL than for the relation\n> data.\n> \n>> But I'd suggest supporting zstd as well. Currently we only compress\n>> 8Kb chunks and zstd had no runaway to fully unwrap it's potential.\n>> In WAL-G we observed ~3x improvement in network utilisation when\n>> switched from lz4 to zstd in WAL archive compression.\n> \n> You mean full segments here, right? This has no need to be in core,\n> except if we want to add more compression options to pg_receivewal and\n> its friends? That would be a nice addition, saying that.\nKonstantin, Daniil and Justin are working on compressing libpq [0]. That would make walsender compress WAL automatically.\nAnd we (at least I and Dan) are inclined to work on compressing on-disk WAL as soon as libpq compression will be committed.\n\nZstd is much better at compressing long data sequences than lz4. I'm sure we will need such codec in core one day.\n\n\n>> BTW we could get rid of whole hole-in-a-page thing if we would set\n>> lz4 as default. This could simplify FPI code.\n> \n> Why would we do that? We still need to support pglz as fallback if a\n> platform does not have LZ4, no?\nBecause compressing sequence of zeroes is cheap, even for pglz. But we still need to support 'no compression at all', this mode benefits from hole-in-a-page a lot. Until we send and store WAL uncompressed, of cause.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0]https://www.postgresql.org/message-id/flat/aad16e41-b3f9-e89d-fa57-fb4c694bec25%40postgrespro.ru\n\n\n\n\n",
"msg_date": "Thu, 17 Jun 2021 11:45:37 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 11:45:37AM +0500, Andrey Borodin wrote:\n> Konstantin, Daniil and Justin are working on compressing libpq\n> [0]. That would make walsender compress WAL automatically.\n> And we (at least I and Dan) are inclined to work on compressing\n> on-disk WAL as soon as libpq compression will be committed.\n\nWhat's the relationship between libpq and WAL? Just the addition of\nzstd in the existing dependency chain?\n\n> Zstd is much better at compressing long data sequences than lz4.\n\nThat's my impression.\n\n> I'm sure we will need such codec in core one day.\n>\n> Because compressing sequence of zeroes is cheap, even for pglz. But\n> we still need to support 'no compression at all', this mode benefits\n> from hole-in-a-page a lot. Until we send and store WAL uncompressed,\n> of cause.\n\nI am not sure, but surely this will come up in future discussions as a\nseparate problem.\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 15:57:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 03:44:26PM +0900, Michael Paquier wrote:\n> I have worked more on that today and finished with two patches:\n> - 0001 is the mininal patch to add support for LZ4. This is in a\n> rather committable shape. I noticed that we checked for an incorrect\n> error code in the compression and decompression paths as LZ4 APIs can\n> return a negative result. There were also some extra bugs I spotted.\n> Its size is satisfying for what it does, and there is MSVC support\n> out-of-the-box:\n> 12 files changed, 176 insertions(+), 48 deletions(-)\n> - 0002 is the extra code need to add ZSTD and do the same. This still\n> requires support for MSVC and I have not checked the internals of ZSTD\n> to see if we do the compress/decompress calls the right way.\n> \n> While on it, I am going to switch my buildfarm animal to use LZ4 for\n> toast.. Just saying.\n\nAnd I forgot to attach these. (Thanks Andrey!)\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 16:01:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On 17/06/2021 04:12, Michael Paquier wrote:\n> On Wed, Jun 16, 2021 at 11:49:51AM +0300, Heikki Linnakangas wrote:\n>> Hmm, do we currently compress each block in a WAL record separately, for\n>> records that contain multiple full-page images? That could make a big\n>> difference e.g. for GiST index build that WAL-logs 32 pages in each record.\n>> If it helps the compression, we should probably start WAL-logging b-tree\n>> index build in larger batches, too.\n> \n> Each block is compressed alone, see XLogCompressBackupBlock() in\n> XLogRecordAssemble() where we loop through each block. Compressing a\n> group of blocks would not be difficult (the refactoring may be\n> trickier than it looks) but I am wondering how we should treat the\n> case where we finish by not compressing a group of blocks as there is\n> a safety fallback to not enforce a failure if a block cannot be\n> compressed. Should we move back to the compression of individual\n> blocks or just log all those pages uncompressed without their holes?\n\nJust log all the pages uncompressed in that case. If you didn't save any \nbytes by compressing the pages together, surely compressing them one by \none would be even worse.\n\n> I really don't expect a group of blocks to not be compressed, just\n> being a bit paranoid here about the fallback we'd better have.\n\nYeah, there will inevitably be some common bytes in the page and tuple \nheaders, if you compress multiple pages together. But I don't think the \nfallback is that important anyway. Even in the worst case, the \ncompressed image of something uncompressible should not be much larger \nthan the original.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 17 Jun 2021 10:52:10 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 17 июня 2021 г., в 11:57, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> On Thu, Jun 17, 2021 at 11:45:37AM +0500, Andrey Borodin wrote:\n>> Konstantin, Daniil and Justin are working on compressing libpq\n>> [0]. That would make walsender compress WAL automatically.\n>> And we (at least I and Dan) are inclined to work on compressing\n>> on-disk WAL as soon as libpq compression will be committed.\n> \n> What's the relationship between libpq and WAL?\nwalsender transmit WAL over regular protocol. Compressing libpq leads to huge decrease of cross-datacenter traffic of HA clusters.\nIn fact that's the reason why Daniil is working on libpq compression [0]. But that's matter of other thread.\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/161609580905.28624.5304095609680400810.pgcf%40coridan.postgresql.org#be6bc3ba77ff8a293b1816f4841c59ef\n\n",
"msg_date": "Thu, 17 Jun 2021 13:18:52 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 17 июня 2021 г., в 11:44, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> I have worked more on that today and finished with two patches\n\nI have some small questions.\n\n1.\n+\t\t\treport_invalid_record(record, \"image at %X/%X compressed with %s not supported, block %d\",\n+\t\t\t\t\t\t\t\t (uint32) (record->ReadRecPtr >> 32),\n+\t\t\t\t\t\t\t\t (uint32) record->ReadRecPtr,\n+\t\t\t\t\t\t\t\t \"lz4\",\n+\t\t\t\t\t\t\t\t block_id);\nCan we point out to user that the problem is in the build? Also, maybe %s can be inlined to lz4 in this case.\n\n2.\n> const char *method = \"???\";\nMaybe we can use something like \"unknown\" for unknown compression methods? Or is it too long string for waldump output?\n\n3. Can we exclude lz4 from config if it's not supported by the build?\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Sun, 20 Jun 2021 23:15:08 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Jun 20, 2021 at 11:15:08PM +0500, Andrey Borodin wrote:\n> I have some small questions.\n> \n> 1.\n> +\t\t\treport_invalid_record(record, \"image at %X/%X compressed with %s not supported, block %d\",\n> +\t\t\t\t\t\t\t\t (uint32) (record->ReadRecPtr >> 32),\n> +\t\t\t\t\t\t\t\t (uint32) record->ReadRecPtr,\n> +\t\t\t\t\t\t\t\t \"lz4\",\n> +\t\t\t\t\t\t\t\t block_id);\n> Can we point out to user that the problem is in the build?\n\nWhat about the following error then? Say:\n\"image at %X/%X compressed with LZ4 not supported by build, block\n%d\".\n\n> Also, maybe %s can be inlined to lz4 in this case.\n\nI think that's a remnant of the zstd part of the patch set, where I\nwanted to have only one translatable message. Sure, I can align lz4\nwith the message.\n\n> 2.\n> > const char *method = \"???\";\n> Maybe we can use something like \"unknown\" for unknown compression\n> methods? Or is it too long string for waldump output?\n\nA couple of extra bytes for pg_waldump will not matter much. Using\n\"unknown\" is fine by me.\n\n> 3. Can we exclude lz4 from config if it's not supported by the build?\n\nEnforcing the absence of this value at GUC level is enough IMO:\n+static const struct config_enum_entry wal_compression_options[] = {\n+ {\"pglz\", WAL_COMPRESSION_PGLZ, false},\n+#ifdef USE_LZ4\n+ {\"lz4\", WAL_COMPRESSION_LZ4, false},\n+#endif\n[...]\n+typedef enum WalCompression\n+{\n+ WAL_COMPRESSION_NONE = 0,\n+ WAL_COMPRESSION_PGLZ,\n+ WAL_COMPRESSION_LZ4\n+} WalCompression;\n\nIt is not possible to set the GUC, still it is listed in the enum that\nallows us to track it. That's the same thing as\ndefault_toast_compression with its ToastCompressionId and its\ndefault_toast_compression_options.\n--\nMichael",
"msg_date": "Tue, 22 Jun 2021 09:11:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 22, 2021 at 09:11:26AM +0900, Michael Paquier wrote:\n> On Sun, Jun 20, 2021 at 11:15:08PM +0500, Andrey Borodin wrote:\n> > I have some small questions.\n> > \n> > 1.\n> > +\t\t\treport_invalid_record(record, \"image at %X/%X compressed with %s not supported, block %d\",\n> > +\t\t\t\t\t\t\t\t (uint32) (record->ReadRecPtr >> 32),\n> > +\t\t\t\t\t\t\t\t (uint32) record->ReadRecPtr,\n> > +\t\t\t\t\t\t\t\t \"lz4\",\n> > +\t\t\t\t\t\t\t\t block_id);\n> > Can we point out to user that the problem is in the build?\n> \n> What about the following error then? Say:\n> \"image at %X/%X compressed with LZ4 not supported by build, block\n> %d\".\n\nThe two similar, existing messages are:\n\n+#define NO_LZ4_SUPPORT() \\\n+ ereport(ERROR, \\\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), \\\n+ errmsg(\"unsupported LZ4 compression method\"), \\\n+ errdetail(\"This functionality requires the server to be built with lz4 support.\"), \\\n+ errhint(\"You need to rebuild PostgreSQL using --with-lz4.\")))\n\nsrc/bin/pg_dump/pg_backup_archiver.c: fatal(\"cannot restore from compressed archive (compression not supported in this installation)\");\nsrc/bin/pg_dump/pg_backup_archiver.c: pg_log_warning(\"archive is compressed, but this installation does not support compression -- no data will be available\");\nsrc/bin/pg_dump/pg_dump.c: pg_log_warning(\"requested compression not available in this installation -- archive will be uncompressed\");\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 21 Jun 2021 19:19:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, Jun 21, 2021 at 07:19:27PM -0500, Justin Pryzby wrote:\n> The two similar, existing messages are:\n> \n> +#define NO_LZ4_SUPPORT() \\\n> + ereport(ERROR, \\\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), \\\n> + errmsg(\"unsupported LZ4 compression method\"), \\\n> + errdetail(\"This functionality requires the server to be built with lz4 support.\"), \\\n> + errhint(\"You need to rebuild PostgreSQL using --with-lz4.\")))\n> \n> src/bin/pg_dump/pg_backup_archiver.c: fatal(\"cannot restore from compressed archive (compression not supported in this installation)\");\n> src/bin/pg_dump/pg_backup_archiver.c: pg_log_warning(\"archive is compressed, but this installation does not support compression -- no data will be available\");\n> src/bin/pg_dump/pg_dump.c: pg_log_warning(\"requested compression not available in this installation -- archive will be uncompressed\");\n\nThe difference between the first message and the rest is that the\nbackend has much more room in terms of error verbosity while\nxlogreader.c needs to worry also about the frontend. In this case, we\nneed to worry about the block involved and its LSN. Perhaps you have\na suggestion?\n--\nMichael",
"msg_date": "Tue, 22 Jun 2021 10:13:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:05:19PM +0530, Dilip Kumar wrote:\n> +++ b/src/test/recovery/t/011_crash_recovery.pl\n> @@ -14,7 +14,7 @@ use Config;\n> plan tests => 3;\n> \n> my $node = get_new_node('primary');\n> -$node->init(allows_streaming => 1);\n> +$node->init();\n> $node->start;\n> \n> How this change is relevant?\n\nIt's necessary for the tests to pass - see the prior discussions.\nRevert them and the tests fail.\n\ntime make -C src/test/recovery check\n# Failed test 'new xid after restart is greater'\n\n@Michael: I assume that if you merge this patch, you'd set your animals to use\nwal_compression=lz4, and then they would fail the recovery tests. So the\npatches that you say are unrelated still seem to me to be a prerequisite.\n\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com> \nSubject: [PATCH v8 2/9] Run 011_crash_recovery.pl with wal_level=minimal \n\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com> \nSubject: [PATCH v8 3/9] Make sure published XIDs are persistent \n\n+/* compression methods supported */\n+#define BKPIMAGE_COMPRESS_PGLZ 0x04\n+#define BKPIMAGE_COMPRESS_ZLIB 0x08\n+#define BKPIMAGE_COMPRESS_LZ4 0x10\n+#define BKPIMAGE_COMPRESS_ZSTD 0x20\n+#define BKPIMAGE_IS_COMPRESSED(info) \\\n+ ((info & (BKPIMAGE_COMPRESS_PGLZ | BKPIMAGE_COMPRESS_ZLIB | \\\n+ BKPIMAGE_COMPRESS_LZ4 | BKPIMAGE_COMPRESS_ZSTD)) != 0)\n\nYou encouraged saving bits here, so I'm surprised to see that your patches\nuse one bit per compression method: 2 bits to support no/pglz/lz4, 3 to add\nzstd, and the previous patch used 4 bits to also support zlib.\n\nThere are spare bits available for that, but now there can be an inconsistency\nif two bits are set. Also, 2 bits could support 4 methods (including \"no\").\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 21 Jun 2021 22:13:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, Jun 21, 2021 at 10:13:58PM -0500, Justin Pryzby wrote:\n> @Michael: I assume that if you merge this patch, you'd set your animals to use\n> wal_compression=lz4, and then they would fail the recovery tests.\n\nYes, I'd like to do that on my animal dangomushi.\n\n> So the patches that you say are unrelated still seem to me to be a\n> prerequisite .\n\nI may be missing something, of course, but I still don't see why\nthat's necessary? We don't get any test failures on HEAD by switching\nwal_compression to on, no? That's easy enough to test with a two-line\nchange that changes the default.\n\n> +/* compression methods supported */\n> +#define BKPIMAGE_COMPRESS_PGLZ 0x04\n> +#define BKPIMAGE_COMPRESS_ZLIB 0x08\n> +#define BKPIMAGE_COMPRESS_LZ4 0x10\n> +#define BKPIMAGE_COMPRESS_ZSTD 0x20\n> +#define BKPIMAGE_IS_COMPRESSED(info) \\\n> + ((info & (BKPIMAGE_COMPRESS_PGLZ | BKPIMAGE_COMPRESS_ZLIB | \\\n> + BKPIMAGE_COMPRESS_LZ4 | BKPIMAGE_COMPRESS_ZSTD)) != 0)\n> \n> You encouraged saving bits here, so I'm surprised to see that your patches\n> use one bit per compression method: 2 bits to support no/pglz/lz4, 3 to add\n> zstd, and the previous patch used 4 bits to also support zlib.\n\nYeah, I know. I have just finished with that to get something\nreadable for the sake of the tests. As you say, the point is moot\nif we just add one new method, anyway, as we need just one new bit.\nAnd that's what I would like to do for v15 with LZ4 as the resulting\npatch is simple. It would be an idea to discuss more compression\nmethods here once we hear more from users when this is released in the\nfield, re-considering at this point if more is necessary or not.\n--\nMichael",
"msg_date": "Tue, 22 Jun 2021 12:53:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 22, 2021 at 12:53:46PM +0900, Michael Paquier wrote:\n> > So the patches that you say are unrelated still seem to me to be a\n> > prerequisite .\n> \n> I may be missing something, of course, but I still don't see why\n> that's necessary? We don't get any test failures on HEAD by switching\n> wal_compression to on, no? That's easy enough to test with a two-line\n> change that changes the default.\n\nCurious. I found that before a4d75c86bf, there was an issue without the\n\"extra\" patches.\n\n|commit a4d75c86bf15220df22de0a92c819ecef9db3849\n|Author: Tomas Vondra <tomas.vondra@postgresql.org>\n|Date: Fri Mar 26 23:22:01 2021 +0100\n|\n| Extended statistics on expressions\n\nI have no idea why that patch changes the behavior, but before a4d7, this patch\nseries failed like:\n\n|$ time time make -C src/test/recovery check\n...\n|# Failed test 'new xid after restart is greater'\n|# at t/011_crash_recovery.pl line 53.\n|# '539'\n|# >\n|# '539'\n|\n|# Failed test 'xid is aborted after crash'\n|# at t/011_crash_recovery.pl line 57.\n|# got: 'committed'\n|# expected: 'aborted'\n|# Looks like you failed 2 tests of 3.\n|t/011_crash_recovery.pl .............. Dubious, test returned 2 (wstat 512, 0x200)\n|Failed 2/3 subtests \n\nI checked that my most recent WAL compression patch applied on top of\na4d75c86bf works ok without the \"extra\" patches but fails when applied to\na4d75c86bf~1.\n\nI think Andrey has been saying that since this already fails with PGLZ wal\ncompression, we could consider this to be a pre-existing problem. I'm not\nthrilled with that interpretation, but it's not wrong.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 26 Jun 2021 18:11:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sat, Jun 26, 2021 at 06:11:26PM -0500, Justin Pryzby wrote:\n> Curious. I found that before a4d75c86bf, there was an issue without the\n> \"extra\" patches.\n\nIs this issue different than the XID problem not matching when using\nwal_level = minimal in test 011_crash_recovery.pl? I am not sure to\nunderstand if you are \n\n> I have no idea why that patch changes the behavior, but before a4d7, this patch\n> series failed like:\n\nNot seeing the link here. 011_crash_recovery.pl has nothing to do\nwith extended statistics, normally.\n\n> I think Andrey has been saying that since this already fails with PGLZ wal\n> compression, we could consider this to be a pre-existing problem. I'm not\n> thrilled with that interpretation, but it's not wrong.\n\nRemoving \"allows_streaming => 1\" in 011_crash_recovery.pl is enough to\nmake the test fail on HEAD. And the test fails equally without or\nwithout any changes related to wal_compression, so adding or removing\noptions to wal_compression is not going to change anything with that.\nThere is simply no relationship I can spot, though I may be missing of\ncourse an argument here. Let's just discuss this recovery issue where\nit should be discussed (these are patches 0002 and 0003 in the patch\nv9 sent upthread):\nhttps://www.postgresql.org/message-id/20210308.173242.463790587797836129.horikyota.ntt%40gmail.com\n--\nMichael",
"msg_date": "Mon, 28 Jun 2021 16:36:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 04:36:42PM +0900, Michael Paquier wrote:\n> Is this issue different than the XID problem not matching when using\n> wal_level = minimal in test 011_crash_recovery.pl? I am not sure to\n> understand if you are \n\n(This paragraph has been cut in half)\nreferring to a different problem or not.\n--\nMichael",
"msg_date": "Mon, 28 Jun 2021 16:53:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 28 июня 2021 г., в 12:36, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> Removing \"allows_streaming => 1\" in 011_crash_recovery.pl is enough to\n> make the test fail on HEAD. And the test fails equally without or\n> without any changes related to wal_compression, so adding or removing\n> options to wal_compression is not going to change anything with that.\n> There is simply no relationship I can spot, though I may be missing of\n> course an argument here. \nThere is no relationship at all. That test 011_crash_recovery.pl is failing depending on random fluctuation in WAL size. Currently, it does not affect this thread anyhow (except for confusion I made by importing patch with fix from other thread, sorry).\n\n\n> Let's just discuss this recovery issue where\n> it should be discussed (these are patches 0002 and 0003 in the patch\n> v9 sent upthread):\n> https://www.postgresql.org/message-id/20210308.173242.463790587797836129.horikyota.ntt%40gmail.com\n+1.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 28 Jun 2021 13:01:00 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Jun 22, 2021 at 09:11:26AM +0900, Michael Paquier wrote:\n> What about the following error then? Say:\n> \"image at %X/%X compressed with LZ4 not supported by build, block\n> %d\".\n> \n>> Also, maybe %s can be inlined to lz4 in this case.\n> \n> I think that's a remnant of the zstd part of the patch set, where I\n> wanted to have only one translatable message. Sure, I can align lz4\n> with the message.\n> \n>> 2.\n>> > const char *method = \"???\";\n>> Maybe we can use something like \"unknown\" for unknown compression\n>> methods? Or is it too long string for waldump output?\n> \n> A couple of extra bytes for pg_waldump will not matter much. Using\n> \"unknown\" is fine by me.\n\nI have kept 1. as-is for translatability, and included 2.\n\nNow that v15 is open for business, I have looked again at this stuff\nthis morning and committed the LZ4 part after some adjustments:\n- Avoid the use of default in the switches used for the compression,\nso that the compiler would warn when introducing a new value in the enum\nused for wal_compression.\n- Switched to LZ4_compress_default(). LZ4_compress_fast() could be\nused with LZ4_ACCELERATION_DEFAULT, but toast_compression.c uses the\nformer.\n\nI got check-world tested with wal_compression = {lz4,pglz,off}, and\ndid some manual checks, including some stuff without --with-lz4.\nAttached is the rest of the patch set for zstd, for reference, rebased\nwith the changes that have been committed. This still requires proper\nsupport in the MSVC scripts.\n\nMy buildfarm machine has been changed to use wal_compression = lz4,\nwhile on it for HEAD runs.\n--\nMichael",
"msg_date": "Tue, 29 Jun 2021 12:41:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "\n\n> 29 июня 2021 г., в 08:41, Michael Paquier <michael@paquier.xyz> написал(а):\n> \n> Now that v15 is open for business, I have looked again at this stuff\n> this morning and committed the LZ4 part \n\nThat's great, thanks Michael!\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Tue, 29 Jun 2021 11:43:57 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 08:24:12PM -0500, Justin Pryzby wrote:\n> In this patch series, I added compression information to the errcontext from\n> xlog_block_info(), and allow specifying compression levels like zlib-2. I'll\n> rearrange that commit earlier if we decide that's desirable to include.\n\n4035cd5d4 added wal_compress=lz4 and\ne9537321a added wal_compress=zstd\n\nSince 4035cd5d4, pg_waldump has shown compression info (and walinspect\nshows the same since 2258e76f9).\n\nBut if you try to replay WAL on a server which wasn't compiled with\nsupport for the requisite compression methods, it's a bit crummy that it\ndoesn't include in the error message the reason *why* it failed to\nrestore the image, if that's caused by the missing compression method.\n\nThis hunk was from my patch in June, 2021, but wasn't included in the\nfinal patches. This would include the compression info algorithm: 1)\nwhen failing to apply WAL in rm_redo_error_callback(); and, 2) in\nwal_debug.\n\n< 2022-08-31 21:37:53.325 CDT >FATAL: failed to restore block image\n< 2022-08-31 21:37:53.325 CDT >CONTEXT: WAL redo at 1201/1B931F50 for XLOG/FPI_FOR_HINT: ; blkref #0: rel 1663/16888/164320567, blk 8186 FPW, compression method: zstd\n\nIn addition to cases where someone re/compiles postgres locally, I guess this\nwould also improve the situation for PITR and standbys, which might reasonably\nbe run on a different OS, with different OS packages, or with postgres compiled\nseparately.\n\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index 17eeff0720..1ccc51575a 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -10470,7 +10470,17 @@ xlog_block_info(StringInfo buf, XLogReaderState *record)\n> \t\t\t\t\t\t\t rnode.spcNode, rnode.dbNode, rnode.relNode,\n> \t\t\t\t\t\t\t blk);\n> \t\tif (XLogRecHasBlockImage(record, block_id))\n> -\t\t\tappendStringInfoString(buf, \" FPW\");\n> +\t\t{\n> 
+\t\t\tint compression =\n> +\t\t\t\tBKPIMAGE_IS_COMPRESSED(record->blocks[block_id].bimg_info) ?\n> +\t\t\t\tBKPIMAGE_COMPRESSION(record->blocks[block_id].bimg_info) : -1;\n> +\t\t\tif (compression == -1)\n> +\t\t\t\tappendStringInfoString(buf, \" FPW\");\n> +\t\t\telse\n> +\t\t\t\tappendStringInfo(buf, \" FPW, compression method %d/%s\",\n> +\t\t\t\t\tcompression, wal_compression_name(compression));\n> +\t\t}\n> +",
"msg_date": "Fri, 2 Sep 2022 06:55:11 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 06:55:11AM -0500, Justin Pryzby wrote:\n> On Sun, Jun 13, 2021 at 08:24:12PM -0500, Justin Pryzby wrote:\n> > In this patch series, I added compression information to the errcontext from\n> > xlog_block_info(), and allow specifying compression levels like zlib-2. I'll\n> > rearrange that commit earlier if we decide that's desirable to include.\n> \n> 4035cd5d4 added wal_compress=lz4 and\n> e9537321a added wal_compress=zstd\n> \n> Since 4035cd5d4, pg_waldump has shown compression info (and walinspect\n> shows the same since 2258e76f9).\n> \n> But if you try to replay WAL on a server which wasn't compiled with\n> support for the requisite compression methods, it's a bit crummy that it\n> doesn't include in the error message the reason *why* it failed to\n> restore the image, if that's caused by the missing compression method.\n\nThat's also hitting an elog().\n\n2022-09-04 15:56:29.916 CDT startup[2625] FATAL: XX000: failed to restore block image\n2022-09-04 15:56:29.916 CDT startup[2625] DETAIL: image at 0/1D11CB8 compressed with zstd not supported by build, block 0\n2022-09-04 15:56:29.916 CDT startup[2625] CONTEXT: WAL redo at 0/1D11CB8 for Heap/DELETE: off 50 flags 0x00 KEYS_UPDATED ; blkref #0: rel 1663/16384/2610, blk 4 FPW\n2022-09-04 15:56:29.916 CDT startup[2625] LOCATION: XLogReadBufferForRedoExtended, xlogutils.c:396\n\n(gdb) bt\n#0 report_invalid_record (state=0x555555e33ff0, fmt=0x555555b1c1a8 \"image at %X/%X compressed with %s not supported by build, block %d\") at xlogreader.c:74\n#1 0x00005555556beeec in RestoreBlockImage (record=record@entry=0x555555e33ff0, block_id=block_id@entry=0 '\\000', page=page@entry=0x7fffee9bdc00 \"\") at xlogreader.c:2078\n#2 0x00005555556c5d39 in XLogReadBufferForRedoExtended (record=record@entry=0x555555e33ff0, block_id=block_id@entry=0 '\\000', mode=mode@entry=RBM_NORMAL, get_cleanup_lock=get_cleanup_lock@entry=false, \n buf=buf@entry=0x7fffffffd760) at xlogutils.c:395\n#3 
0x00005555556c5e4a in XLogReadBufferForRedo (record=record@entry=0x555555e33ff0, block_id=block_id@entry=0 '\\000', buf=buf@entry=0x7fffffffd760) at xlogutils.c:320\n#4 0x000055555565bd5b in heap_xlog_delete (record=0x555555e33ff0) at heapam.c:9032\n#5 0x00005555556622b7 in heap_redo (record=<optimized out>) at heapam.c:9836\n#6 0x00005555556c15ed in ApplyWalRecord (xlogreader=0x555555e33ff0, record=record@entry=0x7fffee2b6820, replayTLI=replayTLI@entry=0x7fffffffd870) at ../../../../src/include/access/xlog_internal.h:379\n#7 0x00005555556c4c30 in PerformWalRecovery () at xlogrecovery.c:1725\n#8 0x00005555556b7f23 in StartupXLOG () at xlog.c:5291\n#9 0x0000555555ac4491 in InitPostgres (in_dbname=in_dbname@entry=0x555555e09390 \"postgres\", dboid=dboid@entry=0, username=username@entry=0x555555dedda0 \"pryzbyj\", useroid=useroid@entry=0, \n load_session_libraries=<optimized out>, override_allow_connections=override_allow_connections@entry=false, out_dbname=0x0) at postinit.c:731\n#10 0x000055555598471f in PostgresMain (dbname=0x555555e09390 \"postgres\", username=username@entry=0x555555dedda0 \"pryzbyj\") at postgres.c:4085\n#11 0x00005555559851b0 in PostgresSingleUserMain (argc=5, argv=0x555555de7530, username=0x555555dedda0 \"pryzbyj\") at postgres.c:3986\n#12 0x0000555555840533 in main (argc=5, argv=0x555555de7530) at main.c:194\n\nI guess it should be promoted to an ereport(), since it's now a\nuser-facing error rather than an internal one.\n\n$ ./tmp_install.without-zstd/usr/local/pgsql/bin/postgres -D ./src/test/regress/tmp_check/data\n2022-09-04 15:28:37.446 CDT postmaster[29964] LOG: starting PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n2022-09-04 15:28:37.446 CDT postmaster[29964] LOG: listening on IPv4 address \"127.0.0.1\", port 5432\n2022-09-04 15:28:37.528 CDT postmaster[29964] LOG: listening on Unix socket \"/tmp/.s.PGSQL.5432\"\n2022-09-04 15:28:37.587 CDT startup[29972] LOG: database system 
was interrupted while in recovery at 2022-09-04 15:27:44 CDT\n2022-09-04 15:28:37.587 CDT startup[29972] HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.\n2022-09-04 15:28:38.010 CDT startup[29972] LOG: database system was not properly shut down; automatic recovery in progress\n2022-09-04 15:28:38.038 CDT startup[29972] LOG: redo starts at 0/1D118C0\n2022-09-04 15:28:38.039 CDT startup[29972] LOG: could not stat file \"pg_tblspc/16502\": No such file or directory\n2022-09-04 15:28:38.039 CDT startup[29972] CONTEXT: WAL redo at 0/1D11970 for Tablespace/DROP: 16502\n2022-09-04 15:28:38.039 CDT startup[29972] FATAL: failed to restore block image\n2022-09-04 15:28:38.039 CDT startup[29972] DETAIL: image at 0/1D11CB8 compressed with zstd not supported by build, block 0\n2022-09-04 15:28:38.039 CDT startup[29972] CONTEXT: WAL redo at 0/1D11CB8 for Heap/DELETE: off 50 flags 0x00 KEYS_UPDATED ; blkref #0: rel 1663/16384/2610, blk 4 FPW\n\ndiff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c\nindex 0cda22597fe..01c7454bcc7 100644\n--- a/src/backend/access/transam/xlogutils.c\n+++ b/src/backend/access/transam/xlogutils.c\n@@ -393,7 +393,11 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,\n \t\t\t\t\t\t\t\t\t prefetch_buffer);\n \t\tpage = BufferGetPage(*buf);\n \t\tif (!RestoreBlockImage(record, block_id, page))\n-\t\t\telog(ERROR, \"failed to restore block image\");\n+\t\t\tereport(ERROR,\n+\t\t\t\t\terrcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+\t\t\t\t\terrmsg(\"failed to restore block image\"),\n+\t\t\t\t\terrdetail(\"%s\", record->errormsg_buf));\n+\n \n \t\t/*\n \t\t * The page may be uninitialized. If so, we can't set the LSN because\n\n\n",
"msg_date": "Sun, 4 Sep 2022 19:23:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "Hi,\n\nI have a small question for those involved:\n\nContext: I'm trying to get the smallest BKPIMAGE size possible\nregardless of CPU cost, which means I'm trying multiple compressions\nto get the smallest data possible with the options available. This\nmeans ignoring the wal_compression GUC in XLogCompressBackupBlock and\nbrute-forcing the smallest compression available, potentially layering\ncompression methods, and returning the bimg_info compression flags\nthat will be stored in XLogCompressBackupBlock and used to decompress\nthe block's data.\n\nIs there a prescribed order of compression algorithms to apply when\n(de)compressing full page images in WAL, when the bimg_info has more\nthan one BKPIMAGE_COMPRESS_*-flags set? That is, when I want to check\nthe compression of the block image with both ZSTD and LZ4, which order\nis the ordering indicated by bimg_info = (COMPRESS_LZ4 |\nCOMPRESS_ZSTD)?\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 5 Sep 2022 14:45:57 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Mon, Sep 05, 2022 at 02:45:57PM +0200, Matthias van de Meent wrote:\n> Hi,\n> \n> I have a small question for those involved:\n\nI suggest to \"reply all\"\n\n> Context: I'm trying to get the smallest BKPIMAGE size possible\n> regardless of CPU cost, which means I'm trying multiple compressions\n> to get the smallest data possible with the options available. This\n> means ignoring the wal_compression GUC in XLogCompressBackupBlock and\n> brute-forcing the smallest compression available, potentially layering\n> compression methods, and returning the bimg_info compression flags\n> that will be stored in XLogCompressBackupBlock and used to decompress\n> the block's data.\n\nI think once you apply one compression method/algorithm, you shouldn't\nexpect other \"layered\" methods to be able to compress it at all. I\nthink you'll get better compression by using a higher compression level\nin zstd (or zlib) than with any combination of methods. A patch for\nconfigurable compression level was here:\n\nhttps://postgr.es/m/20220222231948.GJ9008@telsasoft.com\n\n> Is there a prescribed order of compression algorithms to apply when\n> (de)compressing full page images in WAL, when the bimg_info has more\n> than one BKPIMAGE_COMPRESS_*-flags set? That is, when I want to check\n> the compression of the block image with both ZSTD and LZ4, which order\n> is the ordering indicated by bimg_info = (COMPRESS_LZ4 |\n> COMPRESS_ZSTD)?\n\nThere's currently a separate bit for each method, but it's not supported\nto \"stack\" them (See also the \"Save bits\" patch, above).\n\nThis came up before when Greg asked about it.\nhttps://www.postgresql.org/message-id/20210622031358.GF29179@telsasoft.com\nhttps://www.postgresql.org/message-id/20220131222800.GY23027@telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Sep 2022 08:02:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Sun, Sep 04, 2022 at 07:23:20PM -0500, Justin Pryzby wrote:\n> That's also hitting an elog().\n> \n> 2022-09-04 15:56:29.916 CDT startup[2625] FATAL: XX000: failed to restore block image\n> 2022-09-04 15:56:29.916 CDT startup[2625] DETAIL: image at 0/1D11CB8 compressed with zstd not supported by build, block 0\n> 2022-09-04 15:56:29.916 CDT startup[2625] CONTEXT: WAL redo at 0/1D11CB8 for Heap/DELETE: off 50 flags 0x00 KEYS_UPDATED ; blkref #0: rel 1663/16384/2610, blk 4 FPW\n> 2022-09-04 15:56:29.916 CDT startup[2625] LOCATION: XLogReadBufferForRedoExtended, xlogutils.c:396\n> \n> I guess it should be promoted to an ereport(), since it's now a\n> user-facing error rathere than an internal one.\n> \n> diff --git a/src/backend/access/transam/xlogutils.c b/src/backend/access/transam/xlogutils.c\n> index 0cda22597fe..01c7454bcc7 100644\n> --- a/src/backend/access/transam/xlogutils.c\n> +++ b/src/backend/access/transam/xlogutils.c\n> @@ -393,7 +393,11 @@ XLogReadBufferForRedoExtended(XLogReaderState *record,\n> \t\t\t\t\t\t\t\t\t prefetch_buffer);\n> \t\tpage = BufferGetPage(*buf);\n> \t\tif (!RestoreBlockImage(record, block_id, page))\n> -\t\t\telog(ERROR, \"failed to restore block image\");\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\terrcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t\terrmsg(\"failed to restore block image\"),\n> +\t\t\t\t\terrdetail(\"%s\", record->errormsg_buf));\n> +\n\nYes, you are right here. elog()'s should never be used for things\nthat could be triggered by the user, even if this one depends on the\nbuild options. I think that the error message ought to be updated as\n\"could not restore block image\" instead, to be more in line with the\nproject policy.\n--\nMichael",
"msg_date": "Tue, 6 Sep 2022 15:47:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 03:47:05PM +0900, Michael Paquier wrote:\n> On Sun, Sep 04, 2022 at 07:23:20PM -0500, Justin Pryzby wrote:\n>> \t\tpage = BufferGetPage(*buf);\n>> \t\tif (!RestoreBlockImage(record, block_id, page))\n>> -\t\t\telog(ERROR, \"failed to restore block image\");\n>> +\t\t\tereport(ERROR,\n>> +\t\t\t\t\terrcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n>> +\t\t\t\t\terrmsg(\"failed to restore block image\"),\n>> +\t\t\t\t\terrdetail(\"%s\", record->errormsg_buf));\n>> +\n> \n> Yes, you are right here. elog()'s should never be used for things\n> that could be triggered by the user, even if this one depends on the\n> build options. I think that the error message ought to be updated as\n> \"could not restore block image\" instead, to be more in line with the\n> project policy.\n\nAt the end, I have not taken the approach to use errdetail() for this\nproblem as errormsg_buf is designed to include an error string. So, I\nhave finished by refining the error messages generated in\nRestoreBlockImage(), consuming them with an ERRCODE_INTERNAL_ERROR.\n\nThis approach addresses a second issue, actually, because we have\nnever provided any context when there are inconsistencies in the\ndecoded record for max_block_id, has_image or in_use when restoring a\nblock image. This one is older than v15, but we have received\ncomplaints about that for ~14 as far as I know, so I would leave this\nchange for HEAD and REL_15_STABLE.\n--\nMichael",
"msg_date": "Wed, 7 Sep 2022 15:29:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 03:29:08PM +0900, Michael Paquier wrote:\n> At the end, I have not taken the approach to use errdetail() for this\n> problem as errormsg_buf is designed to include an error string. So, I\n> have finished by refining the error messages generated in\n> RestoreBlockImage(), consuming them with an ERRCODE_INTERNAL_ERROR.\n\nThe cases involving max_block_id, has_image and in_use are still \"can't\nhappen\" cases, which used to hit elog(), and INTERNAL_ERROR sounds right\nfor them.\n\nBut that's also what'll happen when attempting to replay WAL using a postgres\nbuild which doesn't support the necessary compression method. I ran into this\nwhile compiling postgres locally while reporting the recovery_prefetch bug,\nwhen I failed to compile --with-zstd. Note that basebackup does:\n\nsrc/backend/backup/basebackup_zstd.c- ereport(ERROR,\nsrc/backend/backup/basebackup_zstd.c- (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\nsrc/backend/backup/basebackup_zstd.c: errmsg(\"zstd compression is not supported by this build\")));\nsrc/backend/backup/basebackup_zstd.c- return NULL; /* keep compiler quiet */\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Sep 2022 03:57:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 03:57:29AM -0500, Justin Pryzby wrote:\n> But that's also what'll happen when attempting to replay WAL using a postgres\n> build which doesn't support the necessary compression method. I ran into this\n> while compiling postgres locally while reporting the recovery_prefetch bug,\n> when I failed to compile --with-zstd.\n\nWell, I am aware of that as that's how I have tested my change. I\nthink that the case you are mentioning is really different than this\nchange, though. The case you are mentioning gets triggered with the\nserver-side compression of pg_basebackup with a client application,\nwhile the case of a block image restored can only happen when using\ninconsistent build options between a primary and a standby (at least\nin core, for the code paths touched by this patch). Before posting my\nprevious patch, I have considered a few options:\n- Extend errormsg_buf with an error code, but the frontend does not\ncare about that.\n- Make RestoreBlockImage() a backend-only routine and provide a better\nerror control without filling in errormsg_buf, but I'd like to believe\nthat this code is useful for some frontend code even if core does not\nuse it yet in a frontend context.\n- Change the signature of RestoreBlockImage() to return an enum with\nat least a tri state instead of a boolean. For this one I could not\nconvince myself that this is worth the complexity, as we are talking\nabout inconsistent build options between nodes doing physical\nreplication, and the error message is the useful piece to know what's\nhappening (frontends are only going to consume the error message\nanyway).\n--\nMichael",
"msg_date": "Wed, 7 Sep 2022 19:02:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 07:02:07PM +0900, Michael Paquier wrote:\n> Before posting my previous patch, I have considered a few options:\n> - Extend errormsg_buf with an error code, but the frontend does not\n> care about that.\n> - Make RestoreBlockImage() a backend-only routine and provide a better\n> error control without filling in errormsg_buf, but I'd like to believe\n> that this code is useful for some frontend code even if core does not\n> use it yet in a frontend context.\n> - Change the signature of RestoreBlockImage() to return an enum with\n> at least a tri state instead of a boolean. For this one I could not\n> convince myself that this is worth the complexity, as we are talking\n> about inconsistent build options between nodes doing physical\n> replication, and the error message is the useful piece to know what's\n> happening (frontends are only going to consume the error message\n> anyway).\n\nAfter a second look, I was not feeling enthusiastic about adding more\ncomplications in this code path for this case, so I have finished by\napplying my previous patch to address this open item.\n\nI am wondering if there is a use-case for backpatching something like\nthat to older versions though? FPW compression is available since 9.5\nbut the code has never consumed the error message produced by\nRestoreBlockImage() when pglz fails to decompress an image, and there\nis equally no exact information provided when the status data of the\nblock in the record is incorrect, even if recovery provides some\ncontext associated to the record being replayed.\n--\nMichael",
"msg_date": "Fri, 9 Sep 2022 10:18:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Different compression methods for FPI"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile investigating a strange \"out of shared memory\" error reported on\nthe French BBS [1], I noticed that 09adc9a8c09 (adding Robert in Cc) changed\nShmemAlloc alignment to CACHELINEALIGN but didn't update any related code that\ndepended on it.\n\nMost of the core code isn't impacted as it doesn't have to reserve any shared\nmemory, but AFAICT pg_prewarm and pg_stat_statements can now slightly\nunderestimate the amount of shared memory they'll use, and similarly for any\nthird-party extension that still relies on MAXALIGN alignment. As a consequence\nsuch extensions can consume a few hundred bytes more than they reserved, which\nwill probably be borrowed from the lock hashtables' reserved size that isn't\nalloced immediately. This can later lead to ShmemAlloc failing when trying to\nacquire a lock while the hash table is almost full but should still have enough\nroom to store it, which could explain the error reported.\n\nPFA a patch that fixes pg_prewarm and pg_stat_statements explicit alignment to\nCACHELINEALIGN, and also updates the alignment in hash_estimate_size() to what I\nthink ShmemInitHash will actually consume.\n\n[1] https://forums.postgresql.fr/viewtopic.php?pid=32138#p32138 sorry, it's all\nin French.",
"msg_date": "Sat, 27 Feb 2021 16:08:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Shared memory size computation oversight?"
},
{
"msg_contents": "\n\n\nHi,\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Saturday, February 27, 2021 9:08 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> While investigating a strange \"out of shared memory\" error reported on\n> the French BBS [1], I noticed that 09adc9a8c09 (adding Robert in Cc) changed\n> ShmemAlloc alignment to CACHELINEALIGN but didn't update any related code that\n> depended on it.\n\nNice catch.\n\n>\n> Most of the core code isn't impacted as it doesn't have to reserve any shared\n> memory, but AFAICT pg_prewarm and pg_stat_statements can now slightly\n> underestimate the amount of shared memory they'll use, and similarly for any\n> third-party extension that still relies on MAXALIGN alignment. As a consequence\n> such extensions can consume a few hundred bytes more than they reserved, which\n> will probably be borrowed from the lock hashtables' reserved size that isn't\n> alloced immediately. This can later lead to ShmemAlloc failing when trying to\n> acquire a lock while the hash table is almost full but should still have enough\n> room to store it, which could explain the error reported.\n>\n> PFA a patch that fixes pg_prewarm and pg_stat_statements explicit alignment to\n> CACHELINEALIGN, and also updates the alignment in hash_estimate_size() to what I\n> think ShmemInitHash will actually consume.\n\n\nPlease excuse me as I speak mostly from the side of ignorance. I am slightly curious\nto understand something in your patch, if you can be kind enough to explain it to\nme.\n\nThe commit 09adc9a8c09 you pointed out, as far as I can see, changed the total\nsize of the shared memory, not individual bits. It did explain that the users of\nthat space had properly padded their data, but even assuming that they did not do\nthat as asked, the result would (or rather should) be cache misses, not running out\nof reserved memory.\n\nMy limited understanding is also based in a comment in CreateSharedMemoryAndSemaphores()\n\n /*\n * Size of the Postgres shared-memory block is estimated via\n * moderately-accurate estimates for the big hogs, plus 100K for the\n * stuff that's too small to bother with estimating.\n *\n * We take some care during this phase to ensure that the total size\n * request doesn't overflow size_t. If this gets through, we don't\n * need to be so careful during the actual allocation phase.\n */\n size = 100000;\n\nOf course, my argument falls bare if the size estimates of each of the elements are\nrather underestimated. Indeed this is the argument in the present patch expressed in\ncode in hash_estimate_size most prominently, here:\n\n size = add_size(size,\n- mul_size(nElementAllocs,\n- mul_size(elementAllocCnt, elementSize)));\n+ CACHELINEALIGN(Nmul_size(nElementAllocs,\n+ mul_size(elementAllocCnt, elementSize))));\n\n(small note, Nmul_size is a typo of mul_size, but that is minor; by amending it the\ncode compiles).\n\nTo conclude, the running out of shared memory seems to me to be fixed rather\nvaguely with this patch. I am not claiming that increasing the memory used by\nthe elements is not needed, I am claiming that I cannot clearly see how that\nspecific increase is needed.\n\nThank you for your patience,\n//Georgios -- https://www.vmware.com\n\n>\n> [1] https://forums.postgresql.fr/viewtopic.php?pid=32138#p32138 sorry, it's all\n> in French.\n\n\n\n\n",
"msg_date": "Wed, 03 Mar 2021 15:37:20 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "Georgios <gkokolatos@protonmail.com> writes:\n> My limited understanding is also based in a comment in CreateSharedMemoryAndSemaphores()\n\n> * Size of the Postgres shared-memory block is estimated via\n> * moderately-accurate estimates for the big hogs, plus 100K for the\n> * stuff that's too small to bother with estimating.\n\nRight. That 100K slop factor is capable of hiding a multitude of sins.\n\nI have not looked at this patch, but I think the concern is basically that\nif we have space-estimation infrastructure that misestimates what it is\nsupposed to estimate, then if that infrastructure is used to estimate the\nsize of any of the \"big hog\" data structures, we might misestimate by\nenough that the slop factor wouldn't hide it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Mar 2021 11:23:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 03:37:20PM +0000, Georgios wrote:\n> \n> Please excuse me as I speak mostly from the side of ignorance. I am slightly curious\n> to understand something in your patch, if you can be kind enough to explain it to\n> me.\n> \n> The commit 09adc9a8c09 you pointed out, as far as I can see, changed the total\n> size of the shared memory, not individual bits. It did explain that the users of\n> that space had properly padded their data, but even assuming that they did not do\n> that as asked, the result would (or rather should) be cache misses, not running out\n> of reserved memory.\n\nIt actually changed the allocation of individual bits inside the fixed-size\nshared memory, as underlying users end up calling ShmemAlloc(),\nbut did not accurately change the actual total size of shared memory, as many\nestimates are done either still using MAXALIGN or simply not accounting for the\npadding.\n\n> My limited understanding is also based in a comment in CreateSharedMemoryAndSemaphores()\n> \n> /*\n> * Size of the Postgres shared-memory block is estimated via\n> * moderately-accurate estimates for the big hogs, plus 100K for the\n> * stuff that's too small to bother with estimating.\n> *\n> * We take some care during this phase to ensure that the total size\n> * request doesn't overflow size_t. If this gets through, we don't\n> * need to be so careful during the actual allocation phase.\n> */\n> size = 100000;\n> \n> Of course, my argument falls bare if the size estimates of each of the elements are\n> rather underestimated. Indeed this is the argument in the present patch expressed in\n> code in hash_estimate_size most prominently, here:\n> \n> size = add_size(size,\n> - mul_size(nElementAllocs,\n> - mul_size(elementAllocCnt, elementSize)));\n> + CACHELINEALIGN(Nmul_size(nElementAllocs,\n> + mul_size(elementAllocCnt, elementSize))));\n> \n> (small note, Nmul_size is a typo of mul_size, but that is minor; by amending it the\n> code compiles).\n\nOops, I'll fix that.\n\n> To conclude, the running out of shared memory seems to me to be fixed rather\n> vaguely with this patch. I am not claiming that increasing the memory used by\n> the elements is not needed, I am claiming that I cannot clearly see how that\n> specific increase is needed.\n\nI'm also not entirely convinced that those fixes are enough to explain the \"out\nof shared memory\" issue originally reported, but that's the only class of\nproblem that I could spot which could possibly explain it.\n\nAs Robert initially checked, it should be at worst a 6kB underestimate with\ndefault parameters (it may be slightly more now as we have more shared memory\nusers, but it should be the same order of magnitude), and I don't think that\nit will vary a lot with huge shared_buffers and/or max_connections.\n\nThose 6kB should be compared to how much room the initial 100k gives vs how\nmuch the small stuff actually consumes.\n\nNote that there are still some similar issues in the code. A quick look at\nProcGlobalShmemSize() vs InitProcGlobal() seems to indicate that it's missing 5\nCACHELINEALIGN in the estimate. Unless someone objects, I'll soon do a full\nreview of all the estimates in CreateSharedMemoryAndSemaphores() and fix all\nadditional occurrences that I can find.\n\n\n",
"msg_date": "Thu, 4 Mar 2021 00:36:08 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 11:23:54AM -0500, Tom Lane wrote:\n> Georgios <gkokolatos@protonmail.com> writes:\n> > My limited understanding is also based in a comment in CreateSharedMemoryAndSemaphores()\n> \n> > * Size of the Postgres shared-memory block is estimated via\n> > * moderately-accurate estimates for the big hogs, plus 100K for the\n> > * stuff that's too small to bother with estimating.\n> \n> Right. That 100K slop factor is capable of hiding a multitude of sins.\n> \n> I have not looked at this patch, but I think the concern is basically that\n> if we have space-estimation infrastructure that misestimates what it is\n> supposed to estimate, then if that infrastructure is used to estimate the\n> size of any of the \"big hog\" data structures, we might misestimate by\n> enough that the slop factor wouldn't hide it.\n\nExactly. And now that I looked deeper I can see that multiple estimates are\nentirely ignoring the padding alignment (e.g. ProcGlobalShmemSize()), which can\nexceed the 6kB originally estimated by Robert.\n\n\n",
"msg_date": "Thu, 4 Mar 2021 00:40:52 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 12:40:52AM +0800, Julien Rouhaud wrote:\n> On Wed, Mar 03, 2021 at 11:23:54AM -0500, Tom Lane wrote:\n>> I have not looked at this patch, but I think the concern is basically that\n>> if we have space-estimation infrastructure that misestimates what it is\n>> supposed to estimate, then if that infrastructure is used to estimate the\n>> size of any of the \"big hog\" data structures, we might misestimate by\n>> enough that the slop factor wouldn't hide it.\n> \n> Exactly. And now that I looked deeper I can see that multiple estimates are\n> entirely ignoring the padding alignment (e.g. ProcGlobalShmemSize()), which can\n> exceed the 6kB originally estimated by Robert.\n\nWe are going to have a hard time catching up all the places that are\ndoing an incorrect estimation, and have an even harder time making\nsure that similar errors don't happen in the future. Should we add a \n{add,mul}_shmem_size() to make sure that the size calculated is\ncorrectly aligned, that uses CACHELINEALIGN before returning the\nresult size?\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 16:05:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 04:05:10PM +0900, Michael Paquier wrote:\n> On Thu, Mar 04, 2021 at 12:40:52AM +0800, Julien Rouhaud wrote:\n> > On Wed, Mar 03, 2021 at 11:23:54AM -0500, Tom Lane wrote:\n> >> I have not looked at this patch, but I think the concern is basically that\n> >> if we have space-estimation infrastructure that misestimates what it is\n> >> supposed to estimate, then if that infrastructure is used to estimate the\n> >> size of any of the \"big hog\" data structures, we might misestimate by\n> >> enough that the slop factor wouldn't hide it.\n> > \n> > Exactly. And now that I looked deeper I can see that multiple estimates are\n> > entirely ignoring the padding alignment (e.g. ProcGlobalShmemSize()), which can\n> > exceed the 6kB originally estimated by Robert.\n> \n> We are going to have a hard time catching up all the places that are\n> doing an incorrect estimation, and have an even harder time making\n> sure that similar errors don't happen in the future. Should we add a \n> {add,mul}_shmem_size() to make sure that the size calculated is\n> correctly aligned, that uses CACHELINEALIGN before returning the\n> result size?\n\nI was also considering adding new (add|mul)_*_size functions to avoid having\ntoo messy code. I'm not terribly happy with xxx_shm_size(), as not all calls to\nthose functions would require an alignment. Maybe (add|mul)shmemalign_size?\n\nBut before modifying dozens of calls, should we really fix those or only\nincrease a bit the \"slop factor\", or a mix of it?\n\nFor instance, I can see that BackendStatusShmemSize() never had\nany padding consideration, while others do.\n\nMaybe only fixing contribs, some macro like PredXactListDataSize that already\ndoes a MAXALIGN, SimpleLruShmemSize and hash_estimate_size() would be a short\npatch and should significantly improve the estimation.\n\n\n",
"msg_date": "Thu, 4 Mar 2021 15:23:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 03:23:33PM +0800, Julien Rouhaud wrote:\n> I was also considering adding new (add|mul)_*_size functions to avoid having\n> too messy code. I'm not terribly happy with xxx_shm_size(), as not all calls to\n> those functions would require an alignment. Maybe (add|mul)shmemalign_size?\n> \n> But before modifying dozens of calls, should we really fix those or only\n> increase a bit the \"slop factor\", or a mix of it?\n> \n> For instance, I can see that BackendStatusShmemSize() never had\n> any padding consideration, while others do.\n> \n> Maybe only fixing contribs, some macro like PredXactListDataSize that already\n> does a MAXALIGN, SimpleLruShmemSize and hash_estimate_size() would be a short\n> patch and should significantly improve the estimation.\n\nThe lack of complaints in this area looks to me like a sign that we\nmay not really need to backpatch something, so I would not be against\nprecise surgery, with a separate set of {add,mul}_size() routines\nthat are used where appropriate, so that it is easy to track down which size\nestimations expect extra padding. I would be curious to hear more\nthoughts from others here.\n\nThat said, calling a new routine something like add_shmem_align_size\nmakes its purpose clear.\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 17:21:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, March 4, 2021 9:21 AM, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 04, 2021 at 03:23:33PM +0800, Julien Rouhaud wrote:\n>\n> > I was also considering adding new (add|mul)_*_size functions to avoid having\n> > too messy code. I'm not terribly happy with xxx_shm_size(), as not all calls to\n> > those functions would require an alignment. Maybe (add|mul)shmemalign_size?\n> > But before modifying dozens of calls, should we really fix those or only\n> > increase a bit the \"slop factor\", or a mix of it?\n> > For instance, I can see that BackendStatusShmemSize() never had\n> > any padding consideration, while others do.\n> > Maybe only fixing contribs, some macro like PredXactListDataSize that already\n> > does a MAXALIGN, SimpleLruShmemSize and hash_estimate_size() would be a short\n> > patch and should significantly improve the estimation.\n>\n> The lack of complaints in this area looks to me like a sign that we\n> may not really need to backpatch something, so I would not be against\n> precise surgery, with a separate set of {add,mul}_size() routines\n> that are used where appropriate, so that it is easy to track down which size\n> estimations expect extra padding. I would be curious to hear more\n> thoughts from others here.\n>\n> That said, calling a new routine something like add_shmem_align_size\n> makes its purpose clear.\n\nMy limited opinion on the matter, after spending some time yesterday going through\nthe related code, is that with the current API it is easy to miss something\nand underestimate or just be sloppy. If only from the readability point of\nview, adding the proposed add_shmem_align_size will be beneficial.\n\nI hold no opinion on backpatching.\n\n//Georgios\n\n>\n> --\n>\n> Michael\n\n\n\n\n",
"msg_date": "Thu, 04 Mar 2021 08:43:51 +0000",
"msg_from": "Georgios <gkokolatos@protonmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 08:43:51AM +0000, Georgios wrote:\n> \n> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\n> On Thursday, March 4, 2021 9:21 AM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> > On Thu, Mar 04, 2021 at 03:23:33PM +0800, Julien Rouhaud wrote:\n> >\n> > > I was also considering adding new (add|mul)_*_size functions to avoid having\n> > > too messy code. I'm not terribly happy with xxx_shm_size(), as not all calls to\n> > > those functions would require an alignment. Maybe (add|mul)shmemalign_size?\n> > > But before modifying dozens of calls, should we really fix those or only\n> > > increase a bit the \"slop factor\", or a mix of it?\n> > > For instance, I can see that BackendStatusShmemSize() never had\n> > > any padding consideration, while others do.\n> > > Maybe only fixing contribs, some macro like PredXactListDataSize that already\n> > > does a MAXALIGN, SimpleLruShmemSize and hash_estimate_size() would be a short\n> > > patch and should significantly improve the estimation.\n> >\n> > The lack of complaints in this area looks to me like a sign that we\n> > may not really need to backpatch something, so I would not be against\n> > precise surgery, with a separate set of {add,mul}_size() routines\n> > that are used where appropriate, so that it is easy to track down which size\n> > estimations expect extra padding. I would be curious to hear more\n> > thoughts from others here.\n> >\n> > That said, calling a new routine something like add_shmem_align_size\n> > makes its purpose clear.\n> \n> My limited opinion on the matter, after spending some time yesterday going through\n> the related code, is that with the current API it is easy to miss something\n> and underestimate or just be sloppy. If only from the readability point of\n> view, adding the proposed add_shmem_align_size will be beneficial.\n> \n> I hold no opinion on backpatching.\n\nSo here's a v2 patch which hopefully accounts for all unaccounted alignment\npadding. There's also a significant change in the shared hash table size\nestimation. AFAICT, allocation will be done this way:\n\n- num_freelists allocs of init_size / num_freelists entries (on average) for\n a partitioned htab, num_freelists being 1 for a non-partitioned table,\n NUM_FREELISTS (32) otherwise.\n\n- then the rest of the entries, if any, are alloced in groups of\n choose_nelem_alloc() entries\n\nWith this patch applied, I see an extra 16KB of shared memory being requested.",
"msg_date": "Fri, 5 Mar 2021 01:52:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "I spent a bit of time looking into this. Using current HEAD, I\ninstrumented CreateSharedMemoryAndSemaphores to log the size estimates\nreturned by the various estimation subroutines, plus the shared memory\nspace actually consumed (i.e., the change in ShmemSegHdr->freeoffset)\nby the various shared memory initialization functions. There were only\ntwo estimates that were way off: LockShmemSize requested 1651771 more\nbytes than InitLocks actually consumed, and PredicateLockShmemSize\nrequested 167058 more bytes than InitPredicateLocks consumed. I believe\nboth of those are intentional, reflecting space that may be eaten by the\nlock tables later.\n\nMeanwhile, looking at ShmemSegHdr->freeoffset vs ShmemSegHdr->totalsize,\nthe actual remaining shmem space after postmaster startup is 1919488\nbytes. (Running the core regression tests doesn't consume any of that\nremaining space, btw.) Subtracting the extra lock-table space, we have\n100659 bytes to spare, which is as near as makes no difference to the\nintended slop of 100000 bytes.\n\nMy conclusion is that we don't need to do anything, indeed the proposed\nchanges will probably just lead to overestimation.\n\nIt's certainly possible that there's something amiss somewhere. These\nnumbers were all taken with out-of-the-box configuration, so it could be\nthat changing some postgresql.conf entries would expose that some module\nis not scaling its request correctly. Also, I don't have any extensions\nloaded, so this proves nothing about the correctness of any of those.\nBut it appears to me that the general scheme is working perfectly fine,\nso we do not need to complicate it.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 04 Mar 2021 12:53:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 12:53:54PM -0500, Tom Lane wrote:\n> My conclusion is that we don't need to do anything, indeed the proposed\n> changes will probably just lead to overestimation.\n> \n> It's certainly possible that there's something amiss somewhere. These\n> numbers were all taken with out-of-the-box configuration, so it could be\n> that changing some postgresql.conf entries would expose that some module\n> is not scaling its request correctly. Also, I don't have any extensions\n> loaded, so this proves nothing about the correctness of any of those.\n> But it appears to me that the general scheme is working perfectly fine,\n> so we do not need to complicate it.\n\nThanks for looking at it. I agree that most of the changes aren't worth the\ncomplication and risk to overestimate the shmem size.\n\nBut I did some more experiments comparing the current version and the patched\nversion of the lock and pred lock shared hash table size estimations with some\nconfiguration changes, and I find the following delta in the estimated size:\n\n- original configuration : 15kB\n- max_connection = 1000 : 30kB\n- max_connection = 1000 and max_locks_per_xact = 256 : 96kB\n\nI don't know if my version is totally wrong or not, but if it's not it could be\nworthwhile to apply the hash tab part as it would mean that it doesn't scale\nproperly.\n\n\n",
"msg_date": "Fri, 5 Mar 2021 14:05:22 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On 27.02.21 09:08, Julien Rouhaud wrote:\n> PFA a patch that fixes pg_prewarm and pg_stat_statements explicit alignment to\n> CACHELINEALIGN, and also update the alignment in hash_estimate_size() to what I\n> think ShmemInitHash will actually consume.\n\nFor extensions, wouldn't it make things better if \nRequestAddinShmemSpace() applied CACHELINEALIGN() to its argument?\n\nIf you consider the typical sequence of RequestAddinShmemSpace(mysize()) \nand later ShmemInitStruct(..., mysize(), ...), then the size gets \nrounded up to cache-line size in the second case and not the first. The \npremise in your patch is that the extension should figure that out \nbefore calling RequestAddinShmemSpace(), but that seems wrong.\n\nBtw., I think your patch was wrong to apply CACHELINEALIGN() to \nintermediate results rather than at the end.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 15:34:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 9:34 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 27.02.21 09:08, Julien Rouhaud wrote:\n> > PFA a patch that fixes pg_prewarm and pg_stat_statements explicit alignment to\n> > CACHELINEALIGN, and also updates the alignment in hash_estimate_size() to what I\n> > think ShmemInitHash will actually consume.\n>\n> For extensions, wouldn't it make things better if\n> RequestAddinShmemSpace() applied CACHELINEALIGN() to its argument?\n>\n> If you consider the typical sequence of RequestAddinShmemSpace(mysize())\n> and later ShmemInitStruct(..., mysize(), ...),\n\nBut changing RequestAddinShmemSpace() to apply CACHELINEALIGN() would\nonly really work for that specific usage only? If an extension does\nmultiple allocations you can't rely on correct results.\n\n> then the size gets\n> rounded up to cache-line size in the second case and not the first. The\n> premise in your patch is that the extension should figure that out\n> before calling RequestAddinShmemSpace(), but that seems wrong.\n>\n> Btw., I think your patch was wrong to apply CACHELINEALIGN() to\n> intermediate results rather than at the end.\n\nI'm not sure why you think it's wrong. It's ShmemInitStruct() that\nwill apply the padding, so if the extension calls it multiple times\n(whether directly or indirectly), then the padding will be added\nmultiple times. Which means that in theory the extension should\naccount for it multiple times in the amount of memory it's requesting.\n\nBut given the later discussion, ISTM that there's an agreement that\nany number of CACHELINEALIGN() overheads for any number of allocations\ncan be ignored, as they should be safely absorbed by the 100kB extra\nspace. I think that the real problem, as mentioned in my last email,\nis that the shared htab size estimation doesn't really scale and can\neasily eat the whole extra space if you use some not-that-unrealistic\nparameters. I still don't know if I made any mistake trying to\nproperly account for the htab allocation, but if I didn't it can be\nproblematic.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 22:18:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On 12.08.21 16:18, Julien Rouhaud wrote:\n> On Thu, Aug 12, 2021 at 9:34 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 27.02.21 09:08, Julien Rouhaud wrote:\n>>> PFA a patch that fixes pg_prewarm and pg_stat_statements explicit alignment to\n>>> CACHELINEALIGN, and also update the alignment in hash_estimate_size() to what I\n>>> think ShmemInitHash will actually consume.\n>>\n>> For extensions, wouldn't it make things better if\n>> RequestAddinShmemSpace() applied CACHELINEALIGN() to its argument?\n>>\n>> If you consider the typical sequence of RequestAddinShmemSpace(mysize())\n>> and later ShmemInitStruct(..., mysize(), ...),\n> \n> But changing RequestAddinShmemSpace() to apply CACHELINEALIGN() would\n> only really work for that specific usage only? If an extension does\n> multiple allocations you can't rely on correct results.\n\nI think you can do different things here to create inconsistent results, \nbut I think there should be one common, standard, normal, \nstraightforward way to get a correct result.\n\n>> Btw., I think your patch was wrong to apply CACHELINEALIGN() to\n>> intermediate results rather than at the end.\n> \n> I'm not sure why you think it's wrong. It's ShmemInitStruct() that\n> will apply the padding, so if the extension calls it multiple times\n> (whether directly or indirectly), then the padding will be added\n> multiple times. Which means that in theory the extension should\n> account for it multiple time in the amount of memory it's requesting.\n\nYeah, in that case it's probably rather the case that there is one \nCACHELINEALIGN() too few, since pg_stat_statements does two separate \nshmem allocations.\n\n\n",
"msg_date": "Fri, 13 Aug 2021 10:52:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Shared memory size computation oversight?"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 10:52:50AM +0200, Peter Eisentraut wrote:\n> On 12.08.21 16:18, Julien Rouhaud wrote:\n> > \n> > But changing RequestAddinShmemSpace() to apply CACHELINEALIGN() would\n> > only really work for that specific usage only? If an extension does\n> > multiple allocations you can't rely on correct results.\n> \n> I think you can do different things here to create inconsistent results, but\n> I think there should be one common, standard, normal, straightforward way to\n> get a correct result.\n\nUnless I'm missing something, the standard and straightforward way to get a\ncorrect result is to account for padding bytes in the C code, as is currently\ndone in all contrib modules. The issue is that those modules aren't using the\ncorrect alignment anymore.\n> \n> > > Btw., I think your patch was wrong to apply CACHELINEALIGN() to\n> > > intermediate results rather than at the end.\n> > \n> > I'm not sure why you think it's wrong. It's ShmemInitStruct() that\n> > will apply the padding, so if the extension calls it multiple times\n> > (whether directly or indirectly), then the padding will be added\n> > multiple times. Which means that in theory the extension should\n> > account for it multiple times in the amount of memory it's requesting.\n> \n> Yeah, in that case it's probably rather the case that there is one\n> CACHELINEALIGN() too few, since pg_stat_statements does two separate shmem\n> allocations.\n\nI still don't get it. Aligning the total shmem size is totally different from\nproperly padding all the individual allocation sizes, and is almost never the right\nanswer.\n\nUsing a naive example, if your extension needs two 64B ShmemInitStruct()\nallocations, postgres will consume 2*128B. But if you only rely on RequestAddinShmemSpace()\nto add a CACHELINEALIGN(), then no padding at all will be added, and you'll\nthen be not one but two CACHELINEALIGN() too few.\n\nBut again, the real issue is not the CACHELINEALIGN() roundings, as those have\na more or less stable size and are already accounted for in the extra 100kB,\nbut the dynahash size estimation, which seems to be increasingly off as the\nnumber of entries grows.\n\n\n",
"msg_date": "Fri, 13 Aug 2021 20:32:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Shared memory size computation oversight?"
}
] |
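Julien's "naive example" in the last message of this thread can be illustrated numerically. The following is a hedged sketch, not PostgreSQL code: it assumes a 128-byte cache line (the usual value of PG_CACHE_LINE_SIZE) and models only the rounding, with a hypothetical `cachelinealign()` helper standing in for the CACHELINEALIGN macro:

```python
CACHE_LINE_SIZE = 128  # assumption: PG_CACHE_LINE_SIZE on common platforms

def cachelinealign(size: int) -> int:
    """Round size up to the next cache-line boundary (sketch of CACHELINEALIGN)."""
    return (size + CACHE_LINE_SIZE - 1) // CACHE_LINE_SIZE * CACHE_LINE_SIZE

# Two separate 64-byte ShmemInitStruct() allocations: each one is padded
# individually, so the server actually consumes 2 * 128 bytes...
consumed = cachelinealign(64) + cachelinealign(64)

# ...whereas aligning only the requested total (as a hypothetical
# RequestAddinShmemSpace() applying CACHELINEALIGN once would do)
# accounts for just 128 bytes:
requested = cachelinealign(64 + 64)

print(consumed, requested)  # 256 128
```

This is Julien's point that padding the grand total is "almost never the right answer": the two figures diverge by one cache line per extra allocation, so each allocation's size must be padded individually when estimating.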
[
{
"msg_contents": "Hi,\n\nFinding all matches in a string is convenient using regexp_matches() with the 'g' flag.\n\nBut if instead wanting to know the start and end positions of the occurrences,\none would have to first call regexp_matches(...,'g') to get all matches,\nand then iterate through the results and search using strpos() and length()\nrepeatedly to find all start and end positions.\n\nAssuming regexp_matches() internally already have knowledge of the occurrences,\nmaybe we could add a regexp_ranges() function that returns a two-dimensional array,\nwith all the [[start,end], ...] positions?\n\nSome other databases have a singular regexp_position() function,\nthat just returns the start positions for the first match.\nbut I don't think such function adds much value,\nbut if adding the pluralis one then maybe the singularis should be added as well,\nfor completeness, since we have array_position() and array_positions().\n\nI just wanted to share this idea now since there is currently a lot of other awesome work on the regex engine,\nand hear what others who are currently thinking a lot about regexes think of the idea.\n\n/Joel",
"msg_date": "Sat, 27 Feb 2021 20:51:27 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "regexp_positions()"
},
{
"msg_contents": "On Sat, Feb 27, 2021 at 08:51:27PM +0100, Joel Jacobson wrote:\n> Hi,\n> \n> Finding all matches in a string is convenient using regexp_matches() with the 'g' flag.\n> \n> But if instead wanting to know the start and end positions of the occurrences,\n> one would have to first call regexp_matches(...,'g') to get all matches,\n> and then iterate through the results and search using strpos() and length()\n> repeatedly to find all start and end positions.\n> \n> Assuming regexp_matches() internally already have knowledge of the occurrences,\n> maybe we could add a regexp_ranges() function that returns a two-dimensional array,\n> with all the [[start,end], ...] positions?\n\nMaybe an int4multirange, which would fit unless I'm misunderstanding\ng's meaning with respect to non-overlapping patterns, but that might\nbe a little too cute and not easy ever to extend.\n\nCome to that, would a row structure that looked like\n\n (match, start, end)\n\nbe useful?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 28 Feb 2021 03:13:48 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: regexp_positions()"
},
{
"msg_contents": "Hi,\n\nOn Sun, Feb 28, 2021, at 03:13, David Fetter wrote:\n> Maybe an int4multirange, which would fit unless I'm misunderstanding\n> g's meaning with respect to non-overlapping patterns, but that might\n> be a little too cute and not easy ever to extend.\n> \n> Come to that, would a row structure that looked like\n> \n> (match, start, end)\n> \n> be useful?\n\nNice, didn't know about the new multirange.\nIts data structure seems like a perfect fit for this.\n\nHmm, I cannot find any catalog function to extract the ranges from the data structure though?\nAs a caller, I might need the exact start/end values,\nnot just wanting to know if a certain values overlaps any of the ranges.\nIs there such a function?\n\nHere is a PoC that just returns the start_pos and end_pos for all the matches.\nIt would be simple to modify it to instead return multirange.\n\nCREATE OR REPLACE FUNCTION regexp_ranges(string text, pattern text, OUT start_pos integer, OUT end_pos integer)\nRETURNS SETOF RECORD\nLANGUAGE plpgsql\nAS\n$$\nDECLARE\nmatch text;\nremainder text := string;\nlen integer;\nBEGIN\nend_pos := 0;\n--\n-- Ignore possible capture groups, instead just wrap the entire regex\n-- in an enclosing capture group, which is then extracted as the first array element.\n--\nFOR match IN SELECT (regexp_matches(string,format('(%s)',pattern),'g'))[1] LOOP\n len := length(match);\n start_pos := position(match in remainder) + end_pos;\n end_pos := start_pos + len - 1;\n RETURN NEXT;\n remainder := right(remainder, -len);\nEND LOOP;\nRETURN;\nEND\n$$;\n\nThis works fine for small strings:\n\nSELECT * FROM regexp_ranges('aaaa aa aaa','a+');\nstart_pos | end_pos\n-----------+---------\n 1 | 4\n 6 | 7\n 10 | 12\n(3 rows)\n\nTime: 0.209 ms\n\nBut quickly gets slow for longer strings:\n\nSELECT COUNT(*) FROM regexp_ranges(repeat('aaaa aa aaa',10000),'a+');\n20001\nTime: 98.663 ms\n\nSELECT COUNT(*) FROM regexp_ranges(repeat('aaaa aa aaa',20000),'a+');\n40001\nTime: 348.027 
ms\n\nSELECT COUNT(*) FROM regexp_ranges(repeat('aaaa aa aaa',30000),'a+');\n60001\nTime: 817.412 ms\n\nSELECT COUNT(*) FROM regexp_ranges(repeat('aaaa aa aaa',40000),'a+');\n80001\nTime: 1478.438 ms (00:01.478)\n\nCompared to the much nicer observed O-notation for regexp_matches():\n\nSELECT COUNT(*) FROM regexp_matches(repeat('aaaa aa aaa',10000),'(a+)','g');\n20001\nTime: 12.602 ms\n\nSELECT COUNT(*) FROM regexp_matches(repeat('aaaa aa aaa',20000),'(a+)','g');\n40001\nTime: 25.161 ms\n\nSELECT COUNT(*) FROM regexp_matches(repeat('aaaa aa aaa',30000),'(a+)','g');\n60001\nTime: 44.795 ms\n\nSELECT COUNT(*) FROM regexp_matches(repeat('aaaa aa aaa',40000),'(a+)','g');\n80001\nTime: 57.292 ms\n\n/Joel",
"msg_date": "Sun, 28 Feb 2021 04:58:05 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: regexp_positions()"
},
{
"msg_contents": "I had a bug in the function, and I see I also accidentally renamed it to regexp_ranges().\n\nAttached is a fixed version of the PoC.\n\nThis function is e.g. useful when we're interested in patterns in meta-data,\nwhere we're not actually finding patterns in the data,\nbut in a string where each character corresponds to an element\nin an array, containing the actual data.\n\nIn such case, we need to know the positions of the matches,\nsince they tell what corresponding array elements that matched.\n\nFor instance, let's take the UNIX diff tool we all know as an example.\n\nLet's say you have all the raw diff lines stored in a text[] array,\nand we want to produce a unified diff, by finding hunks\nwith up to 3 unchanged lines before/after each hunk\ncontaining changes.\n\nIf we produce a text string containing one character per diff line,\nusing \"=\" for unchanged, \"+\" for addition, \"-\" for deletion.\n\nExample: =====-=======+=====-+======\n\nWe could then find the hunks using this regex:\n\n (={0,3}[-+]+={0,3})+\n\nusing regexp_positions() to find the start and end positions for each hunk:\n\nSELECT * FROM regexp_positions('=====-=======+=====-+======','(={0,3}[-+]+={0,3})+');\nstart_pos | end_pos\n-----------+---------\n 3 | 9\n 11 | 24\n(2 rows)\n\n/Joel",
"msg_date": "Sun, 28 Feb 2021 12:15:51 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: regexp_positions()"
}
] |
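The semantics Joel's PoC is after can be sketched with Python's `re` module, which already exposes per-match offsets. This is an illustration of the intended behavior, not the eventual SQL function; it returns 1-based, inclusive (start_pos, end_pos) pairs. Note that the correct positions for his test string differ from the PoC output quoted above, which had the off-by-one bug he acknowledged in his follow-up:

```python
import re

def regexp_positions(string: str, pattern: str):
    """Sketch of the proposed regexp_positions(): 1-based, inclusive
    (start_pos, end_pos) pairs for every non-overlapping match."""
    # re.finditer yields 0-based half-open spans; shift start by one to
    # get SQL-style 1-based inclusive positions.
    return [(m.start() + 1, m.end()) for m in re.finditer(pattern, string)]

print(regexp_positions('aaaa aa aaa', 'a+'))
# [(1, 4), (6, 7), (9, 11)]
```

Because the offsets come straight from the match engine rather than from repeated substring searches, this approach also avoids the quadratic behavior Joel measured in the PL/pgSQL PoC.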
[
{
"msg_contents": "When md5 password authentication fails, the server log file has a helpful\ndetail to say why, usually one of:\n\nDETAIL: Role \"none\" does not exist.\nDETAIL: User \"jjanes\" has no password assigned.\nDETAIL: User \"jjanes\" has an expired password.\nDETAIL: Password does not match for user \"jjanes\".\n\nBut for scram authentication, only the first three of these will be\nreported when applicable. If the password is simply incorrect, then you do\nget a DETAIL line reporting which line of pg_hba was used, but you don't\nget a DETAIL line reporting the reason for the failure. It is pretty\nunfriendly to make the admin guess what the absence of a DETAIL is supposed\nto mean. And as far as I can tell, this is not intentional.\n\nNote that in one case you do get the \"does not match\" line. That is if the\nuser has a scram password assigned and the hba specifies plain-text\n'password' as the method. So if the absence of the DETAIL is intentional,\nit is not internally consistent.\n\nThe attached patch fixes the issue. I don't know if this is the correct\nlocation to be installing the message, maybe verify_client_proof should be\ndoing it instead. I am also not happy to be testing 'doomed' here, but it\nwas already checked a few lines up so I didn't want to go to lengths to\navoid doing it here too.\n\nCheers,\n\nJeff",
"msg_date": "Sat, 27 Feb 2021 17:02:23 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "DETAIL for wrong scram password"
},
{
"msg_contents": "On Sat, 2021-02-27 at 17:02 -0500, Jeff Janes wrote:\r\n> Note that in one case you do get the \"does not match\" line. That is\r\n> if the user has a scram password assigned and the hba specifies\r\n> plain-text 'password' as the method. So if the absence of the DETAIL\r\n> is intentional, it is not internally consistent.\r\n\r\nAgreed.\r\n\r\n> The attached patch fixes the issue. I don't know if this is the\r\n> correct location to be installing the message,\r\n> maybe verify_client_proof should be doing it instead.\r\n\r\nHmm, I agree that the current location doesn't seem quite right. If\r\nsomeone adds a new way to exit the SCRAM loop on failure, they'd\r\nprobably need to partially unwind this change to avoid printing the\r\nwrong detail message. But in my opinion, verify_client_proof()\r\nshouldn't be concerned with logging details...\r\n\r\nWhat would you think about adding the additional detail right after\r\nverify_client_proof() fails? I.e.\r\n\r\n> if (!verify_client_proof(state) || state->doomed)\r\n> {\r\n> \t/* <add logdetail here, if not doomed or already set> */\r\n> \tresult = SASL_EXCHANGE_FAILURE;\r\n> \tbreak;\r\n> }\r\n\r\nThat way the mismatched-password detail is linked directly to the\r\nmismatched-password logic.\r\n\r\nOther notes:\r\n- Did you have any thoughts around adding a regression test, since this\r\nwould be an easy thing to break in the future?\r\n- I was wondering about timing attacks against the psprintf() call to\r\nconstruct the logdetail string, but it looks like the other authn code\r\npaths aren't worried about that.\r\n\r\n--Jacob\r\n",
"msg_date": "Tue, 2 Mar 2021 17:48:05 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: DETAIL for wrong scram password"
},
{
"msg_contents": "On Tue, Mar 02, 2021 at 05:48:05PM +0000, Jacob Champion wrote:\n> What would you think about adding the additional detail right after\n> verify_client_proof() fails? I.e.\n\nAgreed. Having that once all the code paths have been taken and the\nclient proof has been verified looks more solid. On top of what's\nproposed, would it make sense to have a second logdetail for the case\nof a mock authentication? We don't log that yet, so I guess that it\ncould be useful for audit purposes?\n--\nMichael",
"msg_date": "Thu, 25 Mar 2021 16:41:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DETAIL for wrong scram password"
},
{
"msg_contents": "On Thu, 2021-03-25 at 16:41 +0900, Michael Paquier wrote:\r\n> On top of what's\r\n> proposed, would it make sense to have a second logdetail for the case\r\n> of a mock authentication? We don't log that yet, so I guess that it\r\n> could be useful for audit purposes?\r\nIt looks like the code paths that lead to a doomed authentication\r\nalready provide their own, more specific, logdetail (role doesn't\r\nexist, role has no password, role doesn't have a SCRAM secret, etc.).\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 25 Mar 2021 15:54:10 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: DETAIL for wrong scram password"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 03:54:10PM +0000, Jacob Champion wrote:\n> It looks like the code paths that lead to a doomed authentication\n> already provide their own, more specific, logdetail (role doesn't\n> exist, role has no password, role doesn't have a SCRAM secret, etc.).\n\nYes, you are right here. I missed the parts before\nmock_scram_secret() gets called and there are comments in the whole\narea. Hmm, at the end of the day, I think that would just have\nverify_client_proof() fill in logdetail when the client proof does not\nmatch, and use a wording different than what's proposed upthread to\noutline that this is a client proof mismatch.\n--\nMichael",
"msg_date": "Fri, 26 Mar 2021 09:49:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DETAIL for wrong scram password"
},
{
"msg_contents": "On Fri, Mar 26, 2021 at 09:49:00AM +0900, Michael Paquier wrote:\n> Yes, you are right here. I missed the parts before\n> mock_scram_secret() gets called and there are comments in the whole\n> area. Hmm, at the end of the day, I think that would just have\n> verify_client_proof() fill in logdetail when the client proof does not\n> match, and use a wording different than what's proposed upthread to\n> outline that this is a client proof mismatch.\n\nSeeing no updates, this has been marked as RwF.\n--\nMichael",
"msg_date": "Thu, 8 Apr 2021 19:59:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: DETAIL for wrong scram password"
}
] |
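The placement question debated in this thread can be modeled as control flow. Below is a hypothetical Python sketch (not the backend's C code) of the failure path after `verify_client_proof()`, following Jacob's suggestion: attach the mismatched-password detail only when the exchange is not doomed and no earlier, more specific detail (nonexistent role, no password assigned, expired password) was already set:

```python
def scram_exchange_outcome(client_proof_ok: bool, doomed: bool, logdetail=None):
    """Sketch of the discussed SCRAM failure path: decide the exchange
    result and which DETAIL line the server log should carry."""
    if not client_proof_ok or doomed:
        if not doomed and logdetail is None:
            # Client proof genuinely mismatched; report it, mirroring the
            # md5 path's "Password does not match for user" detail.
            logdetail = 'Password does not match for user.'
        # Doomed (mock) authentications keep their earlier, more
        # specific detail instead of overwriting it here.
        return ('SASL_EXCHANGE_FAILURE', logdetail)
    return ('SASL_EXCHANGE_SUCCESS', None)
```

The point of this structure is the one Jacob and Michael converge on: the mismatched-password detail is tied directly to the proof-verification failure, so a future new exit from the SCRAM loop cannot accidentally print the wrong detail.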
[
{
"msg_contents": "Hi,\n\nWhen looking at [1], I realized we may have a side effect when removing\nredundant columns in the GROUP BY clause. Suppose we have a query with\nORDER BY 'b', and meanwhile column 'b' is also a group key. If we decide\nthat 'b' is redundant due to being functionally dependent on other GROUP\nBY columns, we would remove it from group keys. This will make us lose\nthe opportunity to leverage the index on 'b'.\n\nHere is an example for illustration.\n\n# create table t (a int primary key, b int);\n# insert into t select i, i%1000 from generate_series(1,1000000)i;\n# create index on t(b);\n\nBy default, we remove 'b' from group keys and generate a plan as below:\n\n# explain (costs off) select b from t group by a, b order by b limit 10;\n QUERY PLAN\n------------------------------------------------\n Limit\n -> Sort\n Sort Key: b\n -> Group\n Group Key: a\n -> Index Scan using t_pkey on t\n(6 rows)\n\nThe index on 'b' is not being used and we'll have to retrieve all the\ndata underneath to perform the sort work.\n\nOn the other hand, if we keep 'b' as a group column, we can get such a\nplan as:\n\n# explain (costs off) select b from t group by a, b order by b limit 10;\n QUERY PLAN\n-------------------------------------------------\n Limit\n -> Group\n Group Key: b, a\n -> Incremental Sort\n Sort Key: b, a\n Presorted Key: b\n -> Index Scan using t_b_idx on t\n(7 rows)\n\nWith the help of 't_b_idx', we can avoid the full scan on 't' and it\nwould run much faster.\n\nAny thoughts?\n\n[1]\nhttps://www.postgresql.org/message-id/flat/16869-26346b77d6ccaeec%40postgresql.org\n\nThanks\nRichard",
"msg_date": "Sun, 28 Feb 2021 15:52:24 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Side effect of remove_useless_groupby_columns"
},
{
"msg_contents": "On Sun, 28 Feb 2021 at 20:52, Richard Guo <guofenglinux@gmail.com> wrote:\n> When looking at [1], I realized we may have a side effect when removing\n> redundant columns in the GROUP BY clause. Suppose we have a query with\n> ORDER BY 'b', and meanwhile column 'b' is also a group key. If we decide\n> that 'b' is redundant due to being functionally dependent on other GROUP\n> BY columns, we would remove it from group keys. This will make us lose\n> the opportunity to leverage the index on 'b'.\n\n> Any thoughts?\n\nI had initially thought that this was a non-issue before incremental\nsort was added, but the following case highlights that's wrong.\n\n# create table t (a int not null, b int);\n# insert into t select i, i%1000 from generate_series(1,1000000)i;\n# create index on t(b,a);\n\n# explain (analyze) select b from t group by a, b order by b,a limit 10;\n# alter table t add constraint t_pkey primary key (a);\n# explain (analyze) select b from t group by a, b order by b,a limit 10;\n\nExecution slows down after adding the primary key since that allows\nthe functional dependency to be detected and the \"b\" column removed\nfrom the GROUP BY clause.\n\nPerhaps we could do something like add:\n\ndiff --git a/src/backend/optimizer/plan/planner.c\nb/src/backend/optimizer/plan/planner.c\nindex 545b56bcaf..7e52212a13 100644\n--- a/src/backend/optimizer/plan/planner.c\n+++ b/src/backend/optimizer/plan/planner.c\n@@ -3182,7 +3182,8 @@ remove_useless_groupby_columns(PlannerInfo *root)\n if (!IsA(var, Var) ||\n var->varlevelsup > 0 ||\n !bms_is_member(var->varattno -\nFirstLowInvalidHeapAttributeNumber,\n-\nsurplusvars[var->varno]))\n+\nsurplusvars[var->varno]) ||\n+ list_member(parse->sortClause, sgc))\n new_groupby = lappend(new_groupby, sgc);\n }\n\nto remove_useless_groupby_columns(). It's not exactly perfect though\nsince it's still good to remove the useless GROUP BY columns if\nthere's no useful index to implement the ORDER BY. 
We just don't know\nthat when we call remove_useless_groupby_columns().\n\nFWIW, these also don't really seem to be all that great examples since\nit's pretty useless to put the primary key columns in the GROUP BY\nunless there's some join that's going to duplicate those columns.\nHaving the planner remove the GROUP BY completely in that case would\nbe nice. That's a topic of discussion in the UniqueKeys patch thread.\nHowever, it is possible to add a join with a join condition matching\nthe ORDER BY and have the same problem when a join type that preserves\nthe outer path's order is picked.\n\nDavid\n\n> [1] https://www.postgresql.org/message-id/flat/16869-26346b77d6ccaeec%40postgresql.org\n\n\n",
"msg_date": "Sun, 28 Feb 2021 22:14:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Side effect of remove_useless_groupby_columns"
},
{
"msg_contents": "Hi,\n\nOn Sun, Feb 28, 2021 at 5:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sun, 28 Feb 2021 at 20:52, Richard Guo <guofenglinux@gmail.com> wrote:\n> > When looking at [1], I realized we may have a side effect when removing\n> > redundant columns in the GROUP BY clause. Suppose we have a query with\n> > ORDER BY 'b', and meanwhile column 'b' is also a group key. If we decide\n> > that 'b' is redundant due to being functionally dependent on other GROUP\n> > BY columns, we would remove it from group keys. This will make us lose\n> > the opportunity to leverage the index on 'b'.\n>\n> > Any thoughts?\n>\n> I had initially thought that this was a non-issue before incremental\n> sort was added, but the following case highlights that's wrong.\n>\n> # create table t (a int not null, b int);\n> # insert into t select i, i%1000 from generate_series(1,1000000)i;\n> # create index on t(b,a);\n>\n> # explain (analyze) select b from t group by a, b order by b,a limit 10;\n> # alter table t add constraint t_pkey primary key (a);\n> # explain (analyze) select b from t group by a, b order by b,a limit 10;\n>\n> Execution slows down after adding the primary key since that allows\n> the functional dependency to be detected and the \"b\" column removed\n> from the GROUP BY clause.\n>\n> Perhaps we could do something like add:\n>\n> diff --git a/src/backend/optimizer/plan/planner.c\n> b/src/backend/optimizer/plan/planner.c\n> index 545b56bcaf..7e52212a13 100644\n> --- a/src/backend/optimizer/plan/planner.c\n> +++ b/src/backend/optimizer/plan/planner.c\n> @@ -3182,7 +3182,8 @@ remove_useless_groupby_columns(PlannerInfo *root)\n> if (!IsA(var, Var) ||\n> var->varlevelsup > 0 ||\n> !bms_is_member(var->varattno -\n> FirstLowInvalidHeapAttributeNumber,\n> -\n> surplusvars[var->varno]))\n> +\n> surplusvars[var->varno]) ||\n> + list_member(parse->sortClause, sgc))\n> new_groupby = lappend(new_groupby, sgc);\n> }\n>\n> to remove_useless_groupby_columns(). 
It's not exactly perfect though\n> since it's still good to remove the useless GROUP BY columns if\n> there's no useful index to implement the ORDER BY. We just don't know\n> that when we call remove_useless_groupby_columns().\n>\n\nOr, rather than thinking it as a side effect of removing useless groupby\ncolumns, how about we do an additional optimization as 'adding useful\ngroupby columns', to add into group keys some column which matches ORDER\nBY and meanwhile is being functionally dependent on other GROUP BY\ncolumns. What I'm thinking is a scenario like below.\n\n# create table t (a int primary key, b int, c int);\n# insert into t select i, i%1000, i%1000 from generate_series(1,1000000)i;\n# create index on t(b);\n\n# explain (analyze) select b from t group by a, c order by b limit 10;\n\nFor this query, we can first remove 'c' from groupby columns with the\nexisting optimization of 'removing useless groupby columns', and then\nadd 'b' to groupby columns with the new-added optimization of 'adding\nuseful groupby columns'.\n\nWe can do that because we know 'b' is functionally dependent on 'a', and\nwe are required to be sorted by 'b', and meanwhile there is index on 'b'\nthat we can leverage.\n\nThanks\nRichard",
"msg_date": "Mon, 1 Mar 2021 14:36:33 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Side effect of remove_useless_groupby_columns"
}
] |
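David's proposed tweak to `remove_useless_groupby_columns()` can be modeled outside the planner. The following is a hedged Python sketch with hypothetical names, not planner code: a column that is functionally dependent on the table's primary key is dropped from GROUP BY, unless it also appears in ORDER BY, in which case keeping it may let the planner use an index matching the sort:

```python
def prune_groupby(groupby_cols, pkey_cols, orderby_cols):
    """Sketch of the tweaked pruning rule discussed in this thread."""
    pkey = set(pkey_cols)
    kept = []
    for col in groupby_cols:
        if col in pkey:
            kept.append(col)            # key columns are always kept
        elif col in orderby_cols:
            kept.append(col)            # dependent, but sorted on: keep it
        # otherwise: functionally dependent and unsorted -> redundant, drop
    return kept

# Richard's first example: GROUP BY a, b ORDER BY b, with PRIMARY KEY (a).
print(prune_groupby(['a', 'b'], ['a'], ['b']))   # ['a', 'b'] -- 'b' survives
# His second example: 'c' is still pruned because ORDER BY doesn't need it.
print(prune_groupby(['a', 'c'], ['a'], ['b']))   # ['a']
```

As David notes, this rule is imperfect by itself: when no useful index exists for the ORDER BY column, dropping it would still be the better choice, and that information is not available at this stage.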
[
{
"msg_contents": "Hi,\n\nI'd like to propose a patch to psql --help output:\n\nCurrently it is:\n\nUsage:\n psql [OPTION]... [DBNAME [USERNAME]]\n\n...\n\nConnection options:\n -h, --host=HOSTNAME database server host or socket directory (default: \"local socket\")\n -p, --port=PORT database server port (default: \"5432\")\n -U, --username=USERNAME database user name (default: \"paul\")\n -w, --no-password never prompt for password\n -W, --password force password prompt (should happen automatically)\n\nI'd like to change it to the following to reflect the psql ability to process a service name or a PostgreSQL URI:\n\nUsage:\n psql [OPTION]... [DBNAME [USERNAME]|service|uri]\n\n...\n\nConnection options:\n -h, --host=HOSTNAME database server host or socket directory (default: \"local socket\")\n -p, --port=PORT database server port (default: \"5432\")\n -U, --username=USERNAME database user name (default: \"paul\")\n -w, --no-password never prompt for password\n -W, --password force password prompt (should happen automatically)\n service=name service name as definited in pg_service.conf\n uri connection URI (postgresql://...)\n\n...\n\nAttached is a patch for src/bin/psql/help.c for this. The file doc/src/sgml/ref/psql-ref.sgml does not seem to need any changes for this.\n\nAny thoughts on this?\n\n\n\n\n\nCheers,\nPaul",
"msg_date": "Sun, 28 Feb 2021 10:57:32 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?proposal=3A_psql_=E2=80=93help_reflecting_service_or_UR?=\n =?utf-8?Q?I_usage?="
},
{
"msg_contents": "\n\n> On Feb 28, 2021, at 1:57 AM, Paul Förster <paul.foerster@gmail.com> wrote:\n> \n> Hi,\n> \n> I'd like to propose a patch to psql --help output:\n> \n> Currently it is:\n> \n> Usage:\n> psql [OPTION]... [DBNAME [USERNAME]]\n> \n> ...\n> \n> Connection options:\n> -h, --host=HOSTNAME database server host or socket directory (default: \"local socket\")\n> -p, --port=PORT database server port (default: \"5432\")\n> -U, --username=USERNAME database user name (default: \"paul\")\n> -w, --no-password never prompt for password\n> -W, --password force password prompt (should happen automatically)\n> \n> I'd like to change it to the following to reflect the psql ability to process a service name or a PostgreSQL URI:\n> \n> Usage:\n> psql [OPTION]... [DBNAME [USERNAME]|service|uri]\n> \n> ...\n> \n> Connection options:\n> -h, --host=HOSTNAME database server host or socket directory (default: \"local socket\")\n> -p, --port=PORT database server port (default: \"5432\")\n> -U, --username=USERNAME database user name (default: \"paul\")\n> -w, --no-password never prompt for password\n> -W, --password force password prompt (should happen automatically)\n> service=name service name as definited in pg_service.conf\n\n\"definited\" is a typo.\n\nShould this say \"as defined in pg_service.conf\"? That's the default, but the user might have $PGSERVICEFILE set to something else. Perhaps you could borrow the wording of other options and use \"(default: as defined in pg_service.conf)\", or something like that, but of course being careful to still fit in the line length limit.\n\n> uri connection URI (postgresql://...)\n> \n> ...\n> \n> Attached is a patch for src/bin/psql/help.c for this. 
The file doc/src/sgml/ref/psql-ref.sgml does not seem to need any changes for this.\n> \n> Any thoughts on this?\n\nOther client applications follow the same pattern as psql, so if this change were adopted, it should apply to all of them.\n\nYour proposal seems like something that would have been posted to the list before, possibly multiple times. Any chance you could dig up past conversations on this subject and post links here for context?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 28 Feb 2021 08:54:06 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "Hi Mark,\n\n> On 28. Feb, 2021, at 17:54, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \"definited\" is a typo.\n\nyes, definitely a typo, sorry. Thanks for pointing this out.\n\n> Should this say \"as defined in pg_service.conf\"? That's the default, but the user might have $PGSERVICEFILE set to something else. Perhaps you could borrow the wording of other options and use \"(default: as defined in pg_service.conf)\", or something like that, but of course being careful to still fit in the line length limit.\n\nI agree to all, thanks. What is the line length limit?\n\n> Other client applications follow the same pattern as psql, so if this change were adopted, it should apply to all of them.\n\nwell, psql is central and IMHO the best place to start. I'd have to try out all of them then. What I do know, though, is that pg_isready does not understand a URI (why is that?), which is very unfortunate. So, I'd have to try them all out and supply patches for them all?\n\nStill, supporting a feature and not documenting it in its help is IMHO not a good idea.\n\n> Your proposal seems like something that would have been posted to the list before, possibly multiple times. Any chance you could dig up past conversations on this subject and post links here for context?\n\nI don't know any past discussions here. I only subscribed today to pgsql-hackers. It might have been mentioned on pgsql-general, though. But I'm not sure. This idea popped into my mind just yesterday when I was playing around with psql and URIs and noticed that psql –help doesn't show them.\n\nCheers,\nPaul\n\n",
"msg_date": "Sun, 28 Feb 2021 19:10:14 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "Hi Mark,\n\nI revisited my first admittedly naive and faulty attempt.\n\n> On 28. Feb, 2021, at 19:10, Paul Förster <paul.foerster@gmail.com> wrote:\n> \n>> but of course being careful to still fit in the line length limit.\n> I agree to all, thanks. What is the line length limit?\n\nstill, what is the line length limit? Where do I find it?\n\nI'd be very happy if you could take a look at this version. Thanks in advance and thanks very much for the input.\n\nCheers,\nPaul",
"msg_date": "Mon, 1 Mar 2021 14:32:47 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "On Mon, Mar 1, 2021, at 10:32 AM, Paul Förster wrote:\n> still, what is the line length limit? Where do I find it?\nWe try to limit it to 80 characters but it is not a hard limit. Long \ndescriptions should certainly be split into multiple lines. \n \nThe question is: how popular is service and connection URIs? We cannot \ncertainly include all possibilities in the --help that's why we have the \ndocumentation. IMO we could probably include \"connection string\" that accepts 2\nformats: (i) multiple keyword - value string and URIs (\"service\" is included in \nthe (i)). \n \nUsage: \n psql [OPTION]... [DBNAME [USERNAME]|CONNINFO] \n \nor even \n \nUsage: \n psql [OPTION]... [DBNAME [USERNAME]] \n psql [OPTION]... [CONNINFO] \n \n \nConnection options: \n -h, --host=HOSTNAME database server host or socket directory (default: \"local socket\")\n -p, --port=PORT database server port (default: \"9999\") \n -U, --username=USERNAME database user name (default: \"euler\") \n -w, --no-password never prompt for password \n -W, --password force password prompt (should happen automatically) \n \n CONNINFO connection string to connect to (key = value strings\n or connection URI) \n \nI don't like the CONNINFO in the connection options. It seems a natural place \nbut DBNAME and USERNAME aren't included in it. Should we include it too? \nSomeone can argue that they are self-explanatory, hence, a description is not \nnecessary. \n \nIt might be a different topic but since we are talking about --help \nimprovements, I have some suggestions: \n \n* an Example section could help newbies to Postgres command-line \ntools to figure out how to inform the connection parameters. In this case, we \ncould include at least 3 examples: (i) -h, -p, -U parameters, (ii) key/value \nconnection string and (iii) connection URI. 
\n* Connection options could be moved to the top (maybe after General options) if\nwe consider that it is more important than the other sections (you cannot \nprobably execute psql without using a connection parameter in production).\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 01 Mar 2021 11:42:16 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re:_proposal:_psql_=E2=80=93help_reflecting_service_or_URI_usa?=\n =?UTF-8?Q?ge?="
},
{
"msg_contents": "\n\n> On Feb 28, 2021, at 10:10 AM, Paul Förster <paul.foerster@gmail.com> wrote:\n> \n> Hi Mark,\n> \n>> On 28. Feb, 2021, at 17:54, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> \n>> \"definited\" is a typo.\n> \n> yes, definitely a typo, sorry. Thanks for pointing this out.\n> \n>> Should this say \"as defined in pg_service.conf\"? That's the default, but the user might have $PGSERVICEFILE set to something else. Perhaps you could borrow the wording of other options and use \"(default: as defined in pg_service.conf)\", or something like that, but of course being careful to still fit in the line length limit.\n> \n> I agree to all, thanks. What is the line length limit?\n\nThe output from --help should fit in a terminal window with only 80 characters width. For example, in src/bin/scripts/createuser.c the line about --interactive is wrapped:\n\n> printf(_(\" -S, --no-superuser role will not be superuser (default)\\n\"));\n> printf(_(\" -V, --version output version information, then exit\\n\"));\n> printf(_(\" --interactive prompt for missing role name and attributes rather\\n\"\n> \" than using defaults\\n\"));\n> printf(_(\" --replication role can initiate replication\\n\"));\n\nYou can find counter-examples where applications do not follow this rule. pg_isready is one of them.\n\n\n>> Other client applications follow the same pattern as psql, so if this change were adopted, it should apply to all of them.\n> \n> well, psql is central and IMHO the best place to start. I'd have to try out all of them then. What I do know, though, is that pg_isready does not understand a URI (why is that?), which is very unfortunate. So, I'd have to try them all out and supply patches for them all?\n\nThere is a pattern in the client applications that the --help output is less verbose than the docs (see, for example, https://www.postgresql.org/docs/13/reference-client.html). 
Your proposed patch makes psql's --help output a bit more verbose about this issue while leaving the other applications less so, without any obvious reason for the difference.\n\n> Still, supporting a feature and not documenting it in its help is IMHO not a good idea.\n\nOk.\n\n>> Your proposal seems like something that would have been posted to the list before, possibly multiple times. Any chance you could dig up past conversations on this subject and post links here for context?\n> \n> I don't know any past discussions here. I only subscribed today to psql-hackers. It might have been mentioned on psql-general, though. But I'm not sure. This idea popped into my mind just yesterday when I was playing around with psql and URIs and noticed that psql –help doesn't show them.\n\nI mentioned looking at the mailing list archives, but neglected to give you a link: https://www.postgresql.org/list/\n\nOver the years, many proposals get made and debated, with some accepted and some rejected. The rejected proposals often have some merit, and get suggested again, only to get rejected for the same reasons as the previous times they were suggested. So searching the archives before posting a patch can sometimes be enlightening. The difficulty in my experience is knowing what words and phrases to search for. It can be a bit time consuming trying to find a prior discussion on a topic.\n\nI don't know of any specific discussion which rejected your patch idea.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 08:02:27 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "Hi Mark,\n\nsorry for the delay.\n\n> On 01. Mar, 2021, at 17:02, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> The output from --help should fit in a terminal window with only 80 characters width. For example, in src/bin/scripts/createuser.c the line about --interactive is wrapped:\n\nI see.\n\n> You can find counter-examples where applications do not follow this rule. pg_isready is one of them.\n\nyes, I noticed that.\n\n> There is a pattern in the client applications that the --help output is less verbose than the docs (see, for example, https://www.postgresql.org/docs/13/reference-client.html). Your proposed patch makes psql's --help output a bit more verbose about this issue while leaving the other applications less so, without any obvious reason for the difference.\n\nI could do the other tools too, that wouldn't be a problem. But I'm not good at writing docs. And I found that the man pages would have to be adapted too, which would be a big one for me as I'm no man guru.\n\n> Over the years, many proposals get made and debated, with some accepted and some rejected. The rejected proposals often have some merit, and get suggested again, only to get rejected for the same reasons as the previous times they were suggested. So searching the archives before posting a patch can sometimes be enlightening. The difficulty in my experience is knowing what words and phrases to search for. It can be a bit time consuming trying to find a prior discussion on a topic.\n\nI've searched the archives for quite some time and found tons of hits for the search term \"help\". But that's useless because all people ask for \"help\". :-) Beside that, I found nothing which even remotely discussed the topic.\n\nBut I generally see your points so I drop the proposal. It was only an idea after all :-) Thanks for your input.\n\nCheers,\nPaul\n\n",
"msg_date": "Sat, 6 Mar 2021 14:55:13 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "Hi Euler,\n\n> On 01. Mar, 2021, at 15:42, Euler Taveira <euler@eulerto.com> wrote:\n> \n> We try to limit it to 80 characters but it is not a hard limit. Long descriptions should certainly be split into multiple lines.\n\ngot that, thanks.\n\n> The question is: how popular is service and connection URIs?\n\nwell, we use them exclusively at work, because we have a lot of Patroni clusters which may fail/switch over and we don't have an haproxy or similar. So our usual way to connect is a URI with targetServerType set.\n\n> We cannot certainly include all possibilities in the --help that's why we have the documentation. IMO we could probably include \"connection string\" that accepts 2 formats: (i) multiple keyword - value string and URIs (\"service\" is included in the (i)).\n> \n> Usage:\n> psql [OPTION]... [DBNAME [USERNAME]|CONNINFO]\n> \n> Usage:\n> psql [OPTION]... [DBNAME [USERNAME]]\n> psql [OPTION]... [CONNINFO]\n> \n> Connection options:\n> CONNINFO connection string to connect to (key = value strings\n> or connection URI)\n\nI could live with the second variant, though I'd still prefer two descriptions, one \"service=name\" and the second for the URI, as I initially suggested.\n\n> It might be a different topic but since we are talking about --help improvements, I have some suggestions:\n> \n> * an Example section could help newbies to Postgres command-line tools to figure out how to inform the connection parameters. In this case, we could include at least 3 examples: (i) -h, -p, -U parameters, (ii) key/value connection string and (iii) connection URI.\n\nthere's an example in the USAGE/Connecting to a Database section of the man page already. Also, it is documented how an URI works, so I wouldn't include an example here. Just reflecting its existence in the syntax should do. 
Same goes for service names.\n\n> * Connection options could be moved to the top (maybe after General options) if we consider that it is more important than the other sections (you cannot probably execute psql without using a connection parameter in production).\n\nmoving it up is IMHO merely a matter of personal taste. Making sure it's there was my initial point.\n\nBut as Mark pointed out, there's too much linked to it for me (man pages, docs, etc.). So I drop the proposal altogether. Thanks for your thoughts anyway.\n\nNow we at least have this topic finally in the mail archives. ;-)\n\nP.S.: Just curious, why do you right-pad your posts?\n\nCheers,\nPaul\n\n\n\n",
"msg_date": "Sat, 6 Mar 2021 15:18:13 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "\n\n> On Mar 6, 2021, at 5:55 AM, Paul Förster <paul.foerster@gmail.com> wrote:\n> \n> Hi Mark,\n> \n> sorry for the delay.\n> \n>> On 01. Mar, 2021, at 17:02, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> \n>> The output from --help should fit in a terminal window with only 80 characters width. For example, in src/bin/scripts/createuser.c the line about --interactive is wrapped:\n> \n> I see.\n> \n>> You can find counter-examples where applications do not follow this rule. pg_isready is one of them.\n> \n> yes, I noticed that.\n> \n>> There is a pattern in the client applications that the --help output is less verbose than the docs (see, for example, https://www.postgresql.org/docs/13/reference-client.html). Your proposed patch makes psql's --help output a bit more verbose about this issue while leaving the other applications less so, without any obvious reason for the difference.\n> \n> I could do the other tools too, that wouldn't be a problem. But I'm not good at writing docs. And I found that the man pages would have to be adapted too, which would be a big one for me as I'm no man guru.\n\nFortunately, the man pages and html docs are generated from the same sources. Those sources are written in sgml, and the tools to build the docs must be installed. From the top directory, execute `make docs` and if it complains about missing tools you will need to install them. (The build target is 'docs', but the directory containing the docs is named 'doc'.)\n\n>> Over the years, many proposals get made and debated, with some accepted and some rejected. The rejected proposals often have some merit, and get suggested again, only to get rejected for the same reasons as the previous times they were suggested. So searching the archives before posting a patch can sometimes be enlightening. The difficulty in my experience is knowing what words and phrases to search for. 
It can be a bit time consuming trying to find a prior discussion on a topic.\n> \n> I've searched the archives for quite some time and found tons of hits for the search term \"help\". But that's useless because all people ask for \"help\". :-) Beside that, I found nothing which even remotely discussed the topic.\n> \n> But I generally see your points so I drop the proposal. It was only an idea after all :-) Thanks for your input.\n\nOh, I'm quite sorry to hear that. The process of getting a patch accepted, especially the first time you submit one, can be discouraging. But the community greatly benefits from new contributors joining the effort, so I'd much rather you not withdraw the idea.\n\nIf you need help with certain portions of the submission, such as editing the docs, I can help with that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 07:39:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "Hi Mark,\n\n> On 08. Mar, 2021, at 16:39, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Fortunately, the man pages and html docs are generated from the same sources. Those sources are written in sgml, and the tools to build the docs must be installed. From the top directory, execute `make docs` and if it complains about missing tools you will need to install them. (The build target is 'docs', but the directory containing the docs is named 'doc'.)\n\nso the help files I'd change would be doc/src/sgml/ref/psql-ref.sgml, doc/src/sgml/ref/pg_isready.sgml, etc.?\n\n> Oh, I'm quite sorry to hear that. The process of getting a patch accepted, especially the first time you submit one, can be discouraging. But the community greatly benefits from new contributors joining the effort, so I'd much rather you not withdraw the idea.\n\nI'd like to, and also I'd like to do all the bin/* tools (including wrapping the long line in pg_isready ;-)), as you suggested, but I don't know the process. In my first admittedly naive attempt, I just downloaded the source from https://www.postgresql.org/ftp/source/v13.2, unpacked it and made my changes there. Then I did a diff to the original and posted it here. I don't even know if this is the correct workflow. I saw github being mentioned a couple of times but I don't have an account, nor do I even know how it works.\n\nI was pretty surprised to see the lines in PWN:\n\n\"Paul Förster sent in a patch to mention database URIs in psql's --help output.\"\n\"Paul Förster sent in another revision of a patch to mention URIs and services in psql --help's output.\"\n\nIs there a FAQ somewhere that describes how to properly create patches, submit them and possibly get them released? 
Something like a step-by-step?\n\nIs github a must-have here?\n\n> If you need help with certain portions of the submission, such as editing the docs, I can help with that.\n\nas you see above, I'm curious to learn, though doing it to all the tools will take some time for me.\n\nSorry, I'm a noob, not so much to C, but to the workflows here. Hence my questions may seem a little obvious to all the pros.\n\nCheers,\nPaul\n\n",
"msg_date": "Mon, 8 Mar 2021 17:40:22 +0100",
"msg_from": "=?utf-8?Q?Paul_F=C3=B6rster?= <paul.foerster@gmail.com>",
"msg_from_op": true,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
},
{
"msg_contents": "\n\n> On Mar 8, 2021, at 8:40 AM, Paul Förster <paul.foerster@gmail.com> wrote:\n> \n> Hi Mark,\n> \n>> On 08. Mar, 2021, at 16:39, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> \n>> Fortunately, the man pages and html docs are generated from the same sources. Those sources are written in sgml, and the tools to build the docs must be installed. From the top directory, execute `make docs` and if it complains about missing tools you will need to install them. (The build target is 'docs', but the directory containing the docs is named 'doc'.)\n> \n> so the help files I'd change would be doc/src/sgml/ref/psql-ref.sgml, doc/src/sgml/ref/pg_isready.sgml, etc.?\n\nYes\n\n>> Oh, I'm quite sorry to hear that. The process of getting a patch accepted, especially the first time you submit one, can be discouraging. But the community greatly benefits from new contributors joining the effort, so I'd much rather you not withdraw the idea.\n> \n> I'd like to, and also I'd like to do all the bin/* tools (including wrapping the long line in pg_isready ;-)), as you suggested, but I don't know the process. In my first admittedly naive attempt, I just downloaded the source from https://www.postgresql.org/ftp/source/v13.2, unpacked it and made my changes there. Then I did a diff to the original and posted it here. I don't even know if this is the correct workflow. I saw gitgub being mentioned a couple of times but I don't have an account, nor do I even know how it works.\n> \n> I was pretty surprised to see the lines in PWN:\n> \n> \"Paul Förster sent in a patch to mention database URIs in psql's --help output.\"\n> \"Paul Förster sent in another revision of a patch to mention URIs and services in psql --help's output.\"\n> \n> Is there a FAQ somewhere that describes how properly create patches, submit them and possibly get them released? Something like a step-by-step?\n> \n> Is github a must-have here?\n\nNo, github is not a must-have. 
I don't use a github account for my submissions. The project uses git for source code control, but that's not the same thing as requiring \"github\". The project switched from cvs to git a few years back.\n\nIf you can install git, using rpm or yum or whatever, then from the command line you can use\n\n git clone https://git.postgresql.org/git/postgresql.git\n\nand it will create a working directory for you. You can make modifications and commit them. When you are finished, you can run\n\n git format-patch -v 1 master\n\nand it will create a patch set containing all your changes relative to the public sources you cloned, and the patch will include your commit messages, which helps reviewers not only know what you changed, but why you made the changes, in your own words.\n\nSee https://wiki.postgresql.org/wiki/Development_information\n\n>> If you need help with certain portions of the submission, such as editing the docs, I can help with that.\n> \n> as you see above, I'm curious to learn, though doing it to all the tools will take some time for me.\n> \n> Sorry, I'm a noob, not so much to C, but to the workflows here. Hence my questions may seem a little obvious to all the pros.\n\nThat's not a problem. If this gets too verbose for the list, we can take this off-list and I can still walk you through it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 08:56:20 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_proposal=3A_psql_=E2=80=93help_reflecting_service?=\n =?utf-8?Q?_or_URI_usage?="
}
] |
[
{
"msg_contents": "tablecmds.c is 17k lines long, this makes it ~30 lines shorter.",
"msg_date": "Sun, 28 Feb 2021 15:18:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] refactor ATExec{En,Dis}ableRowSecurity"
},
{
"msg_contents": "Hi,\nFor 0002-Further-refactoring.patch, should there be assertion\ninside ATExecSetRowSecurity() on the values for rls and force_rls ?\nThere could be 3 possible values: -1, 0 and 1.\n\nCheers\n\nOn Sun, Feb 28, 2021 at 1:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> tablecmds.c is 17k lines long, this makes it ~30 lines shorter.\n>",
"msg_date": "Sun, 28 Feb 2021 14:27:44 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] refactor ATExec{En,Dis}ableRowSecurity"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 02:27:44PM -0800, Zhihong Yu wrote:\n> For 0002-Further-refactoring.patch, should there be assertion\n> inside ATExecSetRowSecurity() on the values for rls and force_rls ?\n> There could be 3 possible values: -1, 0 and 1.\n\n0001 is a clean simplification and a good catch, so I'll see about\napplying it. 0002 just makes the code more confusing to the reader\nIMO, and its interface could easily lead to unwanted errors.\n--\nMichael",
"msg_date": "Mon, 1 Mar 2021 15:30:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] refactor ATExec{En,Dis}ableRowSecurity"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 03:30:44PM +0900, Michael Paquier wrote:\n> 0001 is a clean simplification and a good catch, so I'll see about\n> applying it. 0002 just makes the code more confusing to the reader\n> IMO, and its interface could easily lead to unwanted errors.\n\n0001 has been applied as of fabde52.\n--\nMichael",
"msg_date": "Tue, 2 Mar 2021 12:34:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] refactor ATExec{En,Dis}ableRowSecurity"
}
] |
[
{
"msg_contents": "I thought this was a good idea, but didn't hear back when I raised it before.\nI was motivated to finally look into it by Dilip's toast compression patch,\nwhich also (can do) a table rewrite when changing a column's toast compression.\n\nI called this \"set TABLE access method\" rather than just \"set access method\"\nfor the reasons given on the LIKE thread:\nhttps://www.postgresql.org/message-id/20210119210331.GN8560@telsasoft.com\n\nI've tested this with zedstore, but the lack of 2nd, in-core table AM limits\ntesting possibilities. It also limits at least my own ability to reason about\nthe AM API. For example, I was surprised to hear that toast is a concept\nthat's intended to be applied to AMs other than heap.\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoYTuT4sRtviMLOOO%2B79VnDCpCNyy9rK6UZFb7KEAVt21w%40mail.gmail.com\n\nI plan to add to CF - it seems like a minor addition, but may not be for v14.\n\nhttps://www.postgresql.org/message-id/20190818193533.GL11185@telsasoft.com\nOn Sun, Aug 18, 2019 at 02:35:33PM -0500, Justin Pryzby wrote:\n> . What do you think about pg_restore --no-tableam; similar to\n> --no-tablespaces, it would allow restoring a table to a different AM:\n> PGOPTIONS='-c default_table_access_method=zedstore' pg_restore --no-tableam ./pg_dump.dat -d postgres\n> Otherwise, the dump says \"SET default_table_access_method=heap\", which\n> overrides any value from PGOPTIONS and precludes restoring to new AM.\n...\n> . it'd be nice if there was an ALTER TABLE SET ACCESS METHOD, to allow\n> migrating data. Otherwise I think the alternative is:\n> begin; lock t;\n> CREATE TABLE new_t LIKE (t INCLUDING ALL) USING (zedstore);\n> INSERT INTO new_t SELECT * FROM t;\n> for index; do CREATE INDEX...; done\n> DROP t; RENAME new_t (and all its indices). attach/inherit, etc.\n> commit;\n>\n> . Speaking of which, I think LIKE needs a new option for ACCESS METHOD, which\n> is otherwise lost.",
"msg_date": "Sun, 28 Feb 2021 16:25:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Sun, Feb 28, 2021 at 04:25:30PM -0600, Justin Pryzby wrote:\n> I called this \"set TABLE access method\" rather than just \"set access method\"\n> for the reasons given on the LIKE thread:\n> https://www.postgresql.org/message-id/20210119210331.GN8560@telsasoft.com\n\nALTER TABLE applies to a table (or perhaps a sequence, still..), so\nthat sounds a bit weird to me to add again the keyword \"TABLE\" for\nthat.\n\n> I've tested this with zedstore, but the lack of 2nd, in-core table AM limits\n> testing possibilities. It also limits at least my own ability to reason about\n> the AM API.\n>\n> For example, I was surprised to hear that toast is a concept that's\n> intended to be applied to AMs other than heap.\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoYTuT4sRtviMLOOO%2B79VnDCpCNyy9rK6UZFb7KEAVt21w%40mail.gmail.com\n\nWhat kind of advanced testing do you have in mind? It sounds pretty\nmuch enough to me for a basic patch to use the trick with heap2 as\nyour patch does. That would be enough to be sure that the rewrite\nhappens and that data is still around. If you are worrying about\nrecovery, a TAP test with an immediate stop of the server could\nequally be used to check after the FPWs produced for the new\nrelfilenode during the rewrite.\n--\nMichael",
"msg_date": "Mon, 1 Mar 2021 11:16:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 11:16:36AM +0900, Michael Paquier wrote:\n> On Sun, Feb 28, 2021 at 04:25:30PM -0600, Justin Pryzby wrote:\n> > I called this \"set TABLE access method\" rather than just \"set access method\"\n> > for the reasons given on the LIKE thread:\n> > https://www.postgresql.org/message-id/20210119210331.GN8560@telsasoft.com\n> \n> ALTER TABLE applies to a table (or perhaps a sequence, still..), so\n> that sounds a bit weird to me to add again the keyword \"TABLE\" for\n> that.\n\nI don't know if you're following along the toast compression patch -\nAlvaro had suggested that instead of making a new catalog just for a handful of\ntuples for compression types, to instead store them in pg_am, with a new\nam_type='c'. So I proposed a patch for\n| CREATE TABLE .. (LIKE other INCLUDING ACCESS METHOD),\nbut then decided that it should say INCLUDING *TABLE* ACCESS METHOD, since\notherwise it was somewhat strange that it didn't include the compression access\nmethods (which have a separate LIKE option).\n\n> > I've tested this with zedstore, but the lack of 2nd, in-core table AM limits\n> > testing possibilities. It also limits at least my own ability to reason about\n> > the AM API.\n> >\n> > For example, I was surprised to hear that toast is a concept that's\n> > intended to be applied to AMs other than heap.\n> > https://www.postgresql.org/message-id/flat/CA%2BTgmoYTuT4sRtviMLOOO%2B79VnDCpCNyy9rK6UZFb7KEAVt21w%40mail.gmail.com\n> \n> What kind of advanced testing do you have in mind? It sounds pretty\n> much enough to me for a basic patch to use the trick with heap2 as\n> your patch does. 
That would be enough to be sure that the rewrite\n> happens and that data is still around.\n\nThe issue is that the toast data can be compressed, so it needs to be detoasted\nbefore pushing it to the other AM, which otherwise may not know how to\ndecompress it.\n\nIf it's not detoasted, this works with \"COMPRESSION lz4\" (since zedstore\nhappens to know how to decompress it) but that's just an accident, and it fails\nwhen using pglz. That's got to do with 2 non-core patches - when core has\nonly heap, I don't see how something like this can be exercised.\n\npostgres=# DROP TABLE t; CREATE TABLE t (a TEXT COMPRESSION pglz) USING heap; INSERT INTO t SELECT repeat(a::text,9999) FROM generate_series(1,99)a; ALTER TABLE t SET ACCESS METHOD zedstore; SELECT * FROM t;\nDROP TABLE\nCREATE TABLE\nINSERT 0 99\nALTER TABLE\n2021-02-28 20:50:42.653 CST client backend[14958] psql ERROR: compressed lz4 data is corrupt\n2021-02-28 20:50:42.653 CST client backend[14958] psql STATEMENT: SELECT * FROM t;\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 28 Feb 2021 21:20:26 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 11:16:36AM +0900, Michael Paquier wrote:\n> On Sun, Feb 28, 2021 at 04:25:30PM -0600, Justin Pryzby wrote:\n> > I called this \"set TABLE access method\" rather than just \"set access method\"\n> > for the reasons given on the LIKE thread:\n> > https://www.postgresql.org/message-id/20210119210331.GN8560@telsasoft.com\n> \n> ALTER TABLE applies to a table (or perhaps a sequence, still..), so\n> that sounds a bit weird to me to add again the keyword \"TABLE\" for\n> that.\n\nThis renames to use SET ACCESS METHOD (resolving a silly typo);\nAnd handles the tuple slots more directly;\nAnd adds docs and tab completion;\n\nAlso, since 8586bf7ed8889f39a59dd99b292014b73be85342:\n| For now it's not allowed to set a table AM for a partitioned table, as\n| we've not resolved how partitions would inherit that. Disallowing\n| allows us to introduce, if we decide that's the way forward, such a\n| behaviour without a compatibility break.\n\nI propose that it should behave like tablespace for partitioned rels:\nca4103025dfe, 33e6c34c3267\n\n-- \nJustin",
"msg_date": "Sun, 7 Mar 2021 19:07:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Sun, Mar 07, 2021 at 07:07:07PM -0600, Justin Pryzby wrote:\n> This renames to use SET ACCESS METHOD (resolving a silly typo);\n> And handles the tuple slots more directly;\n> And adds docs and tab completion;\n> \n> Also, since 8586bf7ed8889f39a59dd99b292014b73be85342:\n> | For now it's not allowed to set a table AM for a partitioned table, as\n> | we've not resolved how partitions would inherit that. Disallowing\n> | allows us to introduce, if we decide that's the way forward, such a\n> | behaviour without a compatibility break.\n> \n> I propose that it should behave like tablespace for partitioned rels:\n> ca4103025dfe, 33e6c34c3267\n\nSounds sensible from here. Based on the patch at hand, changing the\nAM of a partitioned table does nothing for the existing partitions,\nand newly-created partitions would inherit the new AM assigned to its\nparent. pg_dump is handling things right.\n\nFrom what I can see, the patch in itself is simple, with central parts\nin swap_relation_files() to handle the rewrite and make_new_heap() to\nassign the new correct AM.\n\n+ * Since these have no storage the tablespace can be updated with a simple\n+ * metadata only operation to update the tablespace.\n+ */\n+static void\n+ATExecSetAccessMethodNoStorage(Relation rel, Oid newAccessMethod\nThis comment still refers to tablespaces.\n\n+ /*\n+ * Record dependency on AM. This is only required for relations\n+ * that have no physical storage.\n+ */\n+ changeDependencyFor(RelationRelationId, RelationGetRelid(rel),\n+ AccessMethodRelationId, oldrelam,\n+ newAccessMethod);\nAnd? 
Relations with storage also require this dependency.\n\n- if (tab->newTableSpace)\n+ if (OidIsValid(tab->newTableSpace))\nYou are right, but this is just a noise diff in this patch.\n\n+ swaptemp = relform1->relam;\n+ relform1->relam = relform2->relam;\n+ relform2->relam = swaptemp;\nswap_relation_files() holds the central logic, but I can see that no\ncomments of this routine have been updated (header, comment when\nlooking at valid relfilenode{1,2}).\n\nIt seems to me that 0002 and 0003 should just be merged together.\n\n+ /* Need to detoast tuples when changing AM XXX: should\ncheck if one AM is heap and one isn't? */\n+ if (newrel->rd_rel->relam != oldrel->rd_rel->relam)\n+ {\n+ HeapTuple htup = toast_build_flattened_tuple(oldTupDesc,\n+ oldslot->tts_values,\n+ oldslot->tts_isnull);\nThis toast issue is a kind of interesting one, and it seems rather\nright to rely on toast_build_flattened_tuple() to decompress things if\nboth table AMs support toast with the internals of toast knowing what\nkind of compression has been applied to the stored tuple, rather than\nhave the table AM try to extract a toast tuple by itself. I wonder\nwhether we should have a table AM API to know what kind of compression\nis supported for a given table AM at hand, because there is no need\nto flatten things if both the origin and the target match their\ncompression algos, which would be on HEAD to make sure that both the\norigin and table AMs have toast (relation_toast_am). Your patch,\nhere, would flatten each tuple as long as the table AMs don't\nmatch. That can be made cheaper in some cases.\n--\nMichael",
"msg_date": "Mon, 8 Mar 2021 16:30:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Mon, Mar 08, 2021 at 04:30:23PM +0900, Michael Paquier wrote:\n> This toast issue is a kind of interesting one, and it seems rather\n> right to rely on toast_build_flattened_tuple() to decompress things if\n> both table AMs support toast with the internals of toast knowing what\n> kind of compression has been applied to the stored tuple, rather than\n> have the table AM try to extra a toast tuple by itself. I wonder\n> whether we should have a table AM API to know what kind of compression\n> is supported for a given table AMs at hand, because there is no need\n> to flatten things if both the origin and the target match their\n> compression algos, which would be on HEAD to make sure that both the\n> origin and table AMs have toast (relation_toast_am). Your patch,\n> here, would flatten each tuples as long as the table AMs don't \n> match. That can be made cheaper in some cases.\n\nI actually have an idea for this one, able to test the decompression\n-> insert path when rewriting a relation with a new AM: we could add a\ndummy_table_am in src/test/modules/. By design, this table AM acts as\na blackhole, eating any data we insert into it but print into the logs\nthe data candidate for INSERT. When doing a heap -> dummy_table_am\nrewrite, the logs would provide coverage with the data from the origin\ntoast table. The opposite operation does not really matter, though it\ncould be tested. In one of my trees, I have something already close\nto that:\nhttps://github.com/michaelpq/pg_plugins/tree/master/blackhole_am/\n--\nMichael",
"msg_date": "Mon, 8 Mar 2021 17:56:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Mon, 2021-03-08 at 16:30 +0900, Michael Paquier wrote:\n> This toast issue is a kind of interesting one, and it seems rather\n> right to rely on toast_build_flattened_tuple() to decompress things\n> if\n> both table AMs support toast with the internals of toast knowing what\n> kind of compression has been applied to the stored tuple, rather than\n> have the table AM try to extra a toast tuple by itself. I wonder\n> whether we should have a table AM API to know what kind of\n> compression\n> is supported for a given table AMs at hand, because there is no need\n> to flatten things if both the origin and the target match their\n> compression algos, which would be on HEAD to make sure that both the\n> origin and table AMs have toast (relation_toast_am). Your patch,\n> here, would flatten each tuples as long as the table AMs don't \n> match. That can be made cheaper in some cases.\n\nI am confused by this. The toast-related table AM API functions are:\nrelation_needs_toast_table(), relation_toast_am(), and\nrelation_fetch_toast_slice().\n\nWhat cases are we trying to solve here?\n\n1. target of conversion is tableam1 main table, heap toast table\n2. target of conversion is tableam1 main table, no toast table\n3. target of conversion is tableam1 main table, tableam1 toast table\n4. target of conversion is tableam1 main table, tableam2 toast table\n\nOr does the problem apply to all of these cases?\n\nAnd if tableam1 can't handle some case, why can't it just detoast the\ndata itself? Shouldn't that be able to decompress anything?\n\nFor example, in columnar[1], we just always detoast/decompress because\nwe want to compress it ourselves (along with other values from the same\ncolumn), and we never have a separate toast table. Is that code\nincorrect, or will it break in v14?\n\nRegards,\n\tJeff Davis\n\n\nhttps://github.com/citusdata/citus/blob/6b1904d37a18e2975b46f0955076f84c8a387cc6/src/backend/columnar/columnar_tableam.c#L1433\n\n\n\n",
"msg_date": "Thu, 06 May 2021 13:10:53 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, May 06, 2021 at 01:10:53PM -0700, Jeff Davis wrote:\n> On Mon, 2021-03-08 at 16:30 +0900, Michael Paquier wrote:\n> > This toast issue is a kind of interesting one, and it seems rather\n> > right to rely on toast_build_flattened_tuple() to decompress things\n> > if\n> > both table AMs support toast with the internals of toast knowing what\n> > kind of compression has been applied to the stored tuple, rather than\n> > have the table AM try to extra a toast tuple by itself. I wonder\n> > whether we should have a table AM API to know what kind of\n> > compression\n> > is supported for a given table AMs at hand, because there is no need\n> > to flatten things if both the origin and the target match their\n> > compression algos, which would be on HEAD to make sure that both the\n> > origin and table AMs have toast (relation_toast_am). Your patch,\n> > here, would flatten each tuples as long as the table AMs don't \n> > match. That can be made cheaper in some cases.\n> \n> I am confused by this. The toast-related table AM API functions are:\n> relation_needs_toast_table(), relation_toast_am(), and\n> relation_fetch_toast_slice().\n\nI wrote this shortly after looking at one of Dilip's LZ4 patches.\n\nAt one point in February/March, pg_attribute.attcompression defined the\ncompression used by *all* tuples in the table, rather than the compression used\nfor new tuples, and ALTER SET COMPRESSION would rewrite the table with the new\ncompression (rather than being only a catalog update).\n\n\n> What cases are we trying to solve here?\n> \n> 1. target of conversion is tableam1 main table, heap toast table\n> 2. target of conversion is tableam1 main table, no toast table\n> 3. target of conversion is tableam1 main table, tableam1 toast table\n> 4. target of conversion is tableam1 main table, tableam2 toast table\n\nI think ALTER TABLE SET ACCESS METHOD should move all data off the old AM,\nincluding its toast table. 
The optimization Michael suggests is that when the new\nAM and the old AM use the same toast AM, the data doesn't need to be\nde/re/toasted.\n\nThanks for looking.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 May 2021 15:23:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, 2021-05-06 at 15:23 -0500, Justin Pryzby wrote:\n> I think ALTER TABLE SET ACCESS METHOD should move all data off the\n> old AM,\n> including its toast table.\n\nCan you explain what you mean, and why? I'm still confused.\n\nLet's say there are 4 table AMs: A, AT, B, and BT. A's\nrelation_toast_am() returns AT, and B's relation_toast_am() returns BT.\nAT or BT are invalid if A or B have relation_needs_toast_table() return\nfalse.\n\nHere are the cases that I see:\n\nIf A = B, then AT = BT, and it's all a no-op.\n\nIf A != B and BT is invalid (e.g. converting heap to columnar), then A\nshould detoast (and perhaps decompress, as in the case of columnar)\nwhatever it gets as input and do whatever it wants. That's what\ncolumnar does and I don't see why ATRewriteTable needs to handle it.\n\nIf A != B and AT != BT, then B needs to detoast whatever it gets (but\nshould not decompress, as that would just be wasted effort), and then\nre-toast using BT. Again, I don't see a need for ATRewriteTable to do\nanything, B can handle it.\n\nThe only case I can really see where ATRewriteTable might be helpful is\nif A != B but AT = BT. In that case, in theory, you don't need to do\nanything to the toast table, just leave it where it is. But then the\nresponsibilities get a little confusing to me -- how is B supposed to\nknow that it doesn't need to toast anything? Is this the problem you\nare trying to solve?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 06 May 2021 14:11:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, May 06, 2021 at 02:11:31PM -0700, Jeff Davis wrote:\n> On Thu, 2021-05-06 at 15:23 -0500, Justin Pryzby wrote:\n> > I think ALTER TABLE SET ACCESS METHOD should move all data off the\n> > old AM,\n> > including its toast table.\n> \n> Can you explain what you mean, and why? I'm still confused.\n> \n> Let's say there are 4 table AMs: A, AT, B, and BT. A's\n> relation_toast_am() returns AT, and B's relation_toast_am() returns BT.\n> AT or BT are invalid if A or B have relation_needs_toast_table() return\n> false.\n> \n> Here are the cases that I see:\n> \n> If A = B, then AT = BT, and it's all a no-op.\n> \n> If A != B and BT is invalid (e.g. converting heap to columnar), then A\n> should detoast (and perhaps decompress, as in the case of columnar)\n> whatever it gets as input and do whatever it wants. That's what\n> columnar does and I don't see why ATRewriteTable needs to handle it.\n> \n> If A != B and AT != BT, then B needs to detoast whatever it gets (but\n> should not decompress, as that would just be wasted effort), and then\n> re-toast using BT. Again, I don't see a need for ATRewriteTable to do\n> anything, B can handle it.\n> \n> The only case I can really see where ATRewriteTable might be helpful is\n> if A != B but AT = BT. In that case, in theory, you don't need to do\n> anything to the toast table, just leave it where it is. But then the\n> responsibilities get a little confusing to me -- how is B supposed to\n> know that it doesn't need to toast anything? 
Is this the problem you\n> are trying to solve?\n\nThat's the optimization Michael is suggesting.\n\nI was approaching this after having just looked at Dilip's patch (which was\noriginally written using pg_am to support \"pluggable\" compression \"AM\"s, but\nnot otherwise related to table AM).\n\nOnce a table is converted to a new AM, its tuples had better not reference the\nold AM - it could be dropped.\n\nThe new table AM (B) shouldn't need to know anything about the old one (A). It\nshould just process incoming tuples. It makes more sense to me that ATRewriteTable\nwould handle this, rather than every access method having the same logic (at\nbest) or different logic (more likely). If ATRewriteTable didn't handle this,\ndata would become inaccessible if an AM failed to de-toast tuples.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 May 2021 17:19:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, 2021-05-06 at 17:19 -0500, Justin Pryzby wrote:\n> If ATRewriteTable didn't\n> handle this,\n> data would become inaccessible if an AM failed to de-toast tuples.\n\nIf the AM fails to detoast tuples, it's got bigger problems than ALTER\nTABLE. What about INSERT INTO ... SELECT?\n\nIt's the table AM's responsibility to detoast out-of-line datums and\ntoast any values that are too large (see\nheapam.c:heap_prepare_insert()).\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 06 May 2021 17:24:25 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, 2021-05-06 at 17:24 -0700, Jeff Davis wrote:\n> It's the table AM's responsibility to detoast out-of-line datums and\n> toast any values that are too large (see\n> heapam.c:heap_prepare_insert()).\n\nDo we have general agreement on this point? Did I miss another purpose\nof detoasting in tablecmds.c, or can we just remove that part of the\npatch?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 03 Jun 2021 14:36:15 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 02:36:15PM -0700, Jeff Davis wrote:\n> Do we have general agreement on this point? Did I miss another purpose\n> of detoasting in tablecmds.c, or can we just remove that part of the\n> patch?\n\nCatching up with this thread.. So, what you are suggesting here is\nthat we have no need to let ATRewriteTable() do anything about the\ndetoasting, and just push down the responsability of detoasting the\ntuple, if necessary, down to the AM layer where the tuple insertion is\nhandled, right?\n\nIn short, a table AMs would receive on a rewrite with ALTER TABLE\ntuples which may be toasted, still table_insert_tuple() should be able\nto handle both:\n- the case where this tuple was already toasted.\n- the case where this tuple has been already detoasted.\n\nYou are right that this would be more consistent with what heap does\nwith heap_prepare_insert().\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 14:58:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Fri, 2021-06-04 at 14:58 +0900, Michael Paquier wrote:\n> In short, a table AMs would receive on a rewrite with ALTER TABLE\n> tuples which may be toasted, still table_insert_tuple() should be\n> able\n> to handle both:\n> - the case where this tuple was already toasted.\n> - the case where this tuple has been already detoasted.\n\nYes. That's a current requirement, and any AM that doesn't do that is\nalready broken (e.g. for INSERT INTO ... SELECT).\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 04 Jun 2021 11:26:28 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Fri, Jun 04, 2021 at 11:26:28AM -0700, Jeff Davis wrote:\n> Yes. That's a current requirement, and any AM that doesn't do that is\n> already broken (e.g. for INSERT INTO ... SELECT).\n\nMakes sense. I was just looking at the patch, and this was the only\npart of it that made my spidey sense react.\n\nOne thing I am wondering is if we should have a dummy_table_am in\nsrc/test/modules/ to be able to stress more this feature. That does\nnot seem like a hard requirement, but relying only on heap limits a\nbit the coverage of this feature even if one changes\ndefault_table_access_method.\n--\nMichael",
"msg_date": "Sat, 5 Jun 2021 08:45:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Sat, 2021-06-05 at 08:45 +0900, Michael Paquier wrote:\n> One thing I am wondering is if we should have a dummy_table_am in\n> src/test/modules/ to be able to stress more this feature. That does\n> not seem like a hard requirement, but relying only on heap limits a\n> bit the coverage of this feature even if one changes\n> default_table_access_method.\n\nI agree that a dummy AM would be good, but implementing even a dummy AM\nis a fair amount of work. Also, there are many potential variations, so\nwe'd probably need several.\n\nThe table AM API is a work in progress, and I think it will be a few\nreleases (and require a few more table AMs in the wild) to really nail\ndown the API.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 04 Jun 2021 17:34:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Fri, Jun 04, 2021 at 05:34:36PM -0700, Jeff Davis wrote:\n> I agree that a dummy AM would be good, but implementing even a dummy AM\n> is a fair amount of work.\n\nNot much, honestly, the largest part being to document that properly\nso as it could be used as a template:\nhttps://www.postgresql.org/message-id/YEXm2nh/5j5P2jEl@paquier.xyz\n\n> Also, there are many potential variations, so\n> we'd probably need several.\n\nNot so sure here. GUCs or reloptions could be used to control some of\nthe behaviors. Now this really depends on the use-cases we are\nlooking to support here and the low-level facilities that could\nbenefit from this module (dummy_index_am tests reloptions for\nexample). I agree that this thread is perhaps not enough to justify\nadding this module for now.\n\n> The table AM API is a work in progress, and I think it will be a few\n> releases (and require a few more table AMs in the wild) to really nail\n> down the API.\n\nHard to say, we'll see. I'd like to believe that it could be a good\nto not set something in stone for that forever.\n--\nMichael",
"msg_date": "Sat, 5 Jun 2021 13:21:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, 2021-06-03 at 14:36 -0700, Jeff Davis wrote:\n> Do we have general agreement on this point? Did I miss another\n> purpose\n> of detoasting in tablecmds.c, or can we just remove that part of the\n> patch?\n\nPer discussion with Justin, I'll drive this patch. I merged the CF\nentries into https://commitfest.postgresql.org/33/3110/\n\nNew version attached, with the detoasting code removed. Could use\nanother round of validation/cleanup in case I missed something during\nthe merge.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 08 Jun 2021 17:33:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Tue, Jun 08, 2021 at 05:33:31PM -0700, Jeff Davis wrote:\n> New version attached, with the detoasting code removed. Could use\n> another round of validation/cleanup in case I missed something during\n> the merge.\n\nThis looks rather sane to me, thanks.\n\n /* Create the transient table that will receive the re-ordered data */\n OIDNewHeap = make_new_heap(tableOid, tableSpace,\n+ accessMethod\nIt strikes me that we could extend CLUSTER/VACUUM FULL to support this\noption, in the same vein as TABLESPACE. Perhaps that's not something to\nimplement or have, just wanted to mention it.\n\n+ALTER TABLE heaptable SET ACCESS METHOD heap2;\n+explain (analyze, costs off, summary off, timing off) SELECT * FROM heaptable;\n+SELECT COUNT(a), COUNT(1) FILTER(WHERE a=1) FROM heaptable;\n+DROP TABLE heaptable;\nThere is a mix of upper and lower-case characters here. It could be\nmore consistent. It seems to me that this test should actually check\nthat pg_class.relam reflects the new value.\n\n+ /* Save info for Phase 3 to do the real work */\n+ if (OidIsValid(tab->newAccessMethod))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"cannot have multiple SET ACCESS METHOD subcommands\")));\nWorth adding a test?\n\n- * with the specified persistence, which might differ from the original's.\n+ * NewTableSpace/accessMethod/persistence, which might differ from those\nNit: the name of the variable looks inconsistent with this comment.\nThe original is weird as well.\n\nI am wondering if it would be a good idea to set the new tablespace\nand new access method fields to InvalidOid within ATGetQueueEntry. We\ndo that for the persistence. Not critical at all, still..\n\n+ pass = AT_PASS_MISC;\nMaybe add a comment here?\n--\nMichael",
"msg_date": "Wed, 9 Jun 2021 13:47:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, 2021-06-09 at 13:47 +0900, Michael Paquier wrote:\n> There is a mix of upper and lower-case characters here. It could be\n> more consistent. It seems to me that this test should actually check\n> that pg_class.relam reflects the new value.\n\nDone. I also added a (negative) test for changing the AM of a\npartitioned table, which wasn't caught before.\n\n> + errmsg(\"cannot have multiple SET ACCESS METHOD\n> subcommands\")));\n> Worth adding a test?\n\nDone.\n\n> Nit: the name of the variable looks inconsistent with this comment.\n> The original is weird as well.\n\nTried to improve wording.\n\n> I am wondering if it would be a good idea to set the new tablespace\n> and new access method fields to InvalidOid within\n> ATGetQueueEntry. We\n> do that for the persistence. Not critical at all, still..\n\nDone.\n\n> + pass = AT_PASS_MISC;\n> Maybe add a comment here?\n\nDone. In that case, it doesn't matter because there's no work to be\ndone in Phase 2.\n\nRegards,\n\tJeff Davis",
"msg_date": "Wed, 09 Jun 2021 12:31:28 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 12:31 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2021-06-09 at 13:47 +0900, Michael Paquier wrote:\n> > There is a mix of upper and lower-case characters here. It could be\n> > more consistent. It seems to me that this test should actually check\n> > that pg_class.relam reflects the new value.\n>\n> Done. I also added a (negative) test for changing the AM of a\n> partitioned table, which wasn't caught before.\n>\n> > + errmsg(\"cannot have multiple SET ACCESS METHOD\n> > subcommands\")));\n> > Worth adding a test?\n>\n> Done.\n>\n> > Nit: the name of the variable looks inconsistent with this comment.\n> > The original is weird as well.\n>\n> Tried to improve wording.\n>\n> > I am wondering if it would be a good idea to set the new tablespace\n> > and new access method fields to InvalidOid within\n> > ATGetQueueEntry. We\n> > do that for the persistence. Not critical at all, still..\n>\n> Done.\n>\n> > + pass = AT_PASS_MISC;\n> > Maybe add a comment here?\n>\n> Done. In that case, it doesn't matter because there's no work to be\n> done in Phase 2.\n>\n> Regards,\n> Jeff Davis\n>\n> Hi,\n\n+ /* check if another access method change was already requested\n*/\n+ if (tab->newAccessMethod)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot change access method setting\ntwice\")));\n\nI think the error message can be refined - changing access method twice is\nsupported, as long as the two changes don't overlap.\n\nCheers\n\nOn Wed, Jun 9, 2021 at 12:31 PM Jeff Davis <pgsql@j-davis.com> wrote:On Wed, 2021-06-09 at 13:47 +0900, Michael Paquier wrote:\n> There is a mix of upper and lower-case characters here. It could be\n> more consistent. It seems to me that this test should actually check\n> that pg_class.relam reflects the new value.\n\nDone. 
I also added a (negative) test for changing the AM of a\npartitioned table, which wasn't caught before.\n\n> + errmsg(\"cannot have multiple SET ACCESS METHOD\n> subcommands\")));\n> Worth adding a test?\n\nDone.\n\n> Nit: the name of the variable looks inconsistent with this comment.\n> The original is weird as well.\n\nTried to improve wording.\n\n> I am wondering if it would be a good idea to set the new tablespace\n> and new access method fields to InvalidOid within\n> ATGetQueueEntry. We\n> do that for the persistence. Not critical at all, still..\n\nDone.\n\n> + pass = AT_PASS_MISC;\n> Maybe add a comment here?\n\nDone. In that case, it doesn't matter because there's no work to be\ndone in Phase 2.\n\nRegards,\n Jeff Davis\nHi,+ /* check if another access method change was already requested */+ if (tab->newAccessMethod)+ ereport(ERROR,+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),+ errmsg(\"cannot change access method setting twice\")));I think the error message can be refined - changing access method twice is supported, as long as the two changes don't overlap.Cheers",
"msg_date": "Wed, 9 Jun 2021 13:45:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, Jun 09, 2021 at 01:45:52PM -0700, Zhihong Yu wrote:\n> + /* check if another access method change was already requested\n> */\n> + if (tab->newAccessMethod)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot change access method setting\n> twice\")));\n> \n> I think the error message can be refined - changing access method twice is\n> supported, as long as the two changes don't overlap.\n\nHmm. Do we actually need this one? ATPrepSetAccessMethod() checks\nalready that this command cannot be specified multiple times, with an\nerror message consistent with what SET TABLESPACE does.\n--\nMichael",
"msg_date": "Thu, 10 Jun 2021 11:02:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, Jun 09, 2021 at 01:47:18PM +0900, Michael Paquier wrote:\n> On Tue, Jun 08, 2021 at 05:33:31PM -0700, Jeff Davis wrote:\n> > New version attached, with the detoasting code removed. Could use\n> > another round of validation/cleanup in case I missed something during\n> > the merge.\n> \n> This looks rather sane to me, thanks.\n> \n> /* Create the transient table that will receive the re-ordered data */\n> OIDNewHeap = make_new_heap(tableOid, tableSpace,\n> + accessMethod\n> It strikes me that we could extend CLUSTER/VACUUM FULL to support this\n> option, in the same vein as TABLESPACE. Perhaps that's not something to\n> implement or have, just wanted to mention it.\n\nIt's a good thought. But remember that that c5b28604 handled REINDEX\n(TABLESPACE) but not CLUSTER/VACUUM FULL (TABLESPACE). You wrote:\nhttps://www.postgresql.org/message-id/YBuWbzoW6W7AaMv0%40paquier.xyz\n> Regarding the VACUUM and CLUSTER cases, I am not completely sure if\n> going through these for a tablespace case is the best move we can do,\n> as ALTER TABLE is able to mix multiple operations and all of them\n> require already an AEL to work. REINDEX was different thanks to the\n> case of CONCURRENTLY. Anyway, as a lot of work has been done here\n> already, I would recommend to create new threads about those two\n> topics. I am also closing this patch in the CF app.\n\nIn any case, I think we really want to have an ALTER .. SET ACCESS METHOD.\nSupporting it also in CLUSTER/VACUUM is an optional, additional feature.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 9 Jun 2021 21:35:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, Jun 09, 2021 at 01:45:52PM -0700, Zhihong Yu wrote:\n> + /* check if another access method change was already requested\n> */\n> + if (tab->newAccessMethod)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot change access method setting twice\")));\n> \n> I think the error message can be refined - changing access method twice is\n> supported, as long as the two changes don't overlap.\n\nThat language is consistent wtih existing errors.\n\nsrc/backend/commands/tablecmds.c: errmsg(\"cannot change persistence setting twice\")));\nsrc/backend/commands/tablecmds.c: errmsg(\"cannot change persistence setting twice\")));\n errmsg(\"cannot alter type of column \\\"%s\\\" twice\",\n\nHowever, I think that SET TABLESPACE is a better template to follow:\n errmsg(\"cannot have multiple SET TABLESPACE subcommands\")));\n\nMichael pointed out that there's two, redundant checks:\n\n+ if (rel->rd_rel->relam == amoid)\n+ return;\n+ \n+ /* Save info for Phase 3 to do the real work */\n+ if (OidIsValid(tab->newAccessMethod))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"cannot have multiple SET ACCESS METHOD subcommands\")));\n\nI think that the \"multiple subcommands\" test should be before the \"no-op\" test.\n\n@Jeff: In my original patch, I also proposed patches 2,3:\n\nSubject: [PATCH v2 2/3] Allow specifying acccess method of partitioned tables..\nSubject: [PATCH v2 3/3] Implement lsyscache get_rel_relam() \n\n\n",
"msg_date": "Wed, 9 Jun 2021 21:40:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 1:01 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2021-06-09 at 13:47 +0900, Michael Paquier wrote:\n> > There is a mix of upper and lower-case characters here. It could be\n> > more consistent. It seems to me that this test should actually check\n> > that pg_class.relam reflects the new value.\n>\n> Done. I also added a (negative) test for changing the AM of a\n> partitioned table, which wasn't caught before.\n>\n> > + errmsg(\"cannot have multiple SET ACCESS METHOD\n> > subcommands\")));\n> > Worth adding a test?\n>\n> Done.\n>\n> > Nit: the name of the variable looks inconsistent with this comment.\n> > The original is weird as well.\n>\n> Tried to improve wording.\n>\n> > I am wondering if it would be a good idea to set the new tablespace\n> > and new access method fields to InvalidOid within\n> > ATGetQueueEntry. We\n> > do that for the persistence. Not critical at all, still..\n>\n> Done.\n>\n> > + pass = AT_PASS_MISC;\n> > Maybe add a comment here?\n>\n> Done. In that case, it doesn't matter because there's no work to be\n> done in Phase 2.\n>\n\nThere are few compilation issues:\ntablecmds.c:4629:52: error: too few arguments to function call,\nexpected 3, have 2\nATSimplePermissions(rel, ATT_TABLE | ATT_MATVIEW);\n~~~~~~~~~~~~~~~~~~~ ^\ntablecmds.c:402:1: note: 'ATSimplePermissions' declared here\nstatic void ATSimplePermissions(AlterTableType cmdtype, Relation rel,\nint allowed_targets);\n^\ntablecmds.c:5983:10: warning: enumeration value 'AT_SetAccessMethod'\nnot handled in switch [-Wswitch]\nswitch (cmdtype)\n^\n1 warning and 1 error generated.\n\nAlso few comments need to be addressed, based on that I'm changing the\nstatus to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 14 Jul 2021 16:31:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "rebased.\n\nAlso, there were two redundant checks for multiple SET ACCESS METHOD commands.\nBut one of them wasn't hit if the ALTER was setting the current AM due to the\nno-op test.\n\nI think it's better to fail in every case, and not just sometimes (especially\nif we were to use ERRCODE_SYNTAX_ERROR).\n\nI included my 2ndary patch allowing to set the AM of partitioned table, same as\nfor a tablespace.\n\n-- \nJustin",
"msg_date": "Thu, 15 Jul 2021 22:49:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, Jul 15, 2021 at 10:49:23PM -0500, Justin Pryzby wrote:\n> Also, there were two redundant checks for multiple SET ACCESS METHOD commands.\n> But one of them wasn't hit if the ALTER was setting the current AM due to the\n> no-op test.\n\nYep.\n\n> I think it's better to fail in every case, and not just sometimes (especially\n> if we were to use ERRCODE_SYNTAX_ERROR).\n\nLooks rather fine.\n\n- if (tab->newTableSpace)\n+ if (OidIsValid(tab->newTableSpace))\nThis has no need to be part of this patch.\n\n /*\n- * If we have ALTER TABLE <sth> SET TABLESPACE provide a list of\n- * tablespaces\n+ * Complete with list of tablespaces (for SET TABLESPACE) or table AMs (for\n+ * SET ACCESS METHOD).\n */\n+ else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"SET\", \"ACCESS\", \"METHOD\"))\n+ COMPLETE_WITH_QUERY(Query_for_list_of_table_access_methods);\n else if (Matches(\"ALTER\", \"TABLE\", MatchAny, \"SET\", \"TABLESPACE\"))\n COMPLETE_WITH_QUERY(Query_for_list_of_tablespaces);\nNit, there is no need to merge both block here. Let's keep them\nseparated.\n\n+-- negative test\n[...]\n+-- negative test\nThose descriptions could be better, and describe what they prevent\n(aka no multiple subcommands SET ACCESS METHOD and not allowed on\npartitioned tables).\n\n> I included my 2ndary patch allowing to set the AM of partitioned table, same as\n> for a tablespace.\n\nI would suggest to not hide this topic within a thread unrelated to\nit, as this is not going to ease the feedback around it. Let's start\na new thread if you feel this is necessary.\n\nJeff, you proposed to commit this patch upthread. Are you planning to\nlook at that and do so?\n--\nMichael",
"msg_date": "Mon, 19 Jul 2021 16:21:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Fri, Jul 16, 2021 at 9:19 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> rebased.\n>\n> Also, there were two redundant checks for multiple SET ACCESS METHOD commands.\n> But one of them wasn't hit if the ALTER was setting the current AM due to the\n> no-op test.\n>\n> I think it's better to fail in every case, and not just sometimes (especially\n> if we were to use ERRCODE_SYNTAX_ERROR).\n>\n> I included my 2ndary patch allowing to set the AM of partitioned table, same as\n> for a tablespace.\n\nOne of the tests is failing, please post an updated patch for this:\ncreate_am.out 2021-07-22 10:34:56.234654166 +0530\n@@ -177,6 +177,7 @@\n (1 row)\n\n -- CREATE TABLE .. PARTITION BY supports USING\n+-- new partitions will inherit from the current default, rather the\npartition root\n CREATE TABLE tableam_parted_heap2 (a text, b int) PARTITION BY list\n(a) USING heap2;\n SET default_table_access_method = 'heap';\n CREATE TABLE tableam_parted_a_heap2 PARTITION OF tableam_parted_heap2\nFOR VALUES IN ('a');\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 22 Jul 2021 10:37:12 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 10:37:12AM +0530, vignesh C wrote:\n> On Fri, Jul 16, 2021 at 9:19 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > rebased.\n> >\n> > Also, there were two redundant checks for multiple SET ACCESS METHOD commands.\n> > But one of them wasn't hit if the ALTER was setting the current AM due to the\n> > no-op test.\n> >\n> > I think it's better to fail in every case, and not just sometimes (especially\n> > if we were to use ERRCODE_SYNTAX_ERROR).\n> >\n> > I included my 2ndary patch allowing to set the AM of partitioned table, same as\n> > for a tablespace.\n> \n> One of the tests is failing, please post an updated patch for this:\n> create_am.out 2021-07-22 10:34:56.234654166 +0530\n\nIt looks like one hunk was missing/uncommitted from the 0002 patch..\n\n-- \nJustin",
"msg_date": "Thu, 22 Jul 2021 04:41:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 04:41:54AM -0500, Justin Pryzby wrote:\n> It looks like one hunk was missing/uncommitted from the 0002 patch..\n\nOkay, hearing nothing, I have looked again at 0001 and did some light\nadjustments, mainly in the tests. I did not spot any issues in the\npatch, so that looks good to go for me.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 16:38:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 04:38:48PM +0900, Michael Paquier wrote:\n> Okay, hearing nothing, I have looked again at 0001 and did some light\n> adjustments, mainly in the tests. I did not spot any issues in the\n> patch, so that looks good to go for me.\n\nAnd done as of b048326.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 12:23:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, 2021-07-28 at 12:23 +0900, Michael Paquier wrote:\n> On Tue, Jul 27, 2021 at 04:38:48PM +0900, Michael Paquier wrote:\n> > Okay, hearing nothing, I have looked again at 0001 and did some\n> > light\n> > adjustments, mainly in the tests. I did not spot any issues in the\n> > patch, so that looks good to go for me.\n> \n> And done as of b048326.\n\nI just returned from vacation and I was about ready to commit this\nmyself, but I noticed that it doesn't seem to be calling\nInvokeObjectPostAlterHook(). I was in the process of trying to be sure\nof where to call it. It looks like it should be called after catalog\nchanges but before CommandCounterIncrement(), and it also looks like it\nshould be called even for no-op commands.\n\nAlso, I agree with Justin that it should fail when there are multiple\nSET ACCESS METHOD subcommands consistently, regardless of whether one\nis a no-op, and it should probably throw a syntax error to match SET\nTABLESPACE.\n\nMinor nit: in tab-complete.c, why does it say \"<smt>\"? Is that just a\ntypo or is there a reason it's different from everything else, which\nuses \"<sth>\"? And what does \"sth\" mean anyway?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 20:40:59 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Tue, Jul 27, 2021 at 08:40:59PM -0700, Jeff Davis wrote:\n> I just returned from vacation and I was about ready to commit this\n> myself, but I noticed that it doesn't seem to be calling\n> InvokeObjectPostAlterHook().\n\nArg, sorry about that! I was unclear what the situation of the patch\nwas.\n\n> I was in the process of trying to be sure\n> of where to call it. It looks like it should be called after catalog\n> changes but before CommandCounterIncrement(), and it also looks like it\n> should be called even for no-op commands.\n\nRight. Isn't that an older issue though? A rewrite involved after a\nchange of relpersistence does not call the hook either. It looks to\nme that this should be after finish_heap_swap() to match with\nATExecSetTableSpace() in ATRewriteTables(). The only known user of\nobject_access_hook in the core code is sepgsql, so this would\ninvolve a change of behavior. And I don't recall any backpatching\nthat added a post-alter hook.\n\n> Also, I agree with Justin that it should fail when there are multiple\n> SET ACCESS METHOD subcommands consistently, regardless of whether one\n> is a no-op, and it should probably throw a syntax error to match SET\n> TABLESPACE.\n\nHmm. Okay.\n\n> Minor nit: in tab-complete.c, why does it say \"<smt>\"? Is that just a\n> typo or is there a reason it's different from everything else, which\n> uses \"<sth>\"? And what does \"sth\" mean anyway?\n\n\"Something\". That should be \"<sth>\" to be consistent with the area.\n--\nMichael",
"msg_date": "Wed, 28 Jul 2021 14:02:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, 2021-07-28 at 14:02 +0900, Michael Paquier wrote:\n> Arg, sorry about that! I was unclear what the situation of the patch\n> was.\n\nNo problem, race condition ;-)\n\n> Right. Isn't that an older issue though? A rewrite involved after a\n> change of relpersistence does not call the hook either. It looks to\n> me that this should be after finish_heap_swap() to match with\n> ATExecSetTableSpace() in ATRewriteTables(). The only known user of\n> object_access_hook in the core code is sepgsql, so this would\n> involve a change of behavior. And I don't recall any backpatching\n> that added a post-alter hook.\n\nSounds like it should be a different patch. Thank you.\n\n> > Also, I agree with Justin that it should fail when there are\n> > multiple\n> > SET ACCESS METHOD subcommands consistently, regardless of whether\n> > one\n> > is a no-op, and it should probably throw a syntax error to match\n> > SET\n> > TABLESPACE.\n> \n> Hmm. Okay.\n> \n> > Minor nit: in tab-complete.c, why does it say \"<smt>\"? Is that just\n> > a\n> > typo or is there a reason it's different from everything else,\n> > which\n> > uses \"<sth>\"? And what does \"sth\" mean anyway?\n> \n> \"Something\". That should be \"<sth>\" to be consistent with the area.\n\nThese two issues are pretty minor.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 13:05:10 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Wed, Jul 28, 2021 at 01:05:10PM -0700, Jeff Davis wrote:\n> On Wed, 2021-07-28 at 14:02 +0900, Michael Paquier wrote:\n>> Right. Isn't that an older issue though? A rewrite involved after a\n>> change of relpersistence does not call the hook either. It looks to\n>> me that this should be after finish_heap_swap() to match with\n>> ATExecSetTableSpace() in ATRewriteTables(). The only known user of\n>> object_access_hook in the core code is sepgsql, so this would\n>> involve a change of behavior. And I don't recall any backpatching\n>> that added a post-alter hook.\n> \n> Sounds like it should be a different patch. Thank you.\n\nDoing any checks around the hooks of objectaccess.h is very annoying,\nbecause we have no modules to check after them easily except sepgsql.\nAnyway, I have been checking that, with the hack-ish module attached\nand tracked down that swap_relation_files() calls\nInvokeObjectPostAlterHookArg() already (as you already spotted?), but\nthat's an internal change when it comes to SET LOGGED/UNLOGGED/ACCESS\nMETHOD :(\n\nAttached is a small module I have used for those tests, for\nreference. It passes on HEAD, and with the patch attached you can see\nthe extra entries.\n\n>>> Also, I agree with Justin that it should fail when there are\n>>> multiple\n>>> SET ACCESS METHOD subcommands consistently, regardless of whether\n>>> one\n>>> is a no-op, and it should probably throw a syntax error to match\n>>> SET\n>>> TABLESPACE.\n>> \n>> Hmm. Okay.\n\nI'd still disagree with that. One example is SET LOGGED that would\nnot fail when repeated multiple times. Also, tracking down if a SET\nACCESS METHOD subcommand has been passed down requires an extra field\nin each tab entry so I am not sure that this is worth the extra\ncomplication.\n\nI can see benefits and advantages one way or the other, and I would\ntend to keep the code a maximum simple as we never store InvalidOid\nfor a table AM. Anyway, I won't fight the majority either.\n\n>>> Minor nit: in tab-complete.c, why does it say \"<smt>\"? Is that just\n>>> a\n>>> typo or is there a reason it's different from everything else,\n>>> which\n>>> uses \"<sth>\"? And what does \"sth\" mean anyway?\n>> \n>> \"Something\". That should be \"<sth>\" to be consistent with the area.\n> \n> These two issues are pretty minor.\n\nFixed this one, while not forgetting about it. Thanks.\n--\nMichael",
"msg_date": "Thu, 29 Jul 2021 15:27:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, 2021-07-29 at 15:27 +0900, Michael Paquier wrote:\n> Doing any checks around the hooks of objectaccess.h is very annoying,\n> because we have no modules to check after them easily except sepgsql.\n> Anyway, I have been checking that, with the hack-ish module attached\n> and tracked down that swap_relation_files() calls\n> InvokeObjectPostAlterHookArg() already (as you already spotted?), but\n> that's an internal change when it comes to SET LOGGED/UNLOGGED/ACCESS\n> METHOD :(\n> \n> Attached is a small module I have used for those tests, for\n> reference. It passes on HEAD, and with the patch attached you can\n> see\n> the extra entries.\n\nI see that ATExecSetTableSpace() also invokes the hook even for a no-\nop. Should we do the same thing for setting the AM?\n\n> > > > Also, I agree with Justin that it should fail when there are\n> > > > multiple\n> > > > SET ACCESS METHOD subcommands consistently, regardless of\n> > > > whether\n> > > > one\n> > > > is a no-op, and it should probably throw a syntax error to\n> > > > match\n> > > > SET\n> > > > TABLESPACE.\n> > > \n> > > Hmm. Okay.\n> \n> I'd still disagree with that.\n\nOK, I won't press for a change here.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 29 Jul 2021 08:55:21 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 08:55:21AM -0700, Jeff Davis wrote:\n> I see that ATExecSetTableSpace() also invokes the hook even for a no-\n> op. Should we do the same thing for setting the AM?\n\nLooking at the past, it was the intention of 05f3f9c7 to go through\nthe hook even if SET TABLESPACE does not move the relation, so you are\nright that ALTER TABLE is inconsistent to not do the same for LOGGED,\nUNLOGGED and ACCESS METHOD if all of them do nothing to trigger a\nrelation rewrite.\n\nNow, I am a bit biased about this change and if we actually need it\nfor the no-op path. If we were to do that, I think that we need to\nadd in AlteredTableInfo a way to track down if any of those\nsubcommands have been used to allow the case of rewrite == 0 to launch\nthe hook even if these are no-ops. And I am not sure if that's worth\nthe code complication for an edge case. We definitely should have a\nhook call for the case of rewrite > 0, though.\n--\nMichael",
"msg_date": "Fri, 30 Jul 2021 16:22:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Fri, 2021-07-30 at 16:22 +0900, Michael Paquier wrote:\n> Looking at the past, it was the intention of 05f3f9c7 to go through\n> the hook even if SET TABLESPACE does not move the relation, so you\n> are\n> right that ALTER TABLE is inconsistent to not do the same for LOGGED,\n> UNLOGGED and ACCESS METHOD if all of them do nothing to trigger a\n> relation rewrite.\n> \n> Now, I am a bit biased about this change and if we actually need it\n> for the no-op path. If we were to do that, I think that we need to\n> add in AlteredTableInfo a way to track down if any of those\n> subcommands have been used to allow the case of rewrite == 0 to\n> launch\n> the hook even if these are no-ops. And I am not sure if that's worth\n> the code complication for an edge case. We definitely should have a\n> hook call for the case of rewrite > 0, though.\n\nIt sounds like anything we do here should be part of a larger change to\nmake it consistent. So I'm fine with the patch you posted.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 30 Jul 2021 14:18:02 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 02:18:02PM -0700, Jeff Davis wrote:\n> It sounds like anything we do here should be part of a larger change to\n> make it consistent. So I'm fine with the patch you posted.\n\nAs a matter of curiosity, I have checked how it would look to handle\nthe no-op case for the sub-commands other than SET TABLESPACE, and one\nwould need something like the attached, with a new flag for\nAlteredTableInfo. That does not really look good, but it triggers\nproperly the object access hook when SET LOGGED/UNLOGGED/ACCESS METHOD\nare no-ops, so let's just handle the case using the version from\nupthread. I'll do that at the beginning of next week.\n--\nMichael",
"msg_date": "Sat, 7 Aug 2021 19:18:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Sat, Aug 07, 2021 at 07:18:19PM +0900, Michael Paquier wrote:\n> As a matter of curiosity, I have checked how it would look to handle\n> the no-op case for the sub-commands other than SET TABLESPACE, and one\n> would need something like the attached, with a new flag for\n> AlteredTableInfo. That does not really look good, but it triggers\n> properly the object access hook when SET LOGGED/UNLOGGED/ACCESS METHOD\n> are no-ops, so let's just handle the case using the version from\n> upthread. I'll do that at the beginning of next week.\n\nSo, on a closer look, it happens that this breaks the regression tests\nof sepgsql, as the two following commands in ddl.sql cause a rewrite:\nALTER TABLE regtest_table_4 ALTER COLUMN y TYPE float;\nALTER TABLE regtest_ptable_4 ALTER COLUMN y TYPE float;\n\nI have been fighting with SE/Linux for a couple of hours to try to\nfigure out how to run our regression tests, but gave up after running\ninto various segmentation faults even after following all the steps of\nthe documentation to set up things. Perhaps that's because I just set\nup the environment with a recent Debian, I don't know. Instead of\nrunning those tests, I have fallen back to my own module and ran all\nthe SQLs of sepgsql to find out places where there are rewrites where\nI spotted those two places.\n\nOne thing I have noticed is that sepgsql-regtest.te fails to compile\nbecause /usr/share/selinux/devel/Makefile does not understand\nauth_read_passwd(). Looking at some of the SE/Linux repos, perhaps\nthis ought to be auth_read_shadow() instead to be able to work with a\nnewer version?\n\nAnyway, as the addition of this InvokeObjectPostAlterHook() is\nconsistent for a rewrite caused by SET LOGGED/UNLOGGED/ACCESS METHOD I\nhave applied the patch. I'll fix rhinoceros once it reports back the\ndiffs in output.\n--\nMichael",
"msg_date": "Tue, 10 Aug 2021 12:24:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 12:24:13PM +0900, Michael Paquier wrote:\n> So, on a closer look, it happens that this breaks the regression tests\n> of sepgsql, as the two following commands in ddl.sql cause a rewrite:\n> ALTER TABLE regtest_table_4 ALTER COLUMN y TYPE float;\n> ALTER TABLE regtest_ptable_4 ALTER COLUMN y TYPE float;\n\nrhinoceros has reported back, and these are the only two that required\nan adjustment, so fixed.\n--\nMichael",
"msg_date": "Tue, 10 Aug 2021 13:17:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: alter table set TABLE ACCESS METHOD"
}
] |
[
{
"msg_contents": "I noticed that some of the slowest cases in Joel's regex test corpus\nhad issues with back-reference matching, and along the way to fixing\nthat discovered what seems to me to be a bug in the engine's handling\nof back-references. To wit, what should happen if a back-reference\nis to a subexpression that contains constraints? A simple example is\n\n\tSELECT regexp_match('foof', '(^f)o*\\1');\n\nTo my mind, the back reference is only chartered to match the literal\ncharacters matched by the referenced subexpression. Here, since that\nexpression matches \"f\", the backref should too, and thus we should get\na match to \"foof\". Perl gives that answer, anyway; but our existing\ncode says there's no match. That's because it effectively copies the\nconstraints within the referenced subexpression, in addition to making\nthe data comparison. The \"^\" can't match where the second \"f\" is, so\nwe lose.\n\n0001 attached fixes this by stripping constraint arcs out of the NFA\nthat's applied to the backref subre tree node.\n\nNow, as to the performance issue ... if you load up the data in\n\"trouble.sql\" attached, and do\n\n\tSELECT regexp_matches(subject, pattern, 'g') FROM trouble;\n\nyou'll be waiting a good long time, even with our recent improvements.\n(Up to now I hadn't tried the 'g' flag with Joel's test cases, so\nI hadn't noticed what a problem this particular example has got.)\nThe reason for the issue is that the pattern is\n\n\t([\"'`])(?:\\\\\\1|.)*?\\1\n\nand the subject string has a mix of \" and ' quote characters. As\ncurrently implemented, our engine tries to resolve the match at\nany substring ending in either \" or ', since the NFA created for\nthe backref can match either. That leads to O(N^2) time wasted\ntrying to verify wrong matches.\n\nI realized that this could be improved by replacing the NFA/DFA\nmatch step for a backref node with a string literal match, if the\nbackreference match string is already known at the time we try\nto apply the NFA/DFA. That's not a panacea, but it helps in most\nsimple cases including this one. The way to visualize what is\nhappening is that we have a tree of binary concatenation nodes:\n\n\t concat\n\t / \\\n\t capture concat\n\t / \\\n\t other stuff backref\n\nEach concat node performs fast NFA/DFA checks on both its children\nbefore recursing to the children to make slow exact checks. When we\nrecurse to the capture node, it records the actual match substring,\nso now we know whether the capture is \" or '. Then, when we recurse\nto the lower concat node, the capture is available while it makes\nNFA/DFA checks for its two children; so it will never mistakenly\nguess that its second child matches a substring it doesn't, and\nthus it won't try to do exact checking of the \"other stuff\" on a\nmatch that's bound to fail later.\n\nSo this works as long as the tree of concat nodes is right-deep,\nwhich fortunately is the normal case. It won't help if we have\na left-deep tree:\n\n\t concat\n\t / \\\n\t concat backref\n\t / \\\n\tcapture other stuff\n\nbecause the upper concat node will do its NFA/DFA check on the backref\nnode before recursing to its left child, where the capture will occur.\nBut to get that tree, you have to have written extra parentheses:\n\n\t((capture)otherstuff)\\2\n\nI don't see a way to improve that situation, unless perhaps with\nmassive rejiggering of the regex execution engine. But 0002 attached\ndoes help a lot in the simple case.\n\n(BTW, the connection between 0001 and 0002 is that if we want to keep\nthe existing semantics that a backref enforces constraints, 0002\ndoesn't work, since it won't do that.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 28 Feb 2021 19:53:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Regex back-reference semantics and performance"
},
{
"msg_contents": "On Mon, Mar 1, 2021, at 01:53, Tom Lane wrote:\n>0001-fix-backref-semantics.patch\n>0002-backref-performance-hack.patch\n\nI've successfully tested both patches.\n\nOn HEAD the trouble-query took forever, I cancelled it after 23 minutes.\n\nHEAD (f5a5773a9dc4185414fe538525e20d8512c2ba35):\nSELECT regexp_matches(subject, pattern, 'g') FROM trouble;\n^CCancel request sent\nERROR: canceling statement due to user request\nTime: 1387398.764 ms (23:07.399)\n\nHEAD + 0001 + 0002:\nSELECT regexp_matches(subject, pattern, 'g') FROM trouble;\nTime: 24.943 ms\nTime: 22.217 ms\nTime: 20.250 ms\n\nVery nice!\n\nI also verified the patches gave the same result for the performance_test:\n\nSELECT\n is_match <> (subject ~ pattern) AS is_match_diff,\n captured IS DISTINCT FROM regexp_match(subject, pattern, flags) AS captured_diff,\n COUNT(*)\nFROM performance_test\nGROUP BY 1,2\nORDER BY 1,2\n;\n\nis_match_diff | captured_diff | count\n---------------+---------------+---------\nf | f | 3360068\n(1 row)\n\nNo notable timing differences:\n\nHEAD (f5a5773a9dc4185414fe538525e20d8512c2ba35)\nTime: 97016.668 ms (01:37.017)\nTime: 96945.567 ms (01:36.946)\nTime: 95261.263 ms (01:35.261)\n\nHEAD + 0001:\nTime: 97165.302 ms (01:37.165)\nTime: 96327.836 ms (01:36.328)\nTime: 96295.643 ms (01:36.296)\n\nHEAD + 0001 + 0002:\nTime: 96447.527 ms (01:36.448)\nTime: 94262.288 ms (01:34.262)\nTime: 95331.483 ms (01:35.331)\n\n/Joel\n",
"msg_date": "Mon, 01 Mar 2021 12:13:56 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: Regex back-reference semantics and performance"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Mon, Mar 1, 2021, at 01:53, Tom Lane wrote:\n>> 0001-fix-backref-semantics.patch\n>> 0002-backref-performance-hack.patch\n\n> I've successfully tested both patches.\n\nAgain, thanks for testing!\n\n> On HEAD the trouble-query took forever, I cancelled it after 23 minutes.\n\nYeah, I have not had the patience to run it to completion either.\n\n> No notable timing differences:\n\nI'm seeing a win of maybe 1% across the entire corpus, which isn't\nmuch but it's something. It's not too surprising that this backref\nissue is seldom hit, or we'd have had more complaints about it.\n\nBTW, I had what seemed like a great idea to improve the situation in\nthe left-deep-tree case I talked about: we could remember the places\nwhere we'd had to use the NFA to check a backreference subre, and\nthen at the point where we capture the reference string, recheck any\nprevious approximate answers, and fail the capture node's match when\nany previous backref doesn't match. Turns out this only mostly works.\nIn corner cases where the match is ambiguous, it can change the\nresults from what we got before, which I don't think is acceptable.\nBasically, that allows the backref node to have action-at-a-distance\neffects on where the earlier concat node divides a substring, which\nchanges the behavior.\n\nSo it seems this is about the best we can do for now. I'll wait\na little longer to see if anyone complains about the proposed\nsemantics change, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Mar 2021 15:22:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Regex back-reference semantics and performance"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a behaviour mismatch in initdb's output between Windows and Unix in\nPostgreSQL. When we run initdb, it displays a command at the end of the\noutput which is used to run the server. It displays properly in Unix where\nwe can directly use the command to run the server. But in the case of\nWindows, the command shown in initdb's output cannot be directly used to\nrun the server, since Windows uses backward slashes in paths but the\ncommand contains forward slashes.\n\nPlease refer to the sample output of initdb given below. The command\ncontains 2 paths.\n1. pg_ctl path - This path is handled properly in Unix, whereas in Windows\nthis path contains forward slashes, which is Unix-compliant, and it won't\nwork in Windows.\n2. data path - This path is handled properly in both Unix and Windows.\n\nC:\\Users\\Administrator\\Desktop\\Nitin_PostgreSQL\\postgresql_test\\src\\tools\\msvc>install\\bin\\initdb.exe\n> -D database\\data\n> The files belonging to this database system will be owned by user\n> \"Administrator\".\n> This user must also own the server process.\n> The database cluster will be initialized with locale \"English_United\n> States.1252\".\n> The default database encoding has accordingly been set to \"WIN1252\".\n> The default text search configuration will be set to \"english\".\n> Data page checksums are disabled.\n> creating directory database/data ... ok\n> creating subdirectories ... ok\n> selecting dynamic shared memory implementation ... windows\n> selecting default max_connections ... 100\n> selecting default shared_buffers ... 128MB\n> selecting default time zone ... US/Pacific\n> creating configuration files ... ok\n> running bootstrap script ... ok\n> performing post-bootstrap initialization ... ok\n> syncing data to disk ... ok\n> initdb: warning: enabling \"trust\" authentication for local connections\n> You can change this by editing pg_hba.conf or using the option -A, or\n> --auth-local and --auth-host, the next time you run initdb.\n> Success. You can now start the database server using:\n> install/bin/pg_ctl -D ^\"database^\\data^\" -l logfile start\n\n\nThe problem in the code is that the canonicalize_path() function simplifies\nthe path and also converts the native path to a Unix-style path while doing\nso. That is why the path shown on Windows contains forward slashes.\nPlease refer to the piece of code below for more information.\n\n> /* Get directory specification used to start initdb ... */\n> strlcpy(pg_ctl_path, argv[0], sizeof(pg_ctl_path));\n> canonicalize_path(pg_ctl_path);\n> get_parent_directory(pg_ctl_path);\n> /* ... and tag on pg_ctl instead */\n> join_path_components(pg_ctl_path, pg_ctl_path, \"pg_ctl\");\n\n\nIn the case of the data directory path, the native path is used. Hence there\nis no problem.\n\n> /* path to pg_ctl, properly quoted */\n> appendShellString(start_db_cmd, pg_ctl_path);\n> /* add -D switch, with properly quoted data directory */\n> appendPQExpBufferStr(start_db_cmd, \" -D \");\n> appendShellString(start_db_cmd, pgdata_native);\n\n\nThe solution to this issue is to convert 'pg_ctl_path' to a native path\nbefore displaying it. The issue is fixed in the patch attached.\n\nFollowing is the sample output after fixing the issue.\n\n Success. You can now start the database server using:\n> ^\"install^\\bin^\\pg^_ctl^\" -D ^\"database^\\data^\" -l logfile start\n\n\nNow this path can be directly used to run the server in Windows.\n\nPlease share your thoughts on this. If we go ahead with this change, then\nwe need to back-patch. I would be happy to create those patches.\n\nThanks and Regards,\nNitin Jadhav",
"msg_date": "Mon, 1 Mar 2021 10:22:37 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Bug fix in initdb output"
},
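For readers unfamiliar with the two path helpers discussed in this thread: `canonicalize_path` and `make_native_path` are real functions in the Postgres source (src/port/path.c), but the Python bodies below are only a simplified sketch of the separator handling at issue — the real `canonicalize_path` also trims trailing separators and collapses `.`/`..` components, which this model omits:

```python
def canonicalize_path(path: str) -> str:
    # Postgres normalizes native Windows separators to forward slashes
    # while simplifying the path (simplified model; the real function
    # also collapses "." and ".." components).
    return path.replace("\\", "/")

def make_native_path(path: str) -> str:
    # The proposed fix: convert back to the platform's native separator
    # before printing the start command on Windows.
    return path.replace("/", "\\")

p = canonicalize_path(r"install\bin\initdb.exe")
assert p == "install/bin/initdb.exe"        # what initdb currently prints from
assert make_native_path(p) == r"install\bin\initdb.exe"
```

This is why the data directory (kept as `pgdata_native`) prints correctly while the pg_ctl path does not: only the latter passes through canonicalization before display.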
{
"msg_contents": "On Mon, Mar 1, 2021 at 5:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n>\n>> Please share your thoughts on this. If we go ahead with this change,\n> then we need to back-patch. I would be happy to create those patches.\n>\n\nA full path works, even with the slashes. The commiter will take care of\nback-patching, if needed. As for HEAD at least, this LGTM.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 1 Mar 2021 14:58:35 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n\n> On Mon, Mar 1, 2021 at 5:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\n> wrote:\n> \n> >\n> >> Please share your thoughts on this. If we go ahead with this change,\n> > then we need to back-patch. I would be happy to create those patches.\n> \n> A full path works, even with the slashes. The commiter will take care of\n> back-patching, if needed. As for HEAD at least, this LGTM.\n\nI don't get it. I thought the windows API accepted both forward slashes\nand backslashes as path separators. Did you try the command and see it\nfail?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n",
"msg_date": "Mon, 1 Mar 2021 15:16:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "El lun., 1 mar. 2021 19:16, Alvaro Herrera <alvherre@alvh.no-ip.org>\nescribió:\n\n>\n> I don't get it. I thought the windows API accepted both forward slashes\n> and backslashes as path separators. Did you try the command and see it\n> fail?\n>\n\nThis is not a problem with the APi, but the shell. e.g. when using a CMD:\n\n- This works:\nc:\\>c:\\Windows\\System32\\notepad.exe\nc:\\>c:/Windows/System32/notepad.exe\nc:\\>/Windows/System32/notepad.exe\n\n- This doesn't:\nc:\\>./Windows/System32/notepad.exe\n'.' is not recognized as an internal or external command,\noperable program or batch file.\nc:\\>Windows/System32/notepad.exe\n'Windows' is not recognized as an internal or external command,\noperable program or batch file.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 1 Mar 2021 19:39:49 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n\n> This is not a problem with the APi, but the shell. e.g. when using a CMD:\n> \n> - This works:\n> c:\\>c:\\Windows\\System32\\notepad.exe\n> c:\\>c:/Windows/System32/notepad.exe\n> c:\\>/Windows/System32/notepad.exe\n> \n> - This doesn't:\n> c:\\>./Windows/System32/notepad.exe\n> '.' is not recognized as an internal or external command,\n> operable program or batch file.\n> c:\\>Windows/System32/notepad.exe\n> 'Windows' is not recognized as an internal or external command,\n> operable program or batch file.\n\nAh, so another way to fix it would be to make the path to pg_ctl be\nabsolute?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Mon, 1 Mar 2021 15:50:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 7:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n>\n> > This is not a problem with the APi, but the shell. e.g. when using a CMD:\n> >\n> > - This works:\n> > c:\\>c:\\Windows\\System32\\notepad.exe\n> > c:\\>c:/Windows/System32/notepad.exe\n> > c:\\>/Windows/System32/notepad.exe\n> >\n> > - This doesn't:\n> > c:\\>./Windows/System32/notepad.exe\n> > '.' is not recognized as an internal or external command,\n> > operable program or batch file.\n> > c:\\>Windows/System32/notepad.exe\n> > 'Windows' is not recognized as an internal or external command,\n> > operable program or batch file.\n>\n> Ah, so another way to fix it would be to make the path to pg_ctl be\n> absolute?\n>\n\nYes, that's right. If you call initdb with an absolute path you won't see a\nproblem.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 1 Mar 2021 19:57:51 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n\n> On Mon, Mar 1, 2021 at 7:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n\n> > Ah, so another way to fix it would be to make the path to pg_ctl be\n> > absolute?\n> \n> Yes, that's right. If you call initdb with an absolute path you won't see a\n> problem.\n\nSo, is make_native_path a better fix than make_absolute_path? (I find\nit pretty surprising that initdb shows a relative path, but maybe that's\njust me.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)\n\n\n",
"msg_date": "Mon, 1 Mar 2021 17:09:29 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 9:09 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n>\n> > On Mon, Mar 1, 2021 at 7:50 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n>\n> > > Ah, so another way to fix it would be to make the path to pg_ctl be\n> > > absolute?\n> >\n> > Yes, that's right. If you call initdb with an absolute path you won't\n> see a\n> > problem.\n>\n> So, is make_native_path a better fix than make_absolute_path? (I find\n> it pretty surprising that initdb shows a relative path, but maybe that's\n> just me.)\n>\n\nUhm, now that you point it out, an absolute path would make the message\nmore consistent and reusable.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 1 Mar 2021 21:49:25 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n\n> Uhm, now that you point it out, an absolute path would make the message\n> more consistent and reusable.\n\nWell. This code was introduced in a00c58314745, with discussion at\nhttp://postgr.es/m/CAHeEsBeAe1FeBypT3E8R1ZVZU0e8xv3A-7BHg6bEOi=jZny2Uw@mail.gmail.com\nwhich did not touch on the point of the pg_ctl path being relative or\nabsolute. The previous decision to use relative seems to have been made\nhere in commit ee814b4511ec, which was backed by this discussion\nhttps://www.postgresql.org/message-id/flat/200411020134.52513.peter_e%40gmx.net\n\nSo I'm not sure that anybody would love me if I change it again to\nabsolute.\n\n-- \nÁlvaro Herrera Valdivia, Chile\nVoy a acabar con todos los humanos / con los humanos yo acabaré\nvoy a acabar con todos (bis) / con todos los humanos acabaré ¡acabaré! (Bender)\n\n\n",
"msg_date": "Mon, 1 Mar 2021 18:18:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 10:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Mar-01, Juan José Santamaría Flecha wrote:\n>\n> > Uhm, now that you point it out, an absolute path would make the message\n> > more consistent and reusable.\n>\n> Well. This code was introduced in a00c58314745, with discussion at\n>\n> http://postgr.es/m/CAHeEsBeAe1FeBypT3E8R1ZVZU0e8xv3A-7BHg6bEOi=jZny2Uw@mail.gmail.com\n> which did not touch on the point of the pg_ctl path being relative or\n> absolute. The previous decision to use relative seems to have been made\n> here in commit ee814b4511ec, which was backed by this discussion\n>\n> https://www.postgresql.org/message-id/flat/200411020134.52513.peter_e%40gmx.net\n>\n> So I'm not sure that anybody would love me if I change it again to\n> absolute.\n>\n\nFor me it is a +1 for the change to absolute. Let's see if more people want\nto weigh in on the matter.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 2 Mar 2021 01:28:57 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On Tue, Mar 02, 2021 at 01:28:57AM +0100, Juan José Santamaría Flecha wrote:\n> For me it is a +1 for the change to absolute. Let's see if more people want\n> to weigh in on the matter.\n\nFWIW, I don't think that it is a good idea to come back to this\ndecision for *nix platforms, so I would let it as-is, and use relative\npaths if initdb is called using a relative path.\n\nThe number of people using a relative path for Windows initialization\nsounds rather limited to me. However, if you can write something that\nmakes the path printed compatible for a WIN32 terminal while keeping\nthe behavior consistent with other platforms, people would have no\nobjections to that.\n--\nMichael",
"msg_date": "Tue, 2 Mar 2021 10:23:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": ">\n> FWIW, I don't think that it is a good idea to come back to this\n> decision for *nix platforms, so I would let it as-is, and use relative\n> paths if initdb is called using a relative path.\n\n\nThe command to be displayed either in absolute path or relative path\ndepends on the way the user is calling initdb. I agree with the above\ncomment to keep this behaviour as-is, that is if the initdb is called using\nrelative path, then it displays relative path otherwise absolute path.\n\n\n> However, if you can write something that\n> makes the path printed compatible for a WIN32 terminal while keeping\n> the behavior consistent with other platforms, people would have no\n> objections to that.\n\nI feel the patch attached above handles this scenario.\n\nThanks and Regards,\nNitin Jadhav\n\nOn Tue, Mar 2, 2021 at 6:53 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 02, 2021 at 01:28:57AM +0100, Juan José Santamaría Flecha\n> wrote:\n> > For me it is a +1 for the change to absolute. Let's see if more people\n> want\n> > to weigh in on the matter.\n>\n> FWIW, I don't think that it is a good idea to come back to this\n> decision for *nix platforms, so I would let it as-is, and use relative\n> paths if initdb is called using a relative path.\n>\n> The number of people using a relative path for Windows initialization\n> sounds rather limited to me. However, if you can write something that\n> makes the path printed compatible for a WIN32 terminal while keeping\n> the behavior consistent with other platforms, people would have no\n> objections to that.\n> --\n> Michael\n>",
"msg_date": "Tue, 2 Mar 2021 14:07:12 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On 2021-Mar-02, Nitin Jadhav wrote:\n\n> > FWIW, I don't think that it is a good idea to come back to this\n> > decision for *nix platforms, so I would let it as-is, and use\n> > relative paths if initdb is called using a relative path.\n> \n> The command to be displayed either in absolute path or relative path\n> depends on the way the user is calling initdb. I agree with the above\n> comment to keep this behaviour as-is, that is if the initdb is called\n> using relative path, then it displays relative path otherwise absolute\n> path.\n\nYeah.\n\n> > However, if you can write something that makes the path printed\n> > compatible for a WIN32 terminal while keeping the behavior\n> > consistent with other platforms, people would have no objections to\n> > that.\n> \n> I feel the patch attached above handles this scenario.\n\nAgreed. I'll push the original patch then. Thanks everybody for the\ndiscussion.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Porque francamente, si para saber manejarse a uno mismo hubiera que\nrendir examen... ¿Quién es el machito que tendría carnet?\" (Mafalda)\n\n\n",
"msg_date": "Tue, 2 Mar 2021 11:32:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "\nOn 3/2/21 9:32 AM, Alvaro Herrera wrote:\n> On 2021-Mar-02, Nitin Jadhav wrote:\n>\n>>> FWIW, I don't think that it is a good idea to come back to this\n>>> decision for *nix platforms, so I would let it as-is, and use\n>>> relative paths if initdb is called using a relative path.\n>> The command to be displayed either in absolute path or relative path\n>> depends on the way the user is calling initdb. I agree with the above\n>> comment to keep this behaviour as-is, that is if the initdb is called\n>> using relative path, then it displays relative path otherwise absolute\n>> path.\n> Yeah.\n>\n>>> However, if you can write something that makes the path printed\n>>> compatible for a WIN32 terminal while keeping the behavior\n>>> consistent with other platforms, people would have no objections to\n>>> that.\n>> I feel the patch attached above handles this scenario.\n> Agreed. I'll push the original patch then. Thanks everybody for the\n> discussion.\n>\n\n\nI'm late to the party on this which is my fault, I had my head down on\nother stuff. But I just noticed this commit. Unfortunately the commit\nmessage and this thread contain suggestions that aren't true. How do I\nknow? Because the buildfarm has for years called pg_ctl via cmd.exe with\na relative path with forward slashes. here's the relevant perl code that\nruns on all platforms:\n\n\n chdir($installdir);\n my $cmd =\n qq{\"bin/pg_ctl\" -D data-$locale -l logfile -w start >startlog\n2>&1};\n system($cmd);\n\n\nNote that the pg_ctl path is quoted, and those quotes are passed through\nto cmd.exe. That's what makes it work. It's possibly not worth changing\nit now, but if anything that's the change that should have been made here.\n\n\nJust wanted that on the record in case people got this wrong idea that\nyou can't use forward slashes when calling a program in cmd.exe.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Mar 2021 17:28:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "On Sun, Mar 21, 2021 at 10:28 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> Note that the pg_ctl path is quoted, and those quotes are passed through\n> to cmd.exe. That's what makes it work. It's possibly not worth changing\n> it now, but if anything that's the change that should have been made here.\n>\n> The OP claimed that the printed command was not working 'as-is', which is\na valid complaint.\n\nQuoting the command seems like a complete answer for this, as it will solve\nproblems with spaces and such for both Windows and Unix-like systems.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 22 Mar 2021 09:36:46 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
},
{
"msg_contents": "\nOn 3/22/21 4:36 AM, Juan José Santamaría Flecha wrote:\n>\n> On Sun, Mar 21, 2021 at 10:28 PM Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n>\n> Note that the pg_ctl path is quoted, and those quotes are passed\n> through\n> to cmd.exe. That's what makes it work. It's possibly not worth\n> changing\n> it now, but if anything that's the change that should have been\n> made here.\n>\n> The OP claimed that the printed command was not working 'as-is', which\n> is a valid complaint.\n>\n> Quoting the command seems like a complete answer for this, as it will\n> solve problems with spaces and such for both Windows and Unix-like\n> systems.\n>\n>\n\nLooking into this more closely, we are calling appendShellString() which\nis designed to ensure that we call commands via system() cleanly and\nsecurely. But we're not calling system() here. All we're doing is to\nprint a message. The caret-escaped message is horribly ugly IMNSHO. Can\nwe see if we can get something less ugly than this?:\n\n Success. You can now start the database server using:\n\n ^\"bin^\\\\pg^_ctl^\" -D data-C -l logfile start\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 22 Mar 2021 08:47:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Bug fix in initdb output"
}
] |
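The caret-escaped output Andrew objects to comes from `appendShellString`, which on Windows prefixes cmd.exe-significant characters with `^`. The exact rule set lives in the Postgres source (src/fe_utils/string_utils.c); the sketch below is only a simplified model, though it happens to reproduce the strings quoted in this thread (`^"install^\bin^\pg^_ctl^"` and `^"database^\data^"`):

```python
def cmd_caret_escape(s: str) -> str:
    # Simplified model of appendShellString's Windows branch: wrap the
    # argument in caret-escaped double quotes and prefix characters that
    # cmd.exe may treat specially with '^'.  The character set here is an
    # assumption chosen to match the output quoted in the thread, not the
    # exact Postgres rule set.
    special = set('"\\_')
    escaped = "".join(("^" + c) if c in special else c for c in s)
    return '^"' + escaped + '^"'

assert cmd_caret_escape(r"install\bin\pg_ctl") == r'^"install^\bin^\pg^_ctl^"'
assert cmd_caret_escape(r"database\data") == r'^"database^\data^"'
```

Andrew's alternative — plain double quotes around a forward-slash path, as the buildfarm's `"bin/pg_ctl"` invocation does — produces a far more readable command, at the cost of weaker protection against shell-special characters in the path.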
[
{
"msg_contents": "Full use of a custom data type with postgres_fdw currently requires the\ntype be maintained in both the local and remote databases. `CREATE\nFOREIGN TABLE` does not check declared types against the remote table,\nbut declaring e.g. a remote enum to be local text works only partway, as\nseen here. A simple select query against alpha_items returns the enum\nvalues as text; however, *filtering* on the column yields an error.\n\ncreate database alpha;\ncreate database beta;\n\n\\c alpha\n\ncreate type itemtype as enum ('one', 'two', 'three');\ncreate table items (\n id serial not null primary key,\n type itemtype not null\n);\ninsert into items (type) values ('one'), ('one'), ('two');\n\n\\c beta\n\ncreate extension postgres_fdw;\ncreate server alpha foreign data wrapper postgres_fdw options (dbname 'alpha', host 'localhost', port '5432');\ncreate user mapping for postgres server alpha options (user 'postgres');\n\ncreate foreign table alpha_items (\n id int,\n type text\n) server alpha options (table_name 'items');\nselect * from alpha_items; -- ok\nselect * from alpha_items where type = 'one';\n\nERROR: operator does not exist: public.itemtype = text\nHINT: No operator matches the given name and argument types. You might need to add explicit type casts.\nCONTEXT: remote SQL command: SELECT id, type FROM public.items WHERE ((type = 'one'::text))\n\nThe attached changeset adds a new boolean option for postgres_fdw\nforeign table columns, `use_local_type`. When true, ColumnRefs for the\nrelevant attribute will be deparsed with a cast to the type defined in\n`CREATE FOREIGN TABLE`.\n\ncreate foreign table alpha_items (\n id int,\n type text options (use_local_type 'true')\n) server alpha options (table_name 'items');\nselect * from alpha_items where type = 'one'; -- succeeds\n\nThis builds and checks, with a new regression test and documentation.",
"msg_date": "Mon, 01 Mar 2021 02:24:01 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] postgres-fdw: column option to override foreign types"
},
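To make the mechanics of the proposed `use_local_type` option concrete: the patch deparses column references with a cast to the locally declared type, so the remote query presumably becomes `WHERE ((type::text = 'one'::text))` instead of the failing enum-to-text comparison. A toy model of that deparsing decision (the function name and exact output form are illustrative assumptions, not the patch's actual code):

```python
def deparse_column_ref(colname: str, local_type: str, use_local_type: bool) -> str:
    # When the column option use_local_type is set, emit the remote
    # column reference with a cast to the locally declared type, so the
    # remote server compares text to text rather than enum to text.
    return f"{colname}::{local_type}" if use_local_type else colname

assert deparse_column_ref("type", "text", False) == "type"
assert deparse_column_ref("type", "text", True) == "type::text"
```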
{
"msg_contents": "On Mon, Mar 1, 2021 at 12:59 PM Dian M Fay <dian.m.fay@gmail.com> wrote:\n>\n> Full use of a custom data type with postgres_fdw currently requires the\n> type be maintained in both the local and remote databases. `CREATE\n> FOREIGN TABLE` does not check declared types against the remote table,\n> but declaring e.g. a remote enum to be local text works only partway, as\n> seen here. A simple select query against alpha_items returns the enum\n> values as text; however, *filtering* on the column yields an error.\n>\n> create database alpha;\n> create database beta;\n>\n> \\c alpha\n>\n> create type itemtype as enum ('one', 'two', 'three');\n> create table items (\n> id serial not null primary key,\n> type itemtype not null\n> );\n> insert into items (type) values ('one'), ('one'), ('two');\n>\n> \\c beta\n>\n> create extension postgres_fdw;\n> create server alpha foreign data wrapper postgres_fdw options (dbname 'alpha', host 'localhost', port '5432');\n> create user mapping for postgres server alpha options (user 'postgres');\n>\n> create foreign table alpha_items (\n> id int,\n> type text\n> ) server alpha options (table_name 'items');\n\npostgres_fdw assumes that the local type declared is semantically same\nas the remote type. Ideally the enum should also be declared locally\nand used to declare type's datatype. See how to handle UDTs in\npostgres_fdw at\nhttps://stackoverflow.com/questions/37734170/can-the-foreign-data-wrapper-fdw-postgres-handle-the-geometry-data-type-of-postg\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 2 Mar 2021 17:20:10 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "On Tue Mar 2, 2021 at 6:50 AM EST, Ashutosh Bapat wrote:\n> On Mon, Mar 1, 2021 at 12:59 PM Dian M Fay <dian.m.fay@gmail.com> wrote:\n> >\n> > Full use of a custom data type with postgres_fdw currently requires the\n> > type be maintained in both the local and remote databases. `CREATE\n> > FOREIGN TABLE` does not check declared types against the remote table,\n> > but declaring e.g. a remote enum to be local text works only partway, as\n> > seen here. A simple select query against alpha_items returns the enum\n> > values as text; however, *filtering* on the column yields an error.\n>\n> postgres_fdw assumes that the local type declared is semantically same\n> as the remote type. Ideally the enum should also be declared locally\n> and used to declare type's datatype. See how to handle UDTs in\n> postgres_fdw at\n> https://stackoverflow.com/questions/37734170/can-the-foreign-data-wrapper-fdw-postgres-handle-the-geometry-data-type-of-postg\n\nI'm aware, and the reason for this change is that I think it's annoying\nto declare and maintain the type on the local server for the sole\npurpose of accommodating a read-only foreign table that effectively\ntreats it like text anyway. The real scenario that prompted it is a\ntickets table with status, priority, category, etc. enums. We don't have\nplans to modify them any time soon, but if we do it's got to be\ncoordinated and deployed across two databases, all so we can use what\nmight as well be a text column in a single WHERE clause. Since foreign\ntables can be defined over subsets of columns, reordered, and names\nchanged, a little opt-in flexibility with types too doesn't seem\nmisplaced.\n\nNote that currently, postgres_fdw will strip casts on the WHERE column:\n`where type::text = 'one'` becomes `where ((type = 'one'::text))` (the\nvalue is cast separately). 
Making it respect those is another option,\nbut I thought including it in column configuration would be less\nsurprising to users who aren't aware of the difference between the local\nand remote tables.\n\n\n",
"msg_date": "Tue, 02 Mar 2021 08:34:50 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHi,\r\n\r\nThanks for the patch.\r\n\r\nI am afraid I will have to :-1: this patch. Of course it is possible that I am wrong,\r\nso please correct me if you, or any other reviewers, think so.\r\n\r\nThe problem that is intended to be solved, upon closer inspection seems to be a\r\nconscious design decision rather than a problem. Even if I am wrong there, I am\r\nnot certain that the proposed patch covers all the bases with respect to collations,\r\nbuild-in types, shipability etc for simple expressions, and covers any of more\r\ncomplicated expressions all together. \r\n\r\nAs for the scenario which prompted the patch, you wrote, quote:\r\n\r\nThe real scenario that prompted it is a\r\ntickets table with status, priority, category, etc. enums. We don't have\r\nplans to modify them any time soon, but if we do it's got to be\r\ncoordinated and deployed across two databases, all so we can use what\r\nmight as well be a text column in a single WHERE clause. Since foreign\r\ntables can be defined over subsets of columns, reordered, and names\r\nchanged, a little opt-in flexibility with types too doesn't seem\r\nmisplaced. \r\n\r\nend quote.\r\n\r\nI will add that creating a view on the remote server with type flexibility that\r\nyou wish and then create foreign tables against the view, might address your\r\nproblem.\r\n\r\nRespectfully,\r\n//Georgios",
"msg_date": "Thu, 04 Mar 2021 14:28:46 +0000",
"msg_from": "Georgios Kokolatos <gkokolatos@protonmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "On 3/4/21 9:28 AM, Georgios Kokolatos wrote:\n> \n> I am afraid I will have to :-1: this patch. Of course it is possible that I am wrong,\n> so please correct me if you, or any other reviewers, think so.\n\nI'm inclined to agree and it seems like a view on the source server is a \ngood compromise and eliminates the maintenance concerns.\n\nI'm going to mark this as Waiting on Author for now, but will close it \non March 11 if there are no arguments in support.\n\nDian, perhaps you have another angle you'd like to try?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:48:48 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "On Thu Mar 4, 2021 at 9:28 AM EST, Georgios Kokolatos wrote:\n> I am afraid I will have to :-1: this patch. Of course it is possible\n> that I am wrong,\n> so please correct me if you, or any other reviewers, think so.\n>\n> The problem that is intended to be solved, upon closer inspection\n> seems\n> to be a\n> conscious design decision rather than a problem. Even if I am wrong\n> there, I am\n> not certain that the proposed patch covers all the bases with respect\n> to\n> collations,\n> build-in types, shipability etc for simple expressions, and covers any\n> of more\n> complicated expressions all together.\n\nThanks for reviewing it!\n\nI see room for interpretation in the design here, although I have\nadmittedly not been looking at it for very long. `CREATE/ALTER FOREIGN\nTABLE` take the user at their word about types, which only map 1:1 for a\nforeign Postgres server anyway. In make_tuple_from_result_row, incoming\nvalues start as strings until they're converted to their target types --\nagain, with no guarantee that those types match those on the remote\nserver. The docs recommend types match exactly and note the sorts of\nthings that can go wrong, but there's no enforcement; either what you've\ncooked up works or it doesn't. And in fact, declaring local text for a\nremote enum seems to work quite well.... right up until you try to\nreference it in the `WHERE` clause.\n\nEnum::text seems like a safe and potentially standardizable case for\npostgres_fdw. As implemented, the patch goes beyond that, but it's\nopt-in and the docs already warn about consequences. 
I haven't tested it\nacross collations, but right now that seems like something to look into\nif the idea survives the next few messages.\n\n> I will add that creating a view on the remote server with type\n> flexibility that\n> you wish and then create foreign tables against the view, might\n> address\n> your\n> problem.\n\nA view would address the immediate issue of the types, but itself\nrequires additional maintenance if/when the underlying table's schema\nchanges (even `SELECT *` is expanded into the current column definitions\nat creation). I think it's better than copying the types, because it\nmoves the extra work of keeping local and remote synchronized to a\n*table* modification as opposed to a *type* modification, in which\nlatter case it's much easier to forget about dependents. But I'd prefer\nto avoid extra work anywhere!\n\n\n",
"msg_date": "Thu, 04 Mar 2021 14:29:47 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "\"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> On Thu Mar 4, 2021 at 9:28 AM EST, Georgios Kokolatos wrote:\n>> I am afraid I will have to :-1: this patch.\n\n> I see room for interpretation in the design here, although I have\n> admittedly not been looking at it for very long. `CREATE/ALTER FOREIGN\n> TABLE` take the user at their word about types, which only map 1:1 for a\n> foreign Postgres server anyway.\n\nRight.\n\n> In make_tuple_from_result_row, incoming\n> values start as strings until they're converted to their target types --\n> again, with no guarantee that those types match those on the remote\n> server.\n\nThe data conversion itself provides a little bit of security --- for\ninstance, converting 'foobar' to int or timestamp will fail. It's\nnot bulletproof, but on the other hand there are indeed situations\nwhere you don't want to declare the column locally with exactly the\ntype the remote server is using, so trying to be bulletproof would\nbe counterproductive.\n\nI am not, however, any more impressed than the other respondents with\nthe solution you've proposed. For one thing, this can only help if\nthe local type is known to the remote server, which seems to eliminate\nfifty per cent of the use-cases for intentional differences in type.\n(That is, isn't it equally as plausible that the local type is an\nenum you didn't bother making on the remote side?) But a bigger issue\nis that shipping\n\tWHERE foreigncol::text = 'one'::text\nto the remote server is not a nice solution even if it works. It will,\nfor example, defeat use of a normal index on foreigncol. It'd likely\nbe just as inefficient for remote joins.\n\nWhat'd be better, if we could do it, is to ship the clause in\nthe form\n\tWHERE foreigncol = 'one'\nthat is, instead of plastering a cast on the Var, try to not put\nany explicit cast on the constant. 
That fixes your original use\ncase better than what you've proposed, and I think it might be\npossible to do it unconditionally instead of needing a hacky\ncolumn property to enable it. The reason this could be okay\nis that it seems reasonable for postgres_fdw to rely on the\ncore parser's heuristic that an unknown-type literal is the\nsame type as what it's being compared to. So, if we are trying\nto deparse something of the form \"foreigncol operator constant\",\nand the foreigncol and constant are of the same type locally,\nwe could leave off the cast on the constant. (There might need\nto be some restrictions about the operator taking those types\nnatively with no cast, not sure; also this doesn't apply to\nconstants that are going to be printed as non-string literals.)\n\nSlipping this heuristic into the code structure of deparse.c\nmight be rather messy, though. I've not looked at just how\nto implement it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 16:28:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "> What'd be better, if we could do it, is to ship the clause in\n> the form\n> WHERE foreigncol = 'one'\n> that is, instead of plastering a cast on the Var, try to not put\n> any explicit cast on the constant. That fixes your original use\n> case better than what you've proposed, and I think it might be\n> possible to do it unconditionally instead of needing a hacky\n> column property to enable it. The reason this could be okay\n> is that it seems reasonable for postgres_fdw to rely on the\n> core parser's heuristic that an unknown-type literal is the\n> same type as what it's being compared to. So, if we are trying\n> to deparse something of the form \"foreigncol operator constant\",\n> and the foreigncol and constant are of the same type locally,\n> we could leave off the cast on the constant. (There might need\n> to be some restrictions about the operator taking those types\n> natively with no cast, not sure; also this doesn't apply to\n> constants that are going to be printed as non-string literals.)\n>\n> Slipping this heuristic into the code structure of deparse.c\n> might be rather messy, though. I've not looked at just how\n> to implement it.\n\nThis doesn't look too bad from here, at least so far. The attached\nchange adds a new const_showtype field to the deparse_expr_cxt, and\npasses that instead of the hardcoded 0 to deparseConst. deparseOpExpr\nmodifies const_showtype if both sides of a binary operation are text,\nand resets it to 0 after the recursion.\n\nI restricted it to text-only after seeing a regression test fail: while\ndeparsing `percentile_cont(c2/10::numeric)`, c2, an integer column, is a\nFuncExpr with a numeric return type. That matches the numeric 10, and\nwithout the explicit cast, integer-division-related havoc ensues. 
I\ndon't know why it's a FuncExpr, and I don't know why it's not an int,\nbut the constant is definitely a non-string, in any case.\n\nIn the course of testing, I discovered that the @@ text-search operator\nworks against textified enums on my stock 13.1 server (a \"proper\" enum\ncolumn yields \"operator does not exist\"). I'm rather wary of actually\ntrying to depend on that behavior, although it seems probably-safe in\nthe same character set and collation.",
"msg_date": "Sun, 07 Mar 2021 02:37:53 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres-fdw: column option to override foreign types"
},
{
"msg_contents": "On Sun Mar 7, 2021 at 2:37 AM EST, Dian M Fay wrote:\n> > What'd be better, if we could do it, is to ship the clause in\n> > the form\n> > WHERE foreigncol = 'one'\n> > that is, instead of plastering a cast on the Var, try to not put\n> > any explicit cast on the constant. That fixes your original use\n> > case better than what you've proposed, and I think it might be\n> > possible to do it unconditionally instead of needing a hacky\n> > column property to enable it. The reason this could be okay\n> > is that it seems reasonable for postgres_fdw to rely on the\n> > core parser's heuristic that an unknown-type literal is the\n> > same type as what it's being compared to. So, if we are trying\n> > to deparse something of the form \"foreigncol operator constant\",\n> > and the foreigncol and constant are of the same type locally,\n> > we could leave off the cast on the constant. (There might need\n> > to be some restrictions about the operator taking those types\n> > natively with no cast, not sure; also this doesn't apply to\n> > constants that are going to be printed as non-string literals.)\n> >\n> > Slipping this heuristic into the code structure of deparse.c\n> > might be rather messy, though. I've not looked at just how\n> > to implement it.\n>\n> This doesn't look too bad from here, at least so far. The attached\n> change adds a new const_showtype field to the deparse_expr_cxt, and\n> passes that instead of the hardcoded 0 to deparseConst. deparseOpExpr\n> modifies const_showtype if both sides of a binary operation are text,\n> and resets it to 0 after the recursion.\n>\n> I restricted it to text-only after seeing a regression test fail: while\n> deparsing `percentile_cont(c2/10::numeric)`, c2, an integer column, is a\n> FuncExpr with a numeric return type. That matches the numeric 10, and\n> without the explicit cast, integer-division-related havoc ensues. 
I\n> don't know why it's a FuncExpr, and I don't know why it's not an int,\n> but the constant is definitely a non-string, in any case.\n>\n> In the course of testing, I discovered that the @@ text-search operator\n> works against textified enums on my stock 13.1 server (a \"proper\" enum\n> column yields \"operator does not exist\"). I'm rather wary of actually\n> trying to depend on that behavior, although it seems probably-safe in\n> the same character set and collation.\n\nhello again! My second version of this change (suppressing the cast\nentirely as Tom suggested) seemed to slip under the radar back in March\nand then other matters intervened. I'm still interested in making it\nhappen, though, and now that we're out of another commitfest it seems\nlike a good time to bring it back up. Here's a rebased patch to start.",
"msg_date": "Wed, 04 Aug 2021 22:38:24 -0400",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "\"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> [ 0001-Suppress-explicit-casts-of-text-constants-in-postgre.patch ]\n\nI took a quick look at this. The restriction to type text seems like\nvery obviously a hack rather than something we actually want; wouldn't\nit mean we fail to act in a large fraction of the cases where we'd\nlike to suppress the cast?\n\nA second problem is that I don't think the business with a const_showtype\ncontext field is safe at all. As you've implemented it here, it would\naffect the entire RHS tree, including constants far down inside complex\nexpressions that have nothing to do with the top-level semantics.\n(I didn't look closely, but I wonder if the regression failure you\nmentioned is associated with that.)\n\nI think that we only want to suppress the cast in cases where\n(1) the constant is directly an operand of the operator we're\nexpecting the remote parser to use its same-type heuristic for, and\n(2) the constant will be deparsed as a string literal. (If it's\ndeparsed as a number, boolean, etc, then it won't be initially\nUNKNOWN, so that heuristic won't be applied.)\n\nNow point 1 means that we don't really need to mess with keeping\nstate in the recursion context. If we've determined at the level\nof the OpExpr that we can do this, including checking that the\nRHS operand IsA(Const), then we can just invoke deparseConst() on\nit directly instead of recursing via deparseExpr().\n\nMeanwhile, I suspect that point 2 might be best checked within\ndeparseConst() itself, as that contains both the decision and the\nmechanism about how the Const will be printed. So that suggests\nthat we should invent a new showtype code telling deparseConst()\nto act this way, and then supply that code directly when we\ninvoke deparseConst directly from deparseOpExpr.\n\nBTW, don't we also want to be able to optimize cases where the Const\nis on the LHS rather than the RHS?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Sep 2021 18:43:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "On Sun Sep 5, 2021 at 6:43 PM EDT, Tom Lane wrote:\n> \"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> > [ 0001-Suppress-explicit-casts-of-text-constants-in-postgre.patch ]\n>\n> I took a quick look at this. The restriction to type text seems like\n> very obviously a hack rather than something we actually want; wouldn't\n> it mean we fail to act in a large fraction of the cases where we'd\n> like to suppress the cast?\n>\n> A second problem is that I don't think the business with a\n> const_showtype\n> context field is safe at all. As you've implemented it here, it would\n> affect the entire RHS tree, including constants far down inside complex\n> expressions that have nothing to do with the top-level semantics.\n> (I didn't look closely, but I wonder if the regression failure you\n> mentioned is associated with that.)\n>\n> I think that we only want to suppress the cast in cases where\n> (1) the constant is directly an operand of the operator we're\n> expecting the remote parser to use its same-type heuristic for, and\n> (2) the constant will be deparsed as a string literal. (If it's\n> deparsed as a number, boolean, etc, then it won't be initially\n> UNKNOWN, so that heuristic won't be applied.)\n>\n> Now point 1 means that we don't really need to mess with keeping\n> state in the recursion context. If we've determined at the level\n> of the OpExpr that we can do this, including checking that the\n> RHS operand IsA(Const), then we can just invoke deparseConst() on\n> it directly instead of recursing via deparseExpr().\n>\n> Meanwhile, I suspect that point 2 might be best checked within\n> deparseConst() itself, as that contains both the decision and the\n> mechanism about how the Const will be printed. 
So that suggests\n> that we should invent a new showtype code telling deparseConst()\n> to act this way, and then supply that code directly when we\n> invoke deparseConst directly from deparseOpExpr.\n>\n> BTW, don't we also want to be able to optimize cases where the Const\n> is on the LHS rather than the RHS?\n>\n> regards, tom lane\n\nThanks Tom, that makes way more sense! I've attached a new patch which\ntests operands and makes sure one side is a Const before feeding it to\ndeparseConst with a new showtype code, -2. The one regression is gone,\nbut I've left a couple of test output discrepancies for now which\nshowcase lost casts on the following predicates:\n\n* date(c5) = '1970-01-17'::date\n* ctid = '(0,2)'::tid\n\nThese aren't exactly failures -- both implicit string comparisons work\njust fine -- but I don't know Postgres well enough to be sure that\nthat's true more generally. I did try checking that the non-Const member\nof the predicate is a Var; that left the date cast alone, since date(c5)\nis a FuncExpr, but obviously can't do anything about the tid.\n\nThere's also an interesting case where `val::text LIKE 'foo'` works when\nval is an enum column in the local table, and breaks, castless, with an\noperator mismatch when it's altered to text: Postgres' statement parser\nrecognizes the cast as redundant and creates a Var node instead of a\nRelabelType (as it will for, say, `val::varchar(10)`) before the FDW is\neven in the picture. 
It's a little discomfiting, but I suppose a certain\nlevel of \"caveat emptor\" entails when disregarding foreign types.\n\n> (val as enum on local and remote)\n> explain verbose select * from test where (val::text) like 'foo';\n> \n> Foreign Scan on public.test (cost=100.00..169.06 rows=8 width=28)\n> Output: id, val, on_day, ts, ts2\n> Filter: ((test.val)::text ~~ 'foo'::text)\n> Remote SQL: SELECT id, val, on_day, ts, ts2 FROM public.test\n>\n> (val as local text, remote enum)\n> explain verbose select * from test where (val::text) like 'foo';\n> \n> Foreign Scan on public.test (cost=100.00..122.90 rows=5 width=56)\n> Output: id, val, on_day, ts, ts2\n> Remote SQL: SELECT id, val, on_day, ts, ts2 FROM public.test WHERE ((val ~~ 'foo'))\n>\n> explain verbose select * from test where (val::varchar(10)) like 'foo';\n>\n> Foreign Scan on public.test (cost=100.00..125.46 rows=5 width=56)\n> Output: id, val, on_day, ts, ts2\n> Remote SQL: SELECT id, val, on_day, ts, ts2 FROM public.test WHERE ((val::character varying(10) ~~ 'foo'))\n\nOutside that, deparseConst also contains a note about keeping the code\nin sync with the parser (make_const in particular); from what I could\ntell, I don't think there's anything in this that necessitates changes\nthere.",
"msg_date": "Sun, 24 Oct 2021 01:10:52 -0400",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "\"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> Thanks Tom, that makes way more sense! I've attached a new patch which\n> tests operands and makes sure one side is a Const before feeding it to\n> deparseConst with a new showtype code, -2. The one regression is gone,\n> but I've left a couple of test output discrepancies for now which\n> showcase lost casts on the following predicates:\n\n> * date(c5) = '1970-01-17'::date\n> * ctid = '(0,2)'::tid\n\n> These aren't exactly failures -- both implicit string comparisons work\n> just fine -- but I don't know Postgres well enough to be sure that\n> that's true more generally.\n\nThese seem fine to me. The parser heuristic that we're relying on\nacts at the level of the operator --- it doesn't really care whether\nthe other input argument is a simple Var or not.\n\nNote that we're *not* doing an \"implicit string comparison\" in either\ncase. The point here is that the remote parser will resolve the\nunknown-type literal as being the same type as the other operator input,\nthat is date or tid in these two cases.\n\nThat being the goal, I think you don't have the logic right at all,\neven if it happens to accidentally work in the tested cases. We\ncan only drop the cast if it's a binary operator and the two inputs\nare of the same type. Testing \"leftType == form->oprleft\" is pretty\nclose to a no-op, because the input will have been coerced to the\noperator's expected type. And the code as you had it could do\nindefensible things with a unary operator. (It's probably hard to\nget here with a unary operator applied to a constant, but I'm not\nsure it's impossible.)\n\nAttached is a rewrite that does what I think we want to do, and\nalso adds comments because there weren't any.\n\nNow that I've looked this over I'm starting to feel uncomfortable\nagain, because we can't actually be quite sure about how the remote\nparser's heuristic will act. 
What we're checking is that leftType\nand rightType match, but that condition is applied to the inputs\n*after implicit type coercion to the operator's input types*.\nWe can't be entirely sure about what our parser saw to begin with.\nPerhaps it'd be a good idea to strip any implicit coercions on\nthe non-Const input before checking its type. I'm not sure how\nmuch that helps though. For one thing, by the time this code\nsees the expression, eval_const_expressions could have collapsed\ncoercion steps in a way that obscures how it looked originally.\nFor another thing, in the cases we're interested in, it's kind of\na stretch to suppose that implicit coercions applied locally are\na good model of the way things will look to the remote parser.\n\nSo I'm feeling a bit itchy. I'm still willing to push forward\nwith this, but I won't be terribly surprised if it breaks cases\nthat ought to work and we end up having to revert it.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 02 Nov 2021 18:39:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "I wrote:\n> Now that I've looked this over I'm starting to feel uncomfortable\n> again, because we can't actually be quite sure about how the remote\n> parser's heuristic will act.\n\nActually ... we could make that a lot safer by insisting that the\nother input be a plain Var, which'd necessarily be a column of the\nforeign table. That would still cover most cases of practical\ninterest, I think, and it would remove any question of whether\nimplicit coercions had snuck in. It's more restrictive than I'd\nreally like, but I think it's less likely to cause problems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Nov 2021 19:10:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "On Tue Nov 2, 2021 at 7:10 PM EDT, Tom Lane wrote:\n> I wrote:\n> > Now that I've looked this over I'm starting to feel uncomfortable\n> > again, because we can't actually be quite sure about how the remote\n> > parser's heuristic will act.\n>\n> Actually ... we could make that a lot safer by insisting that the\n> other input be a plain Var, which'd necessarily be a column of the\n> foreign table. That would still cover most cases of practical\n> interest, I think, and it would remove any question of whether\n> implicit coercions had snuck in. It's more restrictive than I'd\n> really like, but I think it's less likely to cause problems.\n\nHere's v6! I started with restricting cast suppression to Const-Var\ncomparisons as you suggested. A few tests did regress (relative to the\nunrestricted version) right out of the gate with comparisons to varchar\ncolumns, since those become RelabelType nodes instead of Vars. After\nreading the notes on RelabelType in primnodes.h, I *think* that that\n\"dummy\" coercion is distinct from the operator input type coercion\nyou're talking about here:\n\n> What we're checking is that leftType and rightType match, but that\n> condition is applied to the inputs *after implicit type coercion to\n> the operator's input types*. We can't be entirely sure about what our\n> parser saw to begin with. Perhaps it'd be a good idea to strip any\n> implicit coercions on the non-Const input before checking its type.\n\nI allowed RelabelTypes over Vars to suppress casts as well. It's working\nfor me so far and the varchar comparison tests are back to passing, sans\ncasts.",
"msg_date": "Sun, 07 Nov 2021 18:31:39 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "\"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> On Tue Nov 2, 2021 at 7:10 PM EDT, Tom Lane wrote:\n>> Actually ... we could make that a lot safer by insisting that the\n>> other input be a plain Var, which'd necessarily be a column of the\n>> foreign table. That would still cover most cases of practical\n>> interest, I think, and it would remove any question of whether\n>> implicit coercions had snuck in. It's more restrictive than I'd\n>> really like, but I think it's less likely to cause problems.\n\n> I allowed RelabelTypes over Vars to suppress casts as well. It's working\n> for me so far and the varchar comparison tests are back to passing, sans\n> casts.\n\nUm. I doubt that that's any safer than the v5 patch. As an example,\ncasting between int4 and oid is just a RelabelType, but the comparison\nsemantics change completely (signed vs. unsigned); so there's not a\ngood reason to think this is constraining things more than v5 did.\n\nIt might be better if you'd further restricted the structure to be only\nCOERCE_IMPLICIT_CAST RelabelTypes, since we don't normally make casts\nimplicit if they significantly change semantics. Also, this'd ensure\nthat the operand printed for the remote server is just a bare Var\n(cf. deparseRelabelType). But even with that I'm feeling antsy about\nwhether this will allow any semantic surprises.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Nov 2021 16:50:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "On Mon Nov 8, 2021 at 4:50 PM EST, Tom Lane wrote:\n> Um. I doubt that that's any safer than the v5 patch. As an example,\n> casting between int4 and oid is just a RelabelType, but the comparison\n> semantics change completely (signed vs. unsigned); so there's not a\n> good reason to think this is constraining things more than v5 did.\n>\n> It might be better if you'd further restricted the structure to be only\n> COERCE_IMPLICIT_CAST RelabelTypes, since we don't normally make casts\n> implicit if they significantly change semantics. Also, this'd ensure\n> that the operand printed for the remote server is just a bare Var\n> (cf. deparseRelabelType). But even with that I'm feeling antsy about\n> whether this will allow any semantic surprises.\n\nI've split the suppression for RelabelTypes with implicit cast check\ninto a second patch over the core v7 change. As far as testing goes, \\dC\nlists implicit casts, but most of those I've tried seem to wind up\ndeparsing as Vars. I've been able to manifest RelabelTypes with varchar,\ncidr, and remote char to local varchar, but that's about it. Any ideas\nfor validating it further, off the top of your head?",
"msg_date": "Wed, 10 Nov 2021 23:58:44 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "\"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> I've split the suppression for RelabelTypes with implicit cast check\n> into a second patch over the core v7 change. As far as testing goes, \\dC\n> lists implicit casts, but most of those I've tried seem to wind up\n> deparsing as Vars. I've been able to manifest RelabelTypes with varchar,\n> cidr, and remote char to local varchar, but that's about it. Any ideas\n> for validating it further, off the top of your head?\n\nI thought about this some more and realized exactly why I wanted to\nrestrict the change to cases where the other side is a plain foreign Var:\nthat way, if anything surprising happens, we can blame it directly on the\nuser having declared a local column with a different type from the\nremote column.\n\nThat being the case, I took a closer look at deparseVar and realized that\nwe can't simply check \"IsA(node, Var)\": some Vars in the expression can\nbelong to local tables. We need to verify that the Var is one that will\nprint as a remote column reference.\n\nSo that leads me to v8, attached. I think we are getting there.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 11 Nov 2021 15:36:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "On Thu Nov 11, 2021 at 3:36 PM EST, Tom Lane wrote:\n> I thought about this some more and realized exactly why I wanted to\n> restrict the change to cases where the other side is a plain foreign\n> Var: that way, if anything surprising happens, we can blame it\n> directly on the user having declared a local column with a different\n> type from the remote column.\n>\n> That being the case, I took a closer look at deparseVar and realized\n> that we can't simply check \"IsA(node, Var)\": some Vars in the\n> expression can belong to local tables. We need to verify that the Var\n> is one that will print as a remote column reference.\n\nEminently reasonable all around! `git apply` insisted that the v8 patch\ndidn't (apply, that is), but `patch -p1` liked it fine. I've put it\nthrough a few paces and it seems good; what needs to happen next?\n\n\n",
"msg_date": "Thu, 11 Nov 2021 20:07:14 -0500",
"msg_from": "\"Dian M Fay\" <dian.m.fay@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
},
{
"msg_contents": "\"Dian M Fay\" <dian.m.fay@gmail.com> writes:\n> Eminently reasonable all around! `git apply` insisted that the v8 patch\n> didn't (apply, that is), but `patch -p1` liked it fine. I've put it\n> through a few paces and it seems good; what needs to happen next?\n\nI don't see anything else to do except shove it out into the light\nof day and see what happens. Hence, pushed.\n\nAs I remarked in the commit message:\n\n>> One point that I (tgl) remain slightly uncomfortable with is that we\n>> will ignore an implicit RelabelType when deciding if the non-Const input\n>> is a plain Var. That makes it a little squishy to argue that the remote\n>> should resolve the Const as being of the same type as its Var, because\n>> then our Const is not the same type as our Var. However, if we don't do\n>> that, then this hack won't work as desired if the user chooses to use\n>> varchar rather than text to represent some remote column. That seems\n>> useful, so do it like this for now. We might have to give up the\n>> RelabelType-ignoring bit if any problems surface.\n\nI think we can await complaints before doing more, but I wanted that\nbit on record for anyone perusing the archives.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Nov 2021 11:54:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] postgres_fdw: suppress explicit casts in text:text\n comparisons (was: column option to override foreign types)"
}
] |
[
{
"msg_contents": "Hi,\n\nThere is (to my knowledge) no direct way to get the `CREATE DATABASE`\nand assorted `GRANT foo ON DATABASE` etc. commands out of a pg_dump\nwithout having to edit the TOC or filter the SQL output with e.g. grep.\n\nIt is not part of pg_dumpall -g, and if one uses pg_dump / pg_dumpall -s\n-C, one gets all definitions for all database objects.\n\nSo I propose a small additional option --create-only, which only dumps\nthe create-related commands, e.g.:\n\npostgres=# CREATE DATABASE test;\nCREATE DATABASE\npostgres=# CREATE USER test;\nCREATE ROLE\npostgres=# GRANT CONNECT ON DATABASE test TO test;\nGRANT\npostgres=# \\q\npostgres@kohn:~$ pg_dump --create-only -p 65432 -d test -h /tmp | egrep -v '^($|--|SET)' \nSELECT pg_catalog.set_config('search_path', '', false);\nCREATE DATABASE test WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE = 'de_DE.UTF-8';\nALTER DATABASE test OWNER TO postgres;\n\\connect test\nSELECT pg_catalog.set_config('search_path', '', false);\nGRANT CONNECT ON DATABASE test TO test;\npostgres@kohn:~$\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz",
"msg_date": "Mon, 01 Mar 2021 11:12:49 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add --create-only option to pg_dump/pg_dumpall"
},
{
"msg_contents": "On 01.03.21 11:12, Michael Banck wrote:\n> postgres@kohn:~$ pg_dump --create-only -p 65432 -d test -h /tmp | egrep -v '^($|--|SET)'\n> SELECT pg_catalog.set_config('search_path', '', false);\n> CREATE DATABASE test WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE = 'de_DE.UTF-8';\n> ALTER DATABASE test OWNER TO postgres;\n> \\connect test\n> SELECT pg_catalog.set_config('search_path', '', false);\n> GRANT CONNECT ON DATABASE test TO test;\n\nI find this option name confusing, because evidently it prints out \nthings that are not CREATE commands. For example, an intuitive idea of \n\"create only\" might be to omit GRANT commands.\n\n\n",
"msg_date": "Wed, 3 Mar 2021 18:21:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --create-only option to pg_dump/pg_dumpall"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHi\r\n\r\nI have tried the patch and the new option is able to control the contents of pg_dump outputs to include only create db related commands. I also agree that the option name is a little misleading to the user so I would suggest instead of using \"create-only\", you can say something maybe like \"createdb-only\" because this option only applies to CREATE DATABASE related commands, not CREATE TABLE or other objects. In the help menu, you can then elaborate more that this option \"dump only the commands related to create database like ALTER, GRANT..etc\"\r\n\r\nCary Huang\r\n-------------\r\nHighGo Software Inc. (Canada)\r\ncary.huang@highgo.ca\r\nwww.highgo.ca",
"msg_date": "Mon, 29 Mar 2021 17:59:00 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --create-only option to pg_dump/pg_dumpall"
},
{
"msg_contents": "Hi,\n\nAm Montag, den 29.03.2021, 17:59 +0000 schrieb Cary Huang:\n> I have tried the patch and the new option is able to control the\n> contents of pg_dump outputs to include only create db related\n> commands. \n\nThanks for testing!\n\n> I also agree that the option name is a little misleading to the user\n> so I would suggest instead of using \"create-only\", you can say\n> something maybe like \"createdb-only\" because this option only applies\n> to CREATE DATABASE related commands, not CREATE TABLE or other\n> objects. In the help menu, you can then elaborate more that this\n> option \"dump only the commands related to create database like ALTER,\n> GRANT..etc\"\n\nWell I have to say I agree with Peter that the option name I came up\nwith is pretty confusing, not sure createdb-only is much better as it\nalso includes GRANTs etc.\n\nI think from a technical POV it's useful as it closes a gap between\npg_dumpall -g and pg_dump -Fc $DATABASE in my opinion, without having to\npotentially schema-dump and filter out a large number of database\nobjects.\n\nAnybody else have some opinions on what to call this best? Maybe just a\nshort option and some explanatory text in --help along with it?\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n\n",
"msg_date": "Tue, 30 Mar 2021 18:02:22 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add --create-only option to pg_dump/pg_dumpall"
},
{
"msg_contents": "On Tue, Mar 30, 2021 at 6:02 PM Michael Banck <michael.banck@credativ.de> wrote:\n>\n> Hi,\n>\n> Am Montag, den 29.03.2021, 17:59 +0000 schrieb Cary Huang:\n> > I have tried the patch and the new option is able to control the\n> > contents of pg_dump outputs to include only create db related\n> > commands.\n>\n> Thanks for testing!\n>\n> > I also agree that the option name is a little misleading to the user\n> > so I would suggest instead of using \"create-only\", you can say\n> > something maybe like \"createdb-only\" because this option only applies\n> > to CREATE DATABASE related commands, not CREATE TABLE or other\n> > objects. In the help menu, you can then elaborate more that this\n> > option \"dump only the commands related to create database like ALTER,\n> > GRANT..etc\"\n>\n> Well I have to say I agree with Peter that the option name I came up\n> with is pretty confusing, not sure createdb-only is much better as it\n> also includes GRANTs etc.\n>\n> I think from a technical POV it's useful as it closes a gap between\n> pg_dumpall -g and pg_dump -Fc $DATABASE in my opinion, without having to\n> potentially schema-dump and filter out a large number of database\n> objects.\n>\n> Anybody else have some opinions on what to call this best? Maybe just a\n> short option and some explanatory text in --help along with it?\n\nMaybe --database-globals or something like that?\n\nOther than the name (which might be influenced by this), shouldn't\nthis functionality be in pg_restore as well? That is, if I make a\npg_dump in custom format, I would want to be able to extract that\ninformation from the dump as well?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 8 Apr 2021 11:46:38 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --create-only option to pg_dump/pg_dumpall"
},
{
"msg_contents": "Hi,\n\nI have reviewed and tested the patch. Following are a few comments.\n\n1.\nThe main objective of this patch is to get the dump which consists of SQLs\nrelated to\nCREATEDB only. I have tested the patch and it generates a proper dump file.\nBut my\nconcern is, it should execute the code which is necessary. But I see that\nthe code\nis preparing some data which we may not dump. So I feel we should avoid\nexecuting\nsuch flows. Please correct me if I am wrong.\n\n2.\n> > I also agree that the option name is a little misleading to the user\n> > so I would suggest instead of using \"create-only\", you can say\n> > something maybe like \"createdb-only\" because this option only applies\n> > to CREATE DATABASE related commands, not CREATE TABLE or other\n> > objects. In the help menu, you can then elaborate more that this\n> > option \"dump only the commands related to create database like ALTER,\n> > GRANT..etc\"\n>\n> Well I have to say I agree with Peter that the option name I came up\n> with is pretty confusing, not sure createdb-only is much better as it\n> also includes GRANTs etc.\n\nI agree with Cary that we should name this as 'createdb-only' and provide a\nbrief\ndescription in help.\n\n3.\n if (!plainText)\n dopt.outputCreateDB = 1;\n\n+ if (dopt.outputCreateDBOnly)\n+ dopt.outputCreateDB = 1;\n+\n\n'dopt.outputCreateDBOnly' if block can be merged with '!plainText' if block.\n\n4.\n static int binary_upgrade = 0;\n static int column_inserts = 0;\n+static int create_only = 0;\n static int disable_dollar_quoting = 0;\n\nThe variable 'create_only' should be changed to 'createdb_only' to match\nwith\nsimilar variable used in pg_dump.c.\n\nThanks and Regards,\nNitin Jadhav\n\nOn Tue, Mar 30, 2021 at 9:32 PM Michael Banck <michael.banck@credativ.de>\nwrote:\n\n> Hi,\n>\n> Am Montag, den 29.03.2021, 17:59 +0000 schrieb Cary Huang:\n> > I have tried the patch and the new option is able to control the\n> > contents of pg_dump outputs to include 
only create db related\n> > commands.\n>\n> Thanks for testing!\n>\n> > I also agree that the option name is a little misleading to the user\n> > so I would suggest instead of using \"create-only\", you can say\n> > something maybe like \"createdb-only\" because this option only applies\n> > to CREATE DATABASE related commands, not CREATE TABLE or other\n> > objects. In the help menu, you can then elaborate more that this\n> > option \"dump only the commands related to create database like ALTER,\n> > GRANT..etc\"\n>\n> Well I have to say I agree with Peter that the option name I came up\n> with is pretty confusing, not sure createdb-only is much better as it\n> also includes GRANTs etc.\n>\n> I think from a technical POV it's useful as it closes a gap between\n> pg_dumpall -g and pg_dump -Fc $DATABASE in my opinion, without having to\n> potentially schema-dump and filter out a large number of database\n> objects.\n>\n> Anybody else have some opinions on what to call this best? Maybe just a\n> short option and some explanatory text in --help along with it?\n>\n>\n> Michael\n>\n> --\n> Michael Banck\n> Projektleiter / Senior Berater\n> Tel.: +49 2166 9901-171\n> Fax: +49 2166 9901-100\n> Email: michael.banck@credativ.de\n>\n> credativ GmbH, HRB Mönchengladbach 12080\n> USt-ID-Nummer: DE204566209\n> Trompeterallee 108, 41189 Mönchengladbach\n> Geschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n>\n> Unser Umgang mit personenbezogenen Daten unterliegt\n> folgenden Bestimmungen: https://www.credativ.de/datenschutz\n>\n>\n>\n>",
"msg_date": "Fri, 9 Apr 2021 19:04:58 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --create-only option to pg_dump/pg_dumpall"
},
{
"msg_contents": "> On 9 Apr 2021, at 15:34, Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:\n\n> I have reviewed and tested the patch. Following are a few comments.\n\nThis review has gone unanswered since April, has been WoA since early April and\nthe patch no longer applies. I'm marking this Returned with Feedback, a new\nversion can be submitted for a future CF once the issues have been resolved.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 13 Sep 2021 15:26:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add --create-only option to pg_dump/pg_dumpall"
}
] |
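Until an option like the proposed `--create-only` exists, the database-level commands can be approximated client-side, in the spirit of Michael's egrep pipeline. A minimal sketch, assuming plain-format `pg_dump --create` output; the sample dump text and the allowlist pattern below are illustrative, not exhaustive (e.g. `COMMENT ON DATABASE` or `ALTER DATABASE ... SET` lines would need covering too):

```python
import re

# Abbreviated, hypothetical plain-format dump as produced with pg_dump --create.
dump = """\
SET client_encoding = 'UTF8';
CREATE DATABASE test WITH TEMPLATE = template0 ENCODING = 'UTF8';
ALTER DATABASE test OWNER TO postgres;
\\connect test
CREATE TABLE t (id integer NOT NULL);
GRANT CONNECT ON DATABASE test TO test;
"""

# Keep only the database-level commands: an allowlist, rather than egrep's
# denylist of comments/SET lines, so table definitions etc. drop out.
keep = re.compile(r"^(CREATE DATABASE|ALTER DATABASE|"
                  r"GRANT .+ ON DATABASE|REVOKE .+ ON DATABASE)\b")
create_commands = [line for line in dump.splitlines() if keep.match(line)]
print("\n".join(create_commands))
```

This is essentially what Magnus' suggestion would also imply for pg_restore: the same filter applied to the archive's TOC rather than to SQL text.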
[
{
"msg_contents": "Hello, some improvements in the prepared statements would facilitate\ntheir use from applications:\n\n- Use of table and column names in prepared statements.\n\n Example: select # from # where # = ?;\n\n- Use of arrays in prepared statements.\n\n Example: select # from article where id in (?);\n\n # = author,title\n ? = 10,24,45\n\nBest regards.\nAlejandro Sánchez.\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 15:20:31 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Improvements in prepared statements"
},
{
"msg_contents": "Hi\n\npo 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com>\nnapsal:\n\n> Hello, some improvements in the prepared statements would facilitate\n> their use from applications:\n>\n> - Use of table and column names in prepared statements.\n>\n> Example: select # from # where # = ?;\n>\n> - Use of arrays in prepared statements.\n>\n> Example: select # from article where id in (?);\n>\n> # = author,title\n> ? = 10,24,45\n>\n\nThe server side prepared statements are based on reusing execution plans.\nYou cannot reuse execution plans if you change table, or column. This is\nthe reason why SQL identifiers are immutable in prepared statements. There\nare client side prepared statements - JDBC does it. There it is possible.\nBut it is impossible on the server side. Prepared statements are like a\ncompiled program. You can change parameters, variables - but you cannot\nchange the program.\n\nRegards\n\nPavel\n\n\n\n\n>\n> Best regards.\n> Alejandro Sánchez.\n>\n>\n>\n>\n\nHipo 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com> napsal:Hello, some improvements in the prepared statements would facilitate\ntheir use from applications:\n\n- Use of table and column names in prepared statements.\n\n Example: select # from # where # = ?;\n\n- Use of arrays in prepared statements.\n\n Example: select # from article where id in (?);\n\n # = author,title\n ? = 10,24,45The server side prepared statements are based on reusing execution plans. You cannot reuse execution plans if you change table, or column. This is the reason why SQL identifiers are immutable in prepared statements. There are client side prepared statements - JDBC does it. There it is possible. But it is impossible on the server side. Prepared statements are like a compiled program. You can change parameters, variables - but you cannot change the program.RegardsPavel \n\nBest regards.\nAlejandro Sánchez.",
"msg_date": "Mon, 1 Mar 2021 15:31:43 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvements in prepared statements"
},
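The client-side rewriting Pavel describes (and that JDBC-style drivers perform internally) can be sketched in a few lines. This is a hypothetical illustration, not a real driver API: `quote_ident` and `expand` are made-up helpers, the `#`/`(?)` template syntax is Alejandro's proposal, and the output uses PostgreSQL's `$n` placeholder style.

```python
# Hypothetical sketch of client-side statement rewriting: '#' slots are
# replaced by safely quoted identifiers BEFORE the statement reaches the
# server's PREPARE, while the IN (?) list expands to numbered parameters.

def quote_ident(name: str) -> str:
    # Standard SQL identifier quoting: wrap in double quotes and double
    # any embedded double quotes.
    return '"' + name.replace('"', '""') + '"'

def expand(template: str, idents: list, n_values: int) -> str:
    # Each '#' takes one identifier (or a list of them, joined by commas);
    # the single '(?)' becomes ($1, ..., $n) for the value parameters.
    for ident in idents:
        names = ident if isinstance(ident, list) else [ident]
        quoted = ', '.join(quote_ident(n) for n in names)
        template = template.replace('#', quoted, 1)
    placeholders = ', '.join(f'${i}' for i in range(1, n_values + 1))
    return template.replace('(?)', f'({placeholders})', 1)

sql = expand('select # from # where id in (?)',
             [['author', 'title'], 'article'], 3)
print(sql)
# select "author", "title" from "article" where id in ($1, $2, $3)
```

The values (10, 24, 45) would then be bound as `$1`..`$3` through the normal parameter path; only the identifiers are fixed at rewrite time, which is exactly why the server-side prepared statement can still cache a plan for them.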
{
"msg_contents": "Hello, as far as I know it is not done in JDBC, in many frameworks it\nis.Although the execution plans cannot be reused it would be\nsomethingvery useful. It is included in a lot of frameworks and is a\nrecurrentquestion in database forums. It would be nice if it was\nincluded in plain SQL.\nBest regards.Alejandro Sánchez.\nEl lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n> Hi\n> \n> po 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <\n> alex@nexttypes.com> napsal:\n> > Hello, some improvements in the prepared statements would\n> > facilitate\n> > \n> > their use from applications:\n> > \n> > \n> > \n> > - Use of table and column names in prepared statements.\n> > \n> > \n> > \n> > Example: select # from # where # = ?;\n> > \n> > \n> > \n> > - Use of arrays in prepared statements.\n> > \n> > \n> > \n> > Example: select # from article where id in (?);\n> > \n> > \n> > \n> > # = author,title\n> > \n> > ? = 10,24,45\n> \n> The server side prepared statements are based on reusing execution\n> plans. You cannot reuse execution plans if you change table, or\n> column. This is the reason why SQL identifiers are immutable in\n> prepared statements. There are client side prepared statements - JDBC\n> does it. There it is possible. But it is impossible on the server\n> side. Prepared statements are like a compiled program. You can change\n> parameters, variables - but you cannot change the program.\n> \n> Regards\n> Pavel\n> \n> \n> \n> > \n> > Best regards.\n> > \n> > Alejandro Sánchez.\n> > \n> > \n> > \n> > \n> > \n> > \n> > \n\nHello, as far as I know it is not done in JDBC, in many frameworks it is.Although the execution plans cannot be reused it would be somethingvery useful. It is included in a lot of frameworks and is a recurrentquestion in database forums. It would be nice if it was included in plain SQL.Best regards.Alejandro Sánchez.El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:Hipo 1. 3. 
2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com> napsal:Hello, some improvements in the prepared statements would facilitate\ntheir use from applications:\n\n- Use of table and column names in prepared statements.\n\n Example: select # from # where # = ?;\n\n- Use of arrays in prepared statements.\n\n Example: select # from article where id in (?);\n\n # = author,title\n ? = 10,24,45The server side prepared statements are based on reusing execution plans. You cannot reuse execution plans if you change table, or column. This is the reason why SQL identifiers are immutable in prepared statements. There are client side prepared statements - JDBC does it. There it is possible. But it is impossible on the server side. Prepared statements are like a compiled program. You can change parameters, variables - but you cannot change the program.RegardsPavel \n\nBest regards.\nAlejandro Sánchez.",
"msg_date": "Mon, 1 Mar 2021 16:39:20 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvements in prepared statements"
},
{
"msg_contents": "po 1. 3. 2021 v 16:39 odesílatel Alejandro Sánchez <alex@nexttypes.com>\nnapsal:\n\n> Hello, as far as I know it is not done in JDBC, in many frameworks it is.\n>\n> Although the execution plans cannot be reused it would be something\n>\n> very useful. It is included in a lot of frameworks and is a recurrent <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>\n> question in database forums <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>. It would be nice if it was included in plain\n>\n> SQL.\n>\n>\nI am very sceptical about it. What benefit do you expect? When you cannot\nreuse an execution plan, then there is not any benefit of this. Then you\ndon't need prepared statements, and all this API is useless. So some\nquestions are frequent and don't mean necessity to redesign. 
The developers\njust miss the fundamental knowledge of database technology.\n\nRegards\n\nPavel\n\n\n> Best regards.\n>\n> Alejandro Sánchez.\n>\n>\n> El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n>\n> Hi\n>\n> po 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> Hello, some improvements in the prepared statements would facilitate\n> their use from applications:\n>\n> - Use of table and column names in prepared statements.\n>\n> Example: select # from # where # = ?;\n>\n> - Use of arrays in prepared statements.\n>\n> Example: select # from article where id in (?);\n>\n> # = author,title\n> ? = 10,24,45\n>\n>\n> The server side prepared statements are based on reusing execution plans.\n> You cannot reuse execution plans if you change table, or column. This is\n> the reason why SQL identifiers are immutable in prepared statements. There\n> are client side prepared statements - JDBC does it. There it is possible.\n> But it is impossible on the server side. Prepared statements are like a\n> compiled program. You can change parameters, variables - but you cannot\n> change the program.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> Best regards.\n> Alejandro Sánchez.\n>\n>\n>\n",
"msg_date": "Mon, 1 Mar 2021 16:46:20 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvements in prepared statements"
},
{
"msg_contents": "The benefit is ease of use. One of the great advantages of prepared\nstatements is nothaving to concatenate strings. The use of arrays would\nalso be very useful. \nquery(\"select \" + column1 + \",\" + column2 from \" \" + table + \" where id\nin (?), ids);\nVS\nquery(\"select # from # where id in (?)\", columns, table, ids);\nAnd it doesn't have to be done with prepared statements, it can just be\nanother SQL syntax.\nEl lun, 01-03-2021 a las 16:46 +0100, Pavel Stehule escribió:\n> po 1. 3. 2021 v 16:39 odesílatel Alejandro Sánchez <\n> alex@nexttypes.com> napsal:\n> > Hello, as far as I know it is not done in JDBC, in many frameworks\n> > it is.Although the execution plans cannot be reused it would be\n> > somethingvery useful. It is included in a lot of frameworks and is\n> > a recurrentquestion in database forums. It would be nice if it was\n> > included in plain SQL.\n> \n> I am very sceptical about it. What benefit do you expect? When you\n> cannot reuse an execution plan, then there is not any benefit of\n> this. Then you don't need prepared statements, and all this API is\n> useless. So some questions are frequent and don't mean necessity to\n> redesign. The developers just miss the fundamental knowledge of\n> database technology. \n> \n> Regards\n> Pavel\n> \n> > Best regards.Alejandro Sánchez.\n> > El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n> > > Hi\n> > > \n> > > po 1. 3. 
2021 v 15:20 odesílatel Alejandro Sánchez <\n> > > alex@nexttypes.com> napsal:\n> > > > Hello, some improvements in the prepared statements would\n> > > > facilitate\n> > > > \n> > > > their use from applications:\n> > > > \n> > > > \n> > > > \n> > > > - Use of table and column names in prepared statements.\n> > > > \n> > > > \n> > > > \n> > > > Example: select # from # where # = ?;\n> > > > \n> > > > \n> > > > \n> > > > - Use of arrays in prepared statements.\n> > > > \n> > > > \n> > > > \n> > > > Example: select # from article where id in (?);\n> > > > \n> > > > \n> > > > \n> > > > # = author,title\n> > > > \n> > > > ? = 10,24,45\n> > > \n> > > The server side prepared statements are based on reusing\n> > > execution plans. You cannot reuse execution plans if you change\n> > > table, or column. This is the reason why SQL identifiers are\n> > > immutable in prepared statements. There are client side prepared\n> > > statements - JDBC does it. There it is possible. But it is\n> > > impossible on the server side. Prepared statements are like a\n> > > compiled program. You can change parameters, variables - but you\n> > > cannot change the program.\n> > > \n> > > Regards\n> > > Pavel\n> > > \n> > > \n> > > \n> > > > \n> > > > Best regards.\n> > > > \n> > > > Alejandro Sánchez.\n> > > > \n> > > > \n> > > > \n> > > > \n> > > > \n> > > > \n> > > > \n",
"msg_date": "Mon, 1 Mar 2021 17:08:17 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvements in prepared statements"
},
{
"msg_contents": "po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <alex@nexttypes.com>\nnapsal:\n\n> The benefit is ease of use. One of the great advantages of prepared statements is not\n>\n> having to concatenate strings. The use of arrays would also be very useful.\n>\n>\n> query(\"select \" + column1 + \",\" + column2 from \" \" + table + \" where id in (?), ids);\n>\n>\n> VS\n>\n>\n> query(\"select # from # where id in (?)\", columns, table, ids);\n>\n>\n> And it doesn't have to be done with prepared statements, it can just be another SQL syntax.\n>\n>\nThis is not too strong an argument - any language (and Database API) has\nnecessary functionality now. Just you should use it.\n\nYou can use fprintf in php, format in plpgsql, String.Format in C#, Java,\n...\n\nRegards\n\nPavel\n\n\n\n\n\n> El lun, 01-03-2021 a las 16:46 +0100, Pavel Stehule escribió:\n>\n>\n>\n> po 1. 3. 2021 v 16:39 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> Hello, as far as I know it is not done in JDBC, in many frameworks it is.\n>\n> Although the execution plans cannot be reused it would be something\n>\n> very useful.\n>\n> It is included in a lot of frameworks and is\n>\n> a recurrent\n>\n>\n> <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>\n> question in database forums\n>\n>\n> 
<https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>\n> . I\n>\n> t\n>\n> would be nice if it was included in plain\n>\n> SQL.\n>\n>\n>\n> I am very sceptical about it. What benefit do you expect? When you cannot\n> reuse an execution plan, then there is not any benefit of this. Then you\n> don't need prepared statements, and all this API is useless. So some\n> questions are frequent and don't mean necessity to redesign. The developers\n> just miss the fundamental knowledge of database technology.\n>\n> Regards\n>\n> Pavel\n>\n>\n> Best regards.\n>\n> Alejandro Sánchez.\n>\n>\n> El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n>\n> Hi\n>\n> po 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> Hello, some improvements in the prepared statements would facilitate\n> their use from applications:\n>\n> - Use of table and column names in prepared statements.\n>\n> Example: select # from # where # = ?;\n>\n> - Use of arrays in prepared statements.\n>\n> Example: select # from article where id in (?);\n>\n> # = author,title\n> ? = 10,24,45\n>\n>\n> The server side prepared statements are based on reusing execution plans.\n> You cannot reuse execution plans if you change table, or column. This is\n> the reason why SQL identifiers are immutable in prepared statements. There\n> are client side prepared statements - JDBC does it. There it is possible.\n> But it is impossible on the server side. 
Prepared statements are like a\n> compiled program. You can change parameters, variables - but you cannot\n> change the program.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> Best regards.\n> Alejandro Sánchez.",
"msg_date": "Mon, 1 Mar 2021 17:15:02 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvements in prepared statements"
},
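Pavel's suggestion above — format identifiers on the client (as with `String.Format`) and keep only values as `?` parameters — can be sketched in Java along these lines. `quoteIdent` and `buildQuery` are illustrative names invented here, not part of JDBC or any driver:

```java
// Sketch of client-side identifier formatting: identifiers are spliced
// (and safely quoted) into the SQL text, while values stay as ? parameters,
// so the resulting statement is still preparable per table.
public class IdentFormat {
    // Quote an SQL identifier PostgreSQL-style: wrap in double quotes
    // and double any embedded double quotes.
    static String quoteIdent(String ident) {
        return "\"" + ident.replace("\"", "\"\"") + "\"";
    }

    static String buildQuery(String[] columns, String table) {
        StringBuilder cols = new StringBuilder();
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) cols.append(", ");
            cols.append(quoteIdent(columns[i]));
        }
        return String.format("select %s from %s where id = any(?)",
                cols, quoteIdent(table));
    }
}
```

The point of the quoting step is that identifiers, unlike `?` values, are not protected by the driver, so a helper like this is what stands between convenience and SQL injection.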
{
"msg_contents": "po 1. 3. 2021 v 17:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n>\n>\n> po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n>> The benefit is ease of use. One of the great advantages of prepared statements is not\n>>\n>> having to concatenate strings. The use of arrays would also be very useful.\n>>\n>>\n>> query(\"select \" + column1 + \",\" + column2 from \" \" + table + \" where id in (?), ids);\n>>\n>>\n>>\nThe argument with arrays is not good. You can work with arrays just on\nbinary level, that is more effective. But just you should use operator =\nANY() instead IN.\n\nRegards\n\nPavel\n\n\n\n> VS\n>>\n>>\n>> query(\"select # from # where id in (?)\", columns, table, ids);\n>>\n>>\n>> And it doesn't have to be done with prepared statements, it can just be another SQL syntax.\n>>\n>>\n> This is not too strong an argument - any language (and Database API) has\n> necessary functionality now. Just you should use it.\n>\n> You can use fprintf in php, format in plpgsql, String.Format in C#, Java,\n> ...\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n>> El lun, 01-03-2021 a las 16:46 +0100, Pavel Stehule escribió:\n>>\n>>\n>>\n>> po 1. 3. 
2021 v 16:39 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n>> napsal:\n>>\n>> Hello, as far as I know it is not done in JDBC, in many frameworks it is.\n>>\n>> Although the execution plans cannot be reused it would be something\n>>\n>> very useful.\n>>\n>> It is included in a lot of frameworks and is\n>>\n>> a recurrent\n>>\n>>\n>> <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>>\n>> question in database forums\n>>\n>>\n>> <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>>\n>> . I\n>>\n>> t\n>>\n>> would be nice if it was included in plain\n>>\n>> SQL.\n>>\n>>\n>>\n>> I am very sceptical about it. What benefit do you expect? When you cannot\n>> reuse an execution plan, then there is not any benefit of this. Then you\n>> don't need prepared statements, and all this API is useless. So some\n>> questions are frequent and don't mean necessity to redesign. 
The developers\n>> just miss the fundamental knowledge of database technology.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>> Best regards.\n>>\n>> Alejandro Sánchez.\n>>\n>>\n>> El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n>>\n>> Hi\n>>\n>> po 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n>> napsal:\n>>\n>> Hello, some improvements in the prepared statements would facilitate\n>> their use from applications:\n>>\n>> - Use of table and column names in prepared statements.\n>>\n>> Example: select # from # where # = ?;\n>>\n>> - Use of arrays in prepared statements.\n>>\n>> Example: select # from article where id in (?);\n>>\n>> # = author,title\n>> ? = 10,24,45\n>>\n>>\n>> The server side prepared statements are based on reusing execution plans.\n>> You cannot reuse execution plans if you change table, or column. This is\n>> the reason why SQL identifiers are immutable in prepared statements. There\n>> are client side prepared statements - JDBC does it. There it is possible.\n>> But it is impossible on the server side. Prepared statements are like a\n>> compiled program. You can change parameters, variables - but you cannot\n>> change the program.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>\n>> Best regards.\n>> Alejandro Sánchez.",
"msg_date": "Mon, 1 Mar 2021 17:21:18 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvements in prepared statements"
},
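Pavel's `= ANY()` point can be made concrete with a small contrast. The sketch below (class and method names are illustrative, not from any library) shows why it matters for plan reuse: with `IN`, the SQL text changes with the number of values, so each list size is a different prepared statement, while `= ANY(?)` keeps one constant text bound to a single array parameter:

```java
// Contrast of IN-list placeholders vs a single = ANY(?) array parameter.
public class AnyVsIn {
    // With IN, one ? per value: the statement text varies with list size.
    static String inQuery(int nValues) {
        StringBuilder sb =
            new StringBuilder("select author from article where id in (");
        for (int i = 0; i < nValues; i++) {
            if (i > 0) sb.append(",");
            sb.append("?");
        }
        return sb.append(")").toString();
    }

    // With = ANY(?), the text is the same for 2 ids or 200, so the
    // server-side prepared statement (and its plan) can be reused.
    static String anyQuery() {
        return "select author from article where id = any(?)";
    }
}
```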
{
"msg_contents": "It is a matter of taste. I think this functionality would be better in\nSQL and be the same for all languages without the need to use string\nfunctions.\nEl lun, 01-03-2021 a las 17:15 +0100, Pavel Stehule escribió:\n> po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <\n> alex@nexttypes.com> napsal:\n> > The benefit is ease of use. One of the great advantages of prepared\n> > statements is not having to concatenate strings. The use of arrays\n> > would also be very useful. \n> > query(\"select \" + column1 + \",\" + column2 + \" from \" + table + \"\n> > where id in (?)\", ids);\n> > VS\n> > query(\"select # from # where id in (?)\", columns, table, ids);\n> > \n> > And it doesn't have to be done with prepared statements, it can\n> > just be another SQL syntax.\n> \n> This is not too strong an argument - any language (and Database API)\n> has necessary functionality now. Just you should use it.\n> \n> You can use fprintf in php, format in plpgsql, String.Format in C#,\n> Java, ...\n> \n> Regards\n> \n> Pavel\n> \n> \n> \n> \n> > El lun, 01-03-2021 a las 16:46 +0100, Pavel Stehule escribió:\n> > > po 1. 3. 2021 v 16:39 odesílatel Alejandro Sánchez <\n> > > alex@nexttypes.com> napsal:\n> > > > Hello, as far as I know it is not done in JDBC, in many\n> > > > frameworks it is. Although the execution plans cannot be reused\n> > > > it would be something very useful. It is included in a lot of\n> > > > frameworks and is a recurrent question in database forums. It\n> > > > would be nice if it was included in plain SQL.\n> > > \n> > > I am very sceptical about it. What benefit do you expect? When\n> > > you cannot reuse an execution plan, then there is not any benefit\n> > > of this. Then you don't need prepared statements, and all this\n> > > API is useless. So some questions are frequent and don't mean\n> > > necessity to redesign. The developers just miss the fundamental\n> > > knowledge of database technology. 
\n> > > \n> > > Regards\n> > > Pavel\n> > > \n> > > > Best regards. Alejandro Sánchez.\n> > > > El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n> > > > > Hi\n> > > > > \n> > > > > po 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <\n> > > > > alex@nexttypes.com> napsal:\n> > > > > > Hello, some improvements in the prepared statements would\n> > > > > > facilitate\n> > > > > > \n> > > > > > their use from applications:\n> > > > > > \n> > > > > > - Use of table and column names in prepared statements.\n> > > > > > \n> > > > > > Example: select # from # where # = ?;\n> > > > > > \n> > > > > > - Use of arrays in prepared statements.\n> > > > > > \n> > > > > > Example: select # from article where id in (?);\n> > > > > > \n> > > > > > # = author,title\n> > > > > > \n> > > > > > ? = 10,24,45\n> > > > > \n> > > > > The server side prepared statements are based on reusing\n> > > > > execution plans. You cannot reuse execution plans if you\n> > > > > change table, or column. This is the reason why SQL\n> > > > > identifiers are immutable in prepared statements. There are\n> > > > > client side prepared statements - JDBC does it. There it is\n> > > > > possible. But it is impossible on the server side. Prepared\n> > > > > statements are like a compiled program. You can change\n> > > > > parameters, variables - but you cannot change the program.\n> > > > > \n> > > > > Regards\n> > > > > Pavel\n> > > > > \n> > > > > > \n> > > > > > Best regards.\n> > > > > > \n> > > > > > Alejandro Sánchez.",
"msg_date": "Mon, 1 Mar 2021 17:26:10 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvements in prepared statements"
},
{
"msg_contents": "Hi\n\npo 1. 3. 2021 v 17:26 odesílatel Alejandro Sánchez <alex@nexttypes.com>\nnapsal:\n\n> It is a matter of taste. I think this functionality would be better in SQL\n>\n> and be the same for all languages without the need to use string functions.\n>\n>\nYou can try to implement it, and send a patch. But I think a) it will be\njust string concatenation, and then it is surely useless, or b) you should\nintroduce a new parser, because current parser need to know SQL identifiers\nimmediately. But anyway - anybody can write your opinion here. From me - I\ndon't like this idea.\n\nRegards\n\nPavel\n\n\n\n\n>\n> El lun, 01-03-2021 a las 17:15 +0100, Pavel Stehule escribió:\n>\n>\n>\n> po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> The benefit is ease of use. One of the great advantages of prepared statements is not\n>\n> having to concatenate strings. The use of arrays would also be very useful.\n>\n>\n> query(\"select \" + column1 + \",\" + column2 + \" from \" + table + \" where id in (?)\", ids);\n>\n>\n> VS\n>\n>\n> query(\"select # from # where id in (?)\", columns, table, ids);\n>\n>\n> And it doesn't have to be done with prepared statements, it can just be another SQL syntax.\n>\n>\n> This is not too strong an argument - any language (and Database API) has\n> necessary functionality now. Just you should use it.\n>\n> You can use fprintf in php, format in plpgsql, String.Format in C#, Java,\n> ...\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> El lun, 01-03-2021 a las 16:46 +0100, Pavel Stehule escribió:\n>\n>\n>\n> po 1. 3. 
2021 v 16:39 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> Hello, as far as I know it is not done in JDBC, in many frameworks it is.\n>\n> Although the execution plans cannot be reused it would be something\n>\n> very useful.\n>\n> It is included in a lot of frameworks and is\n>\n> a recurrent\n>\n>\n> <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>\n> question in database forums\n>\n>\n> <https://www.google.com/search?client=firefox-b-e&biw=1016&bih=475&sxsrf=ALeKk03ixEtdOsWcDWjkGcmo_MaTxdKWqw%3A1614613001966&ei=CQo9YKmzOtHlgwfCxoyoCQ&q=prepared+statement+table+name&oq=prepared+statement+table+name&gs_lcp=Cgdnd3Mtd2l6EAMyCwgAELADEMsBEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQBxAeEIsDMgwIABCwAxAHEB4QiwMyDAgAELADEAcQHhCLAzIMCAAQsAMQCBAeEIsDUABYAGDUyQRoAXAAeACAAegMiAHoDJIBAzgtMZgBAKoBB2d3cy13aXrIAQq4AQHAAQE&sclient=gws-wiz&ved=0ahUKEwjp27mTto_vAhXR8uAKHUIjA5U4FBDh1QMIDA&uact=5>\n>\n> . I\n>\n> t\n>\n> would be nice if it was included in plain\n>\n> SQL.\n>\n>\n>\n> I am very sceptical about it. What benefit do you expect? When you cannot\n> reuse an execution plan, then there is not any benefit of this. Then you\n> don't need prepared statements, and all this API is useless. So some\n> questions are frequent and don't mean necessity to redesign. 
The developers\n> just miss the fundamental knowledge of database technology.\n>\n> Regards\n>\n> Pavel\n>\n>\n> Best regards.\n>\n> Alejandro Sánchez.\n>\n>\n> El lun, 01-03-2021 a las 15:31 +0100, Pavel Stehule escribió:\n>\n> Hi\n>\n> po 1. 3. 2021 v 15:20 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> Hello, some improvements in the prepared statements would facilitate\n> their use from applications:\n>\n> - Use of table and column names in prepared statements.\n>\n> Example: select # from # where # = ?;\n>\n> - Use of arrays in prepared statements.\n>\n> Example: select # from article where id in (?);\n>\n> # = author,title\n> ? = 10,24,45\n>\n>\n> The server side prepared statements are based on reusing execution plans.\n> You cannot reuse execution plans if you change table, or column. This is\n> the reason why SQL identifiers are immutable in prepared statements. There\n> are client side prepared statements - JDBC does it. There it is possible.\n> But it is impossible on the server side. Prepared statements are like a\n> compiled program. You can change parameters, variables - but you cannot\n> change the program.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> Best regards.\n> Alejandro Sánchez.",
"msg_date": "Mon, 1 Mar 2021 17:35:34 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvements in prepared statements"
},
{
"msg_contents": "I have already implemented this in my Java project with some kind of\nSQL preprocessor. I leave the idea here in case more people are\ninterested and PostgreSQL developers find it convenient to include it.\nIt is just string concatenation but it is a syntactic sugar very useful\nin any SQL application. \nEl lun, 01-03-2021 a las 17:35 +0100, Pavel Stehule escribió:\n> Hi\n> \n> po 1. 3. 2021 v 17:26 odesílatel Alejandro Sánchez <\n> alex@nexttypes.com> napsal:\n> > It is a matter of taste. I think this functionality would be better\n> > in SQL and be the same for all languages without the need to use\n> > string functions.\n> \n> You can try to implement it, and send a patch. But I think a) it will\n> be just string concatenation, and then it is surely useless, or b)\n> you should introduce a new parser, because current parser need to\n> know SQL identifiers immediately. But anyway - anybody can write\n> your opinion here. From me - I don't like this idea. \n> \n> Regards\n> Pavel",
"msg_date": "Mon, 1 Mar 2021 18:18:07 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvements in prepared statements"
},
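The client-side "#" preprocessor Alejandro describes can be sketched in a few lines. This is a hypothetical illustration of the idea, not his actual implementation: each `#` is replaced, left to right, by the next identifier argument before the string reaches the driver, leaving `?` parameters untouched (a real version would also validate or quote the identifiers):

```java
// Minimal sketch of a "#" identifier preprocessor for SQL templates.
public class HashPreprocessor {
    static String expand(String template, String... identifiers) {
        StringBuilder out = new StringBuilder();
        int next = 0;
        for (char c : template.toCharArray()) {
            if (c == '#') {
                // Splice in the next identifier argument for each #.
                out.append(identifiers[next++]);
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```

As Pavel notes, this is pure string concatenation on the client; the server-side prepared statement still sees fixed identifiers and only `?` as parameters.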
{
"msg_contents": "Using any() has the disadvantage that in JDBC it is necessary to create\nan array with connection.createArrayOf() and indicate the type of the\narray, which complicates automation.\nWith statement.setObject() you can pass any type of parameter. JDBC\ncould add a method that doesn't need the array type.\n\nString sql = \"select author from article where id = any(?)\";\ntry (PreparedStatement statement = connection.prepareStatement(sql)) {\n    statement.setArray(1, connection.createArrayOf(\"varchar\",\n        new String[] {\"home\", \"system\"}));\n}\n\nVS\n\nquery(\"select author from article where id = any(?)\", new String[]\n{\"home\", \"system\"});\n\nEl lun, 01-03-2021 a las 17:21 +0100, Pavel Stehule escribió:\n> po 1. 3. 2021 v 17:15 odesílatel Pavel Stehule <\n> pavel.stehule@gmail.com> napsal:\n> > po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <\n> > alex@nexttypes.com> napsal:\n> > > The benefit is ease of use. One of the great advantages of\n> > > prepared statements is not having to concatenate strings. The use\n> > > of arrays would also be very useful. \n> > > query(\"select \" + column1 + \",\" + column2 + \" from \" + table + \"\n> > > where id in (?)\", ids);\n> \n> The argument with arrays is not good. You can work with arrays just\n> on binary level, that is more effective. But just you should use\n> operator = ANY() instead IN. \n> \n> Regards\n> Pavel",
"msg_date": "Mon, 1 Mar 2021 21:35:02 +0100",
"msg_from": "Alejandro =?ISO-8859-1?Q?S=E1nchez?= <alex@nexttypes.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvements in prepared statements"
},
{
"msg_contents": "po 1. 3. 2021 v 21:35 odesílatel Alejandro Sánchez <alex@nexttypes.com>\nnapsal:\n\n> Using any() has the disadvantage that in JDBC it is necessary\n>\n> to create an array with connection.createArrayOf() and indicate\n>\n> the type of the array, which complicates automation.\n>\n>\n> With statement.setObject() you can pass any type of parameter.\n>\n> JDBC could add a method that doesn't need the array type.\n>\n>\n> String sql = \"select author from article where id = any(?)\";\n> try (PreparedStatement statement = connection.prepareStatement(sql)) {\n> statement.setArray(1, connection.createArrayOf(\"varchar\",\n> new String[] {\"home\", \"system\"}));\n> }\n>\n> VS\n>\n> query(\"select author from article where id = any(?)\", new String[]\n> {\"home\", \"system\"});\n>\n\nCan be, but this is a client side issue. It is about design of client side\nAPI.\n\nPavel\n\n\n\n> El lun, 01-03-2021 a las 17:21 +0100, Pavel Stehule escribió:\n>\n>\n>\n> po 1. 3. 2021 v 17:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n> napsal:\n>\n>\n>\n> po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <alex@nexttypes.com>\n> napsal:\n>\n> The benefit is ease of use. O\n>\n> ne of the great advantages of prepared statements is not\n>\n> having to concatenate strings. The use of arrays would also be very useful.\n>\n>\n> query(\"select \" + column1 + \",\" + column2 from \" \" + table + \" where id in (?), ids);\n>\n>\n>\n>\n> The argument with arrays is not good. You can work with arrays just on\n> binary level, that is more effective. But just you should use operator =\n> ANY() instead IN.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\npo 1. 3. 
2021 v 21:35 odesílatel Alejandro Sánchez <alex@nexttypes.com> napsal:Using any() has the disadvantage that in JDBC it is necessaryto create an array with connection.createArrayOf() and indicatethe type of the array, which complicates automation.With statement.setObject() you can pass any type of parameter.JDBC could add a method that doesn't need the array type.String sql = \"select author from article where id = any(?)\"; try (PreparedStatement statement = connection.prepareStatement(sql)) { \tstatement.setArray(1, connection.createArrayOf(\"varchar\", \t\tnew String[] {\"home\", \"system\"}));}VSquery(\"select author from article where id = any(?)\", new String[] {\"home\", \"system\"});Can be, but this is a client side issue. It is about design of client side API.Pavel El lun, 01-03-2021 a las 17:21 +0100, Pavel Stehule escribió:po 1. 3. 2021 v 17:15 odesílatel Pavel Stehule <pavel.stehule@gmail.com> napsal:po 1. 3. 2021 v 17:08 odesílatel Alejandro Sánchez <alex@nexttypes.com> napsal:The benefit is ease of use. One of the great advantages of prepared statements is nothaving to concatenate strings. The use of arrays would also be very useful. query(\"select \" + column1 + \",\" + column2 from \" \" + table + \" where id in (?), ids);The argument with arrays is not good. You can work with arrays just on binary level, that is more effective. But just you should use operator = ANY() instead IN. RegardsPavel",
"msg_date": "Mon, 1 Mar 2021 22:05:08 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvements in prepared statements"
}
]
[
{
"msg_contents": "Hi!\n\nI have primary server on port 55942 and two standbys on 55943 and 55944. \nThen use connection string like \n\"postgresql://127.0.0.1:55942,127.0.0.1:55943,127.0.0.1:55944/postgres\" \nto connect to the servers via psql.\n\nEverything works perfectly no matter how many attempts to connect I do.\nBut when I stop primary server, very rarely I do get an error \"received \ninvalid response to SSL negotiation\" from psql. I got this error when I \ntried to make massive connects/disconnects test and it's unlikely to \nreproduce manually without thousands of connections sequentially with no \nintentional delay in between.\n\nThe problem present only on Linux, MacOS works fine.\n\nAs far as I understand this particular problem is because of postgresql \ngets \"zero\" (i.e. 0) byte in SSLok in \nPQconnectPoll@src/interfaces/libpq/fe-connect.c. This lead to select \n\"else\" branch with described error message. This may be fixed by \nhandling zero byte as 'E' byte. But I'm not sure if it's good solution, \nsince I don't know why fe-connect gets an zero byte at all.\n\nI consider it's worth to correct this. This error is rare but it's \nreally odd to notice this unexpected and wrong behavior.\n\n---\nBest regards,\nMaxim Orlov.",
"msg_date": "Mon, 01 Mar 2021 17:22:27 +0300",
"msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "SSL negotiation error on massive connect/disconnect"
},
{
"msg_contents": "On 2021-03-01 17:22, Maxim Orlov wrote:\n> Hi!\n> \n> I have primary server on port 55942 and two standbys on 55943 and\n> 55944. Then use connection string like\n> \"postgresql://127.0.0.1:55942,127.0.0.1:55943,127.0.0.1:55944/postgres\"\n> to connect to the servers via psql.\n> \n> Everything works perfectly no matter how many attempts to connect I do.\n> But when I stop primary server, very rarely I do get an error\n> \"received invalid response to SSL negotiation\" from psql. I got this\n> error when I tried to make massive connects/disconnects test and it's\n> unlikely to reproduce manually without thousands of connections\n> sequentially with no intentional delay in between.\n> \n> The problem present only on Linux, MacOS works fine.\n> \n> As far as I understand this particular problem is because of\n> postgresql gets \"zero\" (i.e. 0) byte in SSLok in\n> PQconnectPoll@src/interfaces/libpq/fe-connect.c. This lead to select\n> \"else\" branch with described error message. This may be fixed by\n> handling zero byte as 'E' byte. But I'm not sure if it's good\n> solution, since I don't know why fe-connect gets an zero byte at all.\n> \n> I consider it's worth to correct this. This error is rare but it's\n> really odd to notice this unexpected and wrong behavior.\n> \n> ---\n> Best regards,\n> Maxim Orlov.\n\nCorrection. Previous patch was wrong. The proper one is here.\n\n---\nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 09 Apr 2021 13:56:05 +0300",
"msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: SSL negotiation error on massive connect/disconnect"
}
]
[
{
"msg_contents": "Hackers,\n\nThe 2019-03 commitfest is now in progress. It's a big one as usual.\n\nNeeds review: 213.\nWaiting on Author: 21.\nReady for Committer: 28.\nCommitted: 29.\nWithdrawn: 3.\nTotal: 294.\n\nIf you are a patch author please check http://commitfest.cputube.org to \nbe sure your patch still applies, compiles, and passes tests.\n\nPlease be sure to review patches of equal or greater complexity to the \npatches you submit. This only works well if everyone participates.\n\nHappy coding!\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 1 Mar 2021 10:30:17 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "2019-03 CF now in progress"
},
{
"msg_contents": "Could I suggest to update the CF APP to allow:\n| Target version: 15\n\nAlso, I wanted to suggest that toward the end of the devel cycle, that\ncommitters set/unset target version to allow more focused review effort.\nAnd so it's not a total surprise when something isn't progressed.\nAnd as a simple way for a committer to mark an \"intent to commit\" so it's not a\nsurprise when something *is* committed.\n\nMost of my patches are currently marked v14, but it'd be fine to kick them to\nlater.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 1 Mar 2021 17:19:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "Hi Justin,\n\nOn 3/1/21 6:19 PM, Justin Pryzby wrote:\n> Could I suggest to update the CF APP to allow:\n> | Target version: 15\n\nI don't have permission to add target versions (or at least I can't find \nit in the admin interface) so hopefully Michael or Magnus can do it.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 2 Mar 2021 07:53:25 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 1:53 PM David Steele <david@pgmasters.net> wrote:\n>\n> Hi Justin,\n>\n> On 3/1/21 6:19 PM, Justin Pryzby wrote:\n> > Could I suggest to update the CF APP to allow:\n> > | Target version: 15\n>\n> I don't have permission to add target versions (or at least I can't find\n> it in the admin interface) so hopefully Michael or Magnus can do it.\n\nDone.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 2 Mar 2021 18:45:01 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "On 3/1/21 10:30 AM, David Steele wrote:\n> Hackers,\n> \n> The 2019-03 commitfest is now in progress. It's a big one as usual.\n> \n> Needs review: 213.\n> Waiting on Author: 21.\n> Ready for Committer: 28.\n> Committed: 29.\n> Withdrawn: 3.\n> Total: 294.\n\nWe are now halfway through the 2021-03 commitfest, though historically \nthis CF goes a bit long.\n\nHere is the updated summary:\n\nNeeds review: 152. (-61)\nWaiting on Author: 42. (+21)\nReady for Committer: 26. (-2)\nCommitted: 55. (+26)\nMoved to next CF: 5. (+5)\nReturned with Feedback: 4. (+4)\nRejected: 1. (+1)\nWithdrawn: 9. (+6)\nTotal: 294.\n\nIn short, 42 of 262 entries open at the beginning of the CF have been \nclosed (16%).\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 15 Mar 2021 10:24:53 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "On 3/15/21 10:24 AM, David Steele wrote:\n> On 3/1/21 10:30 AM, David Steele wrote:\n>> Hackers,\n>>\n>> The 2019-03 commitfest is now in progress. It's a big one as usual.\n>>\n>> Needs review: 213.\n>> Waiting on Author: 21.\n>> Ready for Committer: 28.\n>> Committed: 29.\n>> Withdrawn: 3.\n>> Total: 294.\n> \n> We are now halfway through the 2021-03 commitfest, though historically \n> this CF goes a bit long.\n> \n> Here is the updated summary:\n> \n> Needs review: 152. (-61)\n> Waiting on Author: 42. (+21)\n> Ready for Committer: 26. (-2)\n> Committed: 55. (+26)\n> Moved to next CF: 5. (+5)\n> Returned with Feedback: 4. (+4)\n> Rejected: 1. (+1)\n> Withdrawn: 9. (+6)\n> Total: 294.\n> \n> In short, 42 of 262 entries open at the beginning of the CF have been \n> closed (16%).\n\nThe 2021-03 CF is now over.\n\nHere is the updated summary:\n\nNeeds review: 80 (-72)\nWaiting on Author: 34 (-8)\nReady for Committer: 14 (-12)\nCommitted: 120 (+65)\nMoved to next CF: 16 (+11)\nWithdrawn: 14 (+5)\nRejected: 1\nReturned with Feedback: 15 (+11)\nTotal: 294\n\nOverall, 118 of 262 entries were closed during this commitfest (45%). \nThat includes 91 patches committed since March 1, which is pretty \nfantastic. Thank you to everyone, especially the committers, for your \nhard work!\n\nToday and tomorrow I'll be checking the Waiting on Author patches to \ndetermine which should be moved to the next CF and which should be \nreturned with feedback. The rest will be moved to the next CF when this \none is closed.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 09:49:20 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "On Thu, 8 Apr 2021 at 15:49, David Steele <david@pgmasters.net> wrote:\n>\n> On 3/15/21 10:24 AM, David Steele wrote:\n> > On 3/1/21 10:30 AM, David Steele wrote:\n> >> Hackers,\n> >>\n> >> The 2019-03 commitfest is now in progress. It's a big one as usual.\n> >>\n> >> Needs review: 213.\n> >> Waiting on Author: 21.\n> >> Ready for Committer: 28.\n> >> Committed: 29.\n> >> Withdrawn: 3.\n> >> Total: 294.\n> >\n> > We are now halfway through the 2021-03 commitfest, though historically\n> > this CF goes a bit long.\n> >\n> > Here is the updated summary:\n> >\n> > Needs review: 152. (-61)\n> > Waiting on Author: 42. (+21)\n> > Ready for Committer: 26. (-2)\n> > Committed: 55. (+26)\n> > Moved to next CF: 5. (+5)\n> > Returned with Feedback: 4. (+4)\n> > Rejected: 1. (+1)\n> > Withdrawn: 9. (+6)\n> > Total: 294.\n> >\n> > In short, 42 of 262 entries open at the beginning of the CF have been\n> > closed (16%).\n>\n> The 2021-03 CF is now over.\n>\n> Here is the updated summary:\n>\n> Needs review: 80 (-72)\n> Waiting on Author: 34 (-8)\n> Ready for Committer: 14 (-12)\n> Committed: 120 (+65)\n> Moved to next CF: 16 (+11)\n> Withdrawn: 14 (+5)\n> Rejected: 1\n> Returned with Feedback: 15 (+11)\n> Total: 294\n>\n> Overall, 118 of 262 entries were closed during this commitfest (45%).\n> That includes 91 patches committed since March 1, which is pretty\n> fantastic. Thank you to everyone, especially the committers, for your\n> hard work!\n\nThanks for all of your great work!\n\nIf my memory serves correctly from a statistics thread from 2020, I\nbelieve that this a new record, at least with regards to number of\ncommitted patches registered to one CF.\n\n> Today and tomorrow I'll be checking the Waiting on Author patches to\n> determine which should be moved to the next CF and which should be\n> returned with feedback. 
The rest will be moved to the next CF when this\n> one is closed.\n\nI noticed that there is an old patch stuck at 'Needs review' in CF\n2020-09. Could you also update the state of that patch, because I\ndon't think I am the right person to do that.\n\nWith regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:12:19 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "On 4/8/21 10:12 AM, Matthias van de Meent wrote:\n> \n> I noticed that there is an old patch stuck at 'Needs review' in CF\n> 2020-09. Could you also update the state of that patch, because I\n> don't think I am the right person to do that.\n\nGood catch, thanks. I have reset it to RwF.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 8 Apr 2021 10:23:10 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": true,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "On Thu, Apr 08, 2021 at 09:49:20AM -0400, David Steele wrote:\n> Overall, 118 of 262 entries were closed during this commitfest (45%). That\n> includes 91 patches committed since March 1, which is pretty fantastic.\n> Thank you to everyone, especially the committers, for your hard work!\n\nI think that the biggest thanks here goes to you, for cleaning the CF\nentries completely. So, thanks!\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 10:10:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 2019-03 CF now in progress"
},
{
"msg_contents": "From: David Steele <david@pgmasters.net>\r\n> Overall, 118 of 262 entries were closed during this commitfest (45%).\r\n> That includes 91 patches committed since March 1, which is pretty\r\n> fantastic. Thank you to everyone, especially the committers, for your\r\n> hard work!\r\n\r\nThe number of committed patches in the last CF is record-breaking in recent years:\r\n\r\nPG 14: 124\r\nPG 13: 90\r\nPG 12: 100\r\nPG 11: 117\r\nPG 10: 116\r\nPG 9.6: 74\r\nPG 9.5: 102\r\n\r\nI take off my hat to the hard work of committers and CFM. OTOH, there seems to be many useful-looking features pending as ready for committer. I look forward to seeing them committed early in PG 15.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\r\n",
"msg_date": "Fri, 9 Apr 2021 01:30:13 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: 2019-03 CF now in progress"
}
]
[
{
"msg_contents": "It looks like we are unnecessarily instructing our usiers to vacuum their\ndatabases in single-user mode when just vacuuming would be enough.\n\nWe should fix the error message to be less misleading.\n\n== The story\n\nI think most of us have at some time seen the following message, if not in their\nown database, then at some client.\n\nERROR: database is not accepting commands to avoid wraparound data\nloss in database \"<dbname>\"\nHINT: Stop the postmaster and vacuum that database in single-user mode.\nYou might also need to commit or roll back old prepared transactions.\n\nBut \"vacuum that database in single-user mode\" is the very last thing\none wants to\ndo, because\n\n* it is single-user mode, so nothing else works ...\n* CHECKPOINTs are not running, so all the WAL segments can not be rotated and\n reused\n* Replication does not work, so after vacuum is done and database is started\n in normal mode, there is huge backlog to replicate\n* pg_stat_progress_vacuum is not available so you have no idea when the\n command is going to complete\n* VACUUM VERBOSE isn't - there is absolutely no output from single-user mode\n vacuum with or without VERBOSE, so you *really* do not know what is going on\n and how much progress is made (if you are locky you can guess something from\n IO and CPU monitoring, but it is inexact at best )\n\nWhen I started looking at improving the situation I discovered, that there\nalready is no need to run VACUUM in single user mode in any currently supported\nPostgreSQL version as you can run VACUUM perfectly well when the wraparound\nprotection is active.\n\nIt worked in all PG versions from v9.6 to v13.\n\nI also tested v 8.3 as this is where we added virtual transactions, but there\nVACUUM really fails to run successfully without single-user mode..\n\nSo my proposal is to change the error message [*] to something that does not\nsuggest the single-user mode as the requirement for running VACUUM.\nAlso COMMIT PREPARED 
still works ok in this situation.\n\nSingle-user mode still may be needed in case one needs to drop a\nreplication slot\nor something similar.\n\n[*] The message is in src/backend/access/transam/varsup.c around line 120\n\n=== How to test\n\nThe following instructions let you run into wraparound in about an hour,\ndepending on your setup (was 1.2 hours on my laptop)\n\n==== First, set some flags\n\nTo allow PREPARE TRANSACTION to block VACUUM cleanup\n\n```\nalter system set max_prepared_transactions = 10;\n```\n\nAlso set *_min_messages to errors, unless you want to get 10M of\nWARNINGs (~4GB) to\nlogs and the same amount sent to client, slowing down the last 10M transactions\nsignificantly.\n\n```\nalter system set log_min_messages = error;\nalter system set client_min_messages = error;\n```\n\n==== Restart the system to activate the settings\n\n==== Block Vacuum from cleaning up transactions\n\nCreate a database `wraptest` and connect to it, then\n\n```\ncreate table t (i int);\n\nBEGIN;\ninsert into t values(1);\nPREPARE TRANSACTION 'trx_id_pin';\n```\n\nNow you have a prepared transaction, which makes sure that even well-tuned\nautovacuum does not prevent running into the wraparound protection.\n\n```\n[local]:5096 hannu@wraptest=# SELECT * FROM pg_prepared_xacts;\n transaction | gid | prepared | owner | database\n-------------+------------+-------------------------------+-------+----------\n 593 | trx_id_pin | 2021-03-01 08:57:27.024777+01 | hannu | wraptest\n(1 row)\n```\n\n==== Create a function to consume transaction ids as fast as possible:\n\n```\nCREATE OR REPLACE FUNCTION trx_eater(n int) RETURNS void\nLANGUAGE plpgsql\nAS $plpgsql$\nBEGIN\n FOR i IN 0..n LOOP\n BEGIN\n INSERT INTO t values(i);\n EXCEPTION WHEN OTHERS THEN\n RAISE; -- raise it again, so that we actually err out on wraparound\n END;\n END LOOP;\nEND;\n$plpgsql$;\n```\n\n==== Use pgbench to drive this function\n\nMake a pgbench command file\n\n$ echo 'select trx_eater(100000);' > 
trx_eater.pgbench\n\nand start pgbench to run this function in a few backends in parallel\n\n$ pgbench -c 16 -T 20000 -P 60 -n wraptest -f trx_eater.pgbench\n\n==== Wait 1-2 hours\n\nIn about an hour or two this should error out with\n\nERROR: database is not accepting commands to avoid wraparound data\nloss in database \"postgres\"\nHINT: Stop the postmaster and vacuum that database in single-user mode.\nYou might also need to commit or roll back old prepared transactions.\n\nAfter this just do\n\nCOMMIT PREPARED 'trx_id_pin';\n\n==== Verify that VACUUM still works\n\nto release the blocked 2PC transaction and you can verify yourself that\n\n* you can run VACUUM on any table, and\n* Autovacuum is working, and will eventually clear up the situation\n\nIf you have not tuned autovacuum_vacuum_cost_* at all, especially in earlier\nversions where it is 20ms by default, the autovacuum-started vacuum is\nrunning really\nslowly, and it will take about 8 hours to clean up the table, but this\ncan be sped up\nif you set autovacuum_vacuum_cost_delay=0 and then either restart the database\nor just kill the vacuum process after reloading flags. After this it\nshould complete in\n15-30 min, after which the database is available for writes again.\n\nCheers,\nHannu Krosing\n\n\n",
"msg_date": "Mon, 1 Mar 2021 16:32:23 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": true,
"msg_subject": "We should stop telling users to \"vacuum that database in single-user\n mode\""
},
{
"msg_contents": "On Mon, Mar 01, 2021 at 04:32:23PM +0100, Hannu Krosing wrote:\n> It looks like we are unnecessarily instructing our usiers to vacuum their\n> databases in single-user mode when just vacuuming would be enough.\n\n> When I started looking at improving the situation I discovered, that there\n> already is no need to run VACUUM in single user mode in any currently supported\n> PostgreSQL version as you can run VACUUM perfectly well when the wraparound\n> protection is active.\n\nA comment in SetTransactionIdLimit() says, \"VACUUM requires an XID if it\ntruncates at wal_level!=minimal.\" Hence, I think plain VACUUM will fail some\nof the time; VACUUM (TRUNCATE false) should be reliable. In general, I like\nyour goal of replacing painful error message advice with less-painful advice.\nAt the same time, it's valuable for the advice to reliably get the user out of\nthe bad situation.\n\n\n",
"msg_date": "Mon, 1 Mar 2021 18:39:54 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Tue, 2 Mar 2021 at 04:32, Hannu Krosing <hannuk@google.com> wrote:\n>\n> It looks like we are unnecessarily instructing our usiers to vacuum their\n> databases in single-user mode when just vacuuming would be enough.\n>\n> We should fix the error message to be less misleading.\n\nIt would be good to change the message as it's pretty outdated. Back\nin 8ad3965a1 (2005) when the message was added, SELECT and VACUUM\nwould have called GetNewTransactionId(). That's no longer the case.\nWe only do that when we actually need an XID.\n\nHowever, I wonder if it's worth going a few steps further to try and\nreduce the chances of that message being seen in the first place.\nMaybe it's worth considering ditching any (auto)vacuum cost limits for\nany table which is within X transaction from wrapping around.\nLikewise for \"VACUUM;\" when the database's datfrozenxid is getting\ndangerously high.\n\nSuch \"emergency\" vacuums could be noted in the auto-vacuum log and\nNOTICEd or WARNING sent to the user during manual VACUUMs. Maybe the\nvalue of X could be xidStopLimit minus a hundred million or so.\n\nI have seen it happen that an instance has a vacuum_cost_limit set and\nsomeone did start the database in single-user mode, per the advice of\nthe error message only to find that the VACUUM took a very long time\ndue to the restrictive cost limit. I struggle to imagine why anyone\nwouldn't want the vacuum to run as quickly as possible in that\nsituation.\n\n(Ideally, the speed of auto-vacuum would be expressed as a percentage\nof time spent working vs sleeping rather than an absolute speed\nlimit... that way, faster servers would get faster vacuums, assuming\nthe same settings. Vacuums may also get more work done per unit of\ntime during offpeak, which seems like it might be a thing that people\nmight want.)\n\nDavid\n\n\n",
"msg_date": "Tue, 2 Mar 2021 19:51:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 7:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 2 Mar 2021 at 04:32, Hannu Krosing <hannuk@google.com> wrote:\n> >\n> > It looks like we are unnecessarily instructing our usiers to vacuum their\n> > databases in single-user mode when just vacuuming would be enough.\n> >\n> > We should fix the error message to be less misleading.\n>\n> It would be good to change the message as it's pretty outdated. Back\n> in 8ad3965a1 (2005) when the message was added, SELECT and VACUUM\n> would have called GetNewTransactionId(). That's no longer the case.\n> We only do that when we actually need an XID.\n>\n> However, I wonder if it's worth going a few steps further to try and\n> reduce the chances of that message being seen in the first place.\n> Maybe it's worth considering ditching any (auto)vacuum cost limits for\n> any table which is within X transaction from wrapping around.\n> Likewise for \"VACUUM;\" when the database's datfrozenxid is getting\n> dangerously high.\n>\n> Such \"emergency\" vacuums could be noted in the auto-vacuum log and\n> NOTICEd or WARNING sent to the user during manual VACUUMs. Maybe the\n> value of X could be xidStopLimit minus a hundred million or so.\n>\n> I have seen it happen that an instance has a vacuum_cost_limit set and\n> someone did start the database in single-user mode, per the advice of\n> the error message only to find that the VACUUM took a very long time\n> due to the restrictive cost limit. I struggle to imagine why anyone\n> wouldn't want the vacuum to run as quickly as possible in that\n> situation.\n\nMultiple instances running on the same hardware and only one of them\nbeing in trouble?\n\nBut it would probably be worthwhile throwing a WARNING if vacuum is\nrun with cost delay enabled in single user mode -- so that the user is\nat least aware of the choice (and can cancel and try again). 
Maybe\neven a warning directly when starting up a single user session, to let\nthem know?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 2 Mar 2021 13:12:07 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 01:12, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Mar 2, 2021 at 7:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I have seen it happen that an instance has a vacuum_cost_limit set and\n> > someone did start the database in single-user mode, per the advice of\n> > the error message only to find that the VACUUM took a very long time\n> > due to the restrictive cost limit. I struggle to imagine why anyone\n> > wouldn't want the vacuum to run as quickly as possible in that\n> > situation.\n>\n> Multiple instances running on the same hardware and only one of them\n> being in trouble?\n\nYou might be right. I'm not saying it's a great idea but thought it\nwas worth considering.\n\nWe could turn to POLA and ask; what would you be more surprised at; 1)\nYour database suddenly using more I/O than it had been previously, or;\n2) Your database no longer accepting DML.\n\nDavid\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:07:14 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 10:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 3 Mar 2021 at 01:12, Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Tue, Mar 2, 2021 at 7:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > I have seen it happen that an instance has a vacuum_cost_limit set and\n> > > someone did start the database in single-user mode, per the advice of\n> > > the error message only to find that the VACUUM took a very long time\n> > > due to the restrictive cost limit. I struggle to imagine why anyone\n> > > wouldn't want the vacuum to run as quickly as possible in that\n> > > situation.\n> >\n> > Multiple instances running on the same hardware and only one of them\n> > being in trouble?\n>\n> You might be right. I'm not saying it's a great idea but thought it\n> was worth considering.\n>\n> We could turn to POLA and ask; what would you be more surprised at; 1)\n> Your database suddenly using more I/O than it had been previously, or;\n> 2) Your database no longer accepting DML.\n\nI think we misunderstand each other. I meant this only as a comment\nabout the idea of ignoring the cost limit in single user mode -- that\nis, it's a reason to *want* vacuum to not run as quickly as possible\nin single user mode. I should've trimmed the email better.\n\nI agree with your other idea, that of kicking in a more aggressive\nautovacuum if it's not dealing with things fast enough. Maybe even on\nan incremental way - that is run with the default, then at another\nthreshold drop them to half, and at yet another threshold drop them to\n0. I agree that pretty much anything is better than forcing the user\ninto single user mode.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 3 Mar 2021 09:44:15 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 21:44, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Mar 2, 2021 at 10:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > On Wed, 3 Mar 2021 at 01:12, Magnus Hagander <magnus@hagander.net> wrote:\n> > >\n> > > On Tue, Mar 2, 2021 at 7:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > > > I have seen it happen that an instance has a vacuum_cost_limit set and\n> > > > someone did start the database in single-user mode, per the advice of\n> > > > the error message only to find that the VACUUM took a very long time\n> > > > due to the restrictive cost limit. I struggle to imagine why anyone\n> > > > wouldn't want the vacuum to run as quickly as possible in that\n> > > > situation.\n> > >\n> > > Multiple instances running on the same hardware and only one of them\n> > > being in trouble?\n> >\n> > You might be right. I'm not saying it's a great idea but thought it\n> > was worth considering.\n> >\n> > We could turn to POLA and ask; what would you be more surprised at; 1)\n> > Your database suddenly using more I/O than it had been previously, or;\n> > 2) Your database no longer accepting DML.\n>\n> I think we misunderstand each other. I meant this only as a comment\n> about the idea of ignoring the cost limit in single user mode -- that\n> is, it's a reason to *want* vacuum to not run as quickly as possible\n> in single user mode. I should've trimmed the email better.\n\nI meant to ignore the cost limits if we're within a hundred million or\nso of the stopLimit. Per what Hannu mentioned, there does not seem to\nbe a great need with current versions of PostgreSQL to restart in the\ninstance in single-user mode. VACUUM still works once we're beyond the\nstopLimit. It's just commands that need to generate a new XID that'll\nfail with the error message mentioned by Hannu.\n\n> I agree with your other idea, that of kicking in a more aggressive\n> autovacuum if it's not dealing with things fast enough. 
Maybe even on\n> an incremental way - that is run with the default, then at another\n> threshold drop them to half, and at yet another threshold drop them to\n> 0. I agree that pretty much anything is better than forcing the user\n> into single user mode.\n\nOK cool. I wondered if it should be reduced incrementally or just\nswitch off the cost limit completely once we're beyond\nShmemVariableCache->xidStopLimit. If we did want it to be incremental\nthen if we had say ShmemVariableCache->xidFastVacLimit, which was\nabout 100 million xids before xidStopLimit, then the code could adjust\nthe sleep delay down by the percentage through we are from\nxidFastVacLimit to xidStopLimit.\n\nHowever, if we want to keep adjusting the sleep delay then we need to\nmake that work for vacuums that are running already. We don't want to\ncall ReadNextTransactionId() too often, but maybe if we did it once\nper 10 seconds worth of vacuum_delay_point()s. That way we'd never do\nit for vacuums already going at full speed.\n\nDavid\n\n\n",
"msg_date": "Wed, 3 Mar 2021 23:33:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 11:33 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 3 Mar 2021 at 21:44, Magnus Hagander <magnus@hagander.net> wrote:\n...\n> > I think we misunderstand each other. I meant this only as a comment\n> > about the idea of ignoring the cost limit in single user mode -- that\n> > is, it's a reason to *want* vacuum to not run as quickly as possible\n> > in single user mode. I should've trimmed the email better.\n>\n> I meant to ignore the cost limits if we're within a hundred million or\n> so of the stopLimit. Per what Hannu mentioned, there does not seem to\n> be a great need with current versions of PostgreSQL to restart in the\n> instance in single-user mode. VACUUM still works once we're beyond the\n> stopLimit. It's just commands that need to generate a new XID that'll\n> fail with the error message mentioned by Hannu.\n\nI am investigating a possibility of introducing a special \"Restricted\nMaintenance\nMode\" to let admin mitigate after xidStopLimit, maybe for another 0.5M txids,\nby doing things like\n\n* dropping an index - to make vacuum faster\n* dropping a table - sometimes it is better to drop a table in order to get the\n production database functional again instead of waiting hours for the vacuum\n to finish.\n And then later restore it from backup or maybe access it from a read-only\n clone of the database via FDW.\n* drop a stale replication slot which is holding back vacuum\n\nTo make sure that this will not accidentally just move xidStopLimit to 0.5M for\nusers who run main workloads as a superuser (they do exists!) this mode should\nbe restricted to\n* only superuser\n* only a subset of commands / functions\n* be heavily throttled to avoid running out of TXIDs, maybe 1-10 xids per second\n* maybe require also setting a GUC to be very explicit\n\n> > I agree with your other idea, that of kicking in a more aggressive\n> > autovacuum if it's not dealing with things fast enough. 
Maybe even on\n> > an incremental way - that is run with the default, then at another\n> > threshold drop them to half, and at yet another threshold drop them to\n> > 0. I agree that pretty much anything is better than forcing the user\n> > into single user mode.\n>\n> OK cool. I wondered if it should be reduced incrementally or just\n> switch off the cost limit completely once we're beyond\n> ShmemVariableCache->xidStopLimit.\n\nAbrupt change is something that is more likely to make the user/DBA notice\nthat something is going on. I have even been thinking about deliberate\nthrottling to make the user notice / pay attention.\n\n> If we did want it to be incremental\n> then if we had say ShmemVariableCache->xidFastVacLimit, which was\n> about 100 million xids before xidStopLimit, then the code could adjust\n> the sleep delay down by the percentage through we are from\n> xidFastVacLimit to xidStopLimit.\n>\n> However, if we want to keep adjusting the sleep delay then we need to\n> make that work for vacuums that are running already. We don't want to\n> call ReadNextTransactionId() too often, but maybe if we did it once\n> per 10 seconds worth of vacuum_delay_point()s. That way we'd never do\n> it for vacuums already going at full speed.\n\nThere are already samples of this in code, for example the decision to\nforce-start disabled autovacuum is considered after every 64k transactions.\n\nThere is a related item in https://commitfest.postgresql.org/32/2983/ .\nWhen that gets done, we could drive the adjustments from autovacuum.c by\nadding the remaining XID range adjustment to existing worker delay adjust\nmechanisms in autovac_balance_cost() and signalling the autovacuum\nbackend to run the adjustment every few seconds once we are in the danger\nzone.\n\nCheers\nHannu\n\n\n",
"msg_date": "Wed, 3 Mar 2021 13:10:31 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": true,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 7:10 AM Hannu Krosing <hannuk@google.com> wrote:\n> On Wed, Mar 3, 2021 at 11:33 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Wed, 3 Mar 2021 at 21:44, Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > I meant to ignore the cost limits if we're within a hundred million or\n> > so of the stopLimit. Per what Hannu mentioned, there does not seem to\n> > be a great need with current versions of PostgreSQL to restart in the\n> > instance in single-user mode. VACUUM still works once we're beyond the\n> > stopLimit. It's just commands that need to generate a new XID that'll\n> > fail with the error message mentioned by Hannu.\n>\n> I am investigating a possibility of introducing a special \"Restricted\n> Maintenance\n> Mode\" to let admin mitigate after xidStopLimit, maybe for another 0.5M txids,\n> by doing things like\n>\n> * dropping an index - to make vacuum faster\n> * dropping a table - sometimes it is better to drop a table in order to get the\n> production database functional again instead of waiting hours for the vacuum\n> to finish.\n> And then later restore it from backup or maybe access it from a read-only\n> clone of the database via FDW.\n> * drop a stale replication slot which is holding back vacuum\n>\n\nI've talked with a few people about modifying wraparound and xid\nemergency vacuums to be more efficient, ie. run them without indexes,\nand possibly some other options. That seems like low-hanging fruit if\nnot already a thing.\n\n> To make sure that this will not accidentally just move xidStopLimit to 0.5M for\n> users who run main workloads as a superuser (they do exists!) 
this mode should\n> be restricted to\n> * only superuser\n> * only a subset of commands / functions\n> * be heavily throttled to avoid running out of TXIDs, maybe 1-10 xids per second\n> * maybe require also setting a GUC to be very explicit\n>\n> > > I agree with your other idea, that of kicking in a more aggressive\n> > > autovacuum if it's not dealing with things fast enough. Maybe even on\n> > > an incremental way - that is run with the default, then at another\n> > > threshold drop them to half, and at yet another threshold drop them to\n> > > 0. I agree that pretty much anything is better than forcing the user\n> > > into single user mode.\n> >\n> > OK cool. I wondered if it should be reduced incrementally or just\n> > switch off the cost limit completely once we're beyond\n> > ShmemVariableCache->xidStopLimit.\n>\n> Abrupt change is something that is more likely to make the user/DBA notice\n> that something is going on. I have even been thinking about deliberate\n> throttling to make the user notice / pay attention.\n>\n\nI worry that we're walking down the path of trying to find \"clever\"\nsolutions in a situation where the variety of production environments\n(and therefore the right way to handle this issue) is nearly endless.\nThat said... I think at the point we're talking about, subtly is not\nan absolute requirement... 
if people were paying attention they'd have\nnoticed autovacuum for wrap-around running or warnings in the logs; at\nsome point you do need to be a bit in your face that there is a real\npossibility of disaster around the corner.\n\n> > If we did want it to be incremental\n> > then if we had say ShmemVariableCache->xidFastVacLimit, which was\n> > about 100 million xids before xidStopLimit, then the code could adjust\n> > the sleep delay down by the percentage through we are from\n> > xidFastVacLimit to xidStopLimit.\n> >\n> > However, if we want to keep adjusting the sleep delay then we need to\n> > make that work for vacuums that are running already. We don't want to\n> > call ReadNextTransactionId() too often, but maybe if we did it once\n> > per 10 seconds worth of vacuum_delay_point()s. That way we'd never do\n> > it for vacuums already going at full speed.\n>\n> There are already samples of this in code, for example the decision to\n> force-start disabled autovacuum is considered after every 64k transactions.\n>\n> There is a related item in https://commitfest.postgresql.org/32/2983/ .\n> When that gets done, we could drive the adjustments from autovacuum.c by\n> adding the remaining XID range adjustment to existing worker delay adjust\n> mechanisms in autovac_balance_cost() and signalling the autovacuum\n> backend to run the adjustment every few seconds once we are in the danger\n> zone.\n>\n\nThat patch certainly looks interesting; many many times I've had to\nhave people kick off manual vacuums to use more i/o and kill the\nwrap-around vacuum. Reading the discussion there, I wonder if we\nshould think about weighting the most urgent vacuum at the expense of\nother potential autovacuums, although I feel like they often come in\nbunches in these scenarios.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:36:35 -0500",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
},
{
"msg_contents": "On 2021-Mar-02, David Rowley wrote:\n\n> However, I wonder if it's worth going a few steps further to try and\n> reduce the chances of that message being seen in the first place.\n> Maybe it's worth considering ditching any (auto)vacuum cost limits for\n> any table which is within X transaction from wrapping around.\n> Likewise for \"VACUUM;\" when the database's datfrozenxid is getting\n> dangerously high.\n\nYeah, I like this kind of approach, and I think one change we can do\nthat can have a very large effect is to disable index cleanup when in\nemergency situations. That way, the XID limit is advanced as much as\npossible with as little effort as possible; once the system is back in\nnormal conditions, indexes can be cleaned up at a leisurely pace.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"El destino baraja y nosotros jugamos\" (A. Schopenhauer)\n\n\n",
"msg_date": "Fri, 5 Mar 2021 15:45:41 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: We should stop telling users to \"vacuum that database in\n single-user mode\""
}
] |
[
{
"msg_contents": "Hi,\n\nI suggest adding a new function, regexp_positions(),\nwhich works exactly like regexp_matches(),\nexcept it returns int4range[] start/end positions for *where* the matches occurs.\n\nI first thought I could live without this function,\nand just get the positions using strpos(),\nbut as Andreas Karlsson kindly helped me understand,\nthat naive idea doesn't always work.\n\nAndreas provided this pedagogic example\nto demonstrate the problem:\n\nSELECT regexp_matches('caaabaaa', '(?<=b)(a+)', 'g');\nregexp_matches\n----------------\n{aaa}\n(1 row)\n\nIf we would try to use strpos() to find the position,\nbased on the returned matched substring,\nwe would get the wrong answer:\n\nSELECT strpos('caaabaaa','aaa');\nstrpos\n--------\n 2\n(1 row)\n\nSure, there is \"aaa\" at position 2,\nbut that's not where the match occurred,\nsince the (?<=b) means \"positive lookbehind of the character b\",\nso the match actually occurred at position 6,\nwhere there is a \"b\" before the \"aaa\".\n\nUsing regexp_positions(), we can now get the correct answer:\n\nSELECT regexp_positions('caaabaaa', '(?<=b)(a+)', 'g');\nregexp_positions\n------------------\n{\"[6,9)\"}\n(1 row)\n\nSome more examples from the regress/sql/strings.sql,\nshowing both regexp_matches() and regexp_positions()\nfor the same examples, as they both return the same structure,\nbut with different types:\n\nSELECT regexp_matches('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g');\nregexp_matches\n----------------\n{bar,beque}\n{bazil,barf}\n(2 rows)\n\nSELECT regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g');\n regexp_positions\n-----------------------\n{\"[4,7)\",\"[7,12)\"}\n{\"[12,17)\",\"[17,21)\"}\n(2 rows)\n\nI've added documentation and tests.\n\nForgive me for just sending a patch without much discussion on the list,\nbut it was so easy to implement, so I thought an implementation can\nhelp the discussion on if this is something we want or not.\n\nA few 
words on the implementation:\nI copied build_regexp_match_result() to a new function build_regexp_positions_result(),\nand removed the string parts, replacing them with code to make ranges instead.\nMaybe there are common parts that could be put into some helper-function,\nbut I think not, since the functions are too small for it to be worth it.\n\nThanks to David Fetter for the idea on using ranges.\n\nBased on HEAD (f5a5773a9dc4185414fe538525e20d8512c2ba35).\n\n/Joel",
"msg_date": "Mon, 01 Mar 2021 18:38:18 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_flags_?=\n =?UTF-8?Q?text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\n\n> On Mar 1, 2021, at 9:38 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> Forgive me for just sending a patch without much discussion on the list,\n> but it was so easy to implement, so I thought an implementation can\n> help the discussion on if this is something we want or not.\n\nI like the idea so I did a bit of testing. I think the following should not error, but does:\n\n+SELECT regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');\n+ERROR: range lower bound must be less than or equal to range upper bound\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 16:12:21 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 01:12, Mark Dilger wrote:\n> I like the idea so I did a bit of testing. I think the following should not error, but does:\n>\n> +SELECT regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');\n> +ERROR: range lower bound must be less than or equal to range upper bound\n\nThanks for testing!\n\nThe bug is due to using an (inclusive,inclusive) range,\nso for the example the code tried to construct a (7,6,'[]') range.\n\nWhen trying to fix, I think I've found a general problem with ranges:\n\nI'll use int4range() to demonstrate the problem:\n\nFirst the expected error for what the patch tries to do internally using make_range().\nThis is all good:\n\n# SELECT int4range(7,6,'[]');\nERROR: range lower bound must be less than or equal to range upper bound\n\nI tried to fix this like this:\n\n@ src/backend/utils/adt/regexp.c\n- lower.val = Int32GetDatum(so + 1);\n+ lower.val = Int32GetDatum(so);\n lower.infinite = false;\n- lower.inclusive = true;\n+ lower.inclusive = false;\n lower.lower = true;\n\nWhich would give the same result as doing:\n\nSELECT int4range(6,6,'(]');\nint4range\n-----------\nempty\n(1 row)\n\nHmm. This \"empty\" value was a surprise to me.\nI would instead have assumed the canonical form \"[7,7)\".\n\nIf I wanted to know if the range is empty or not,\nI would have guessed I should use the isempty() function, like this:\n\nSELECT isempty(int4range(6,6,'(]'));\nisempty\n---------\nt\n(1 row)\n\nSince we have this isempty() function, I don't see the point of discarding\nthe lower/upper vals, since they contain possibly interesting information\non where the empty range exists.\n\nI find it strange two ranges of zero-length with different bounds are considered equal:\n\nSELECT '[7,7)'::int4range = '[8,8)'::int4range;\n?column?\n----------\nt\n(1 row)\n\nThis seems like a bug to me. 
What am I missing here?\n\nUnless fixed, then the way I see it, I don't think we can use int4range[] for regexp_positions(),\nif we want to allow returning the positions for zero-length matches, which would be nice.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 06:05:29 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Tue, 2 Mar 2021 at 00:06, Joel Jacobson <joel@compiler.org> wrote:\n\n> I find it strange two ranges of zero-length with different bounds are\n> considered equal:\n>\n> SELECT '[7,7)'::int4range = '[8,8)'::int4range;\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> This seems like a bug to me. What am I missing here?\n>\n> Unless fixed, then the way I see it, I don't think we can use int4range[]\n> for regexp_positions(),\n> if we want to allow returning the positions for zero-length matches, which\n> would be nice.\n>\n\nRanges are treated as sets. As such equality is defined by membership.\n\nThat being said, I agree that there may be situations in which it would be\nconvenient to have empty ranges at specific locations. Doing this would\nintroduce numerous questions which would have to be resolved. For example,\nwhere/when is the empty range resulting from an intersection operation?",
"msg_date": "Tue, 2 Mar 2021 00:22:44 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Unless fixed, then the way I see it, I don't think we can use int4range[] for regexp_positions(),\n\nYeah. It's a cute idea, but the semantics aren't quite right.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Mar 2021 00:31:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 06:22, Isaac Morland wrote:\n> On Tue, 2 Mar 2021 at 00:06, Joel Jacobson <joel@compiler.org> wrote:\n>> I find it strange two ranges of zero-length with different bounds are considered equal:\n>> \n>> SELECT '[7,7)'::int4range = '[8,8)'::int4range;\n>> ?column?\n>> ----------\n>> t\n>> (1 row)\n>> \n>> This seems like a bug to me. What am I missing here?\n>> \n>> Unless fixed, then the way I see it, I don't think we can use int4range[] for regexp_positions(),\n>> if we want to allow returning the positions for zero-length matches, which would be nice.\n> \n> Ranges are treated as sets. As such equality is defined by membership.\n> \n> That being said, I agree that there may be situations in which it would be convenient to have empty ranges at specific locations. Doing this would introduce numerous questions which would have to be resolved. For example, where/when is the empty range resulting from an intersection operation?\n\nHmm, I think I would assume the intersection of two non-overlapping ranges to be isempty()=TRUE,\nand for lower() and upper() to continue to return NULL.\n\nBut I think a zero-length range created with actual bounds should\nreturn the lower() and upper() values during creation, instead of NULL.\n\nI tried to find some other programming environments with range types.\n\nThe first one I found was Ada.\n\nThe below example is similar to int4range(7,6,'[]') which is invalid in PostgreSQL:\n\nwith Ada.Text_IO; use Ada.Text_IO;\nprocedure Hello is\n type Foo is range 7 .. 6;\nbegin\n Put_Line ( Foo'Image(Foo'First) );\n Put_Line ( Foo'Image(Foo'Last) );\nend Hello;\n\n$ ./gnatmake hello.adb\n$ ./hello\n7\n6\n\nI Ada, the 'Range of the Empty_String is 1 .. 
0\nhttps://en.wikibooks.org/wiki/Ada_Programming/Types/array#Array_Attributes\n\nI think there is a case for allowing access to the lower/upper vals instead of returning NULL,\nsince we can do so without changing what isempty() would return for the same values.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 06:52:07 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 06:31, Tom Lane wrote:\n>\"Joel Jacobson\" <joel@compiler.org> writes:\n>> Unless fixed, then the way I see it, I don't think we can use int4range[] for regexp_positions(),\n>\n>Yeah. It's a cute idea, but the semantics aren't quite right.\n\nI think there is a case to allow creating empty ranges *with* bounds information, e.g. '[6,7)'::int4range,\nas well as the current only possibility to create empty ranges *without* bounds information, e.g. 'empty'::int4range\n\nI've had a look at how ranges are implemented,\nand I think I've found a way to support this in a simple non-invasive way.\n\nI've outlined the idea in a patch, which I will send separately,\nas it's a different feature, possibly useful for purposes other than regexp_positions().\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 13:41:09 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Tue, 2 Mar 2021 at 00:52, Joel Jacobson <joel@compiler.org> wrote:\n\n> Ranges are treated as sets. As such equality is defined by membership.\n>\n> That being said, I agree that there may be situations in which it would be\n> convenient to have empty ranges at specific locations. Doing this would\n> introduce numerous questions which would have to be resolved. For example,\n> where/when is the empty range resulting from an intersection operation?\n>\n>\n> Hmm, I think I would assume the intersection of two non-overlapping ranges\n> to be isempty()=TRUE,\n> and for lower() and upper() to continue to return NULL.\n>\n> But I think a zero-length range created with actual bounds should\n> return the lower() and upper() values during creation, instead of NULL.\n>\n> I tried to find some other programming environments with range types.\n>\n> The first one I found was Ada.\n>\n\nInteresting!\n\nArray indices are a bit different than general ranges however.\n\nOne question I would have is whether empty ranges are all equal to each\nother. If they are, you have an equality that isn’t really equality; if\nthey aren’t then you would have ranges that are unequal even though they\nhave exactly the same membership. Although I suppose this is already true\nfor some types where ends can be specified as open or closed but end up\nwith the same end element; many range types canonicalize to avoid this but\nI don’t think they all do.\n\nReturning to the RE result issue, I wonder how much it actually matters\nwhere any empty matches are. Certainly the actual contents of the match\ndon’t matter; you don’t need to be able to index into the string to extract\nthe substring. The only scenario I can see where it could matter is if the\nRE is using lookahead or look back to find occurrences before or after\nsomething else. 
If we stipulate that the result array will be in order,\nthen you still don’t have the exact location of empty matches but you do at\nleast have where they are relative to non-empty matches.",
"msg_date": "Tue, 2 Mar 2021 08:34:56 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "Hi Isaac,\n\nMany thanks for the comments.\n\nOn Tue, Mar 2, 2021, at 14:34, Isaac Morland wrote:\n> One question I would have is whether empty ranges are all equal to each other. If they are, you have an equality that isn’t really equality; if they aren’t then you would have ranges that are unequal even though they have exactly the same membership. Although I suppose this is already true for some types where ends can be specified as open or closed but end up with the same end element; many range types canonicalize to avoid this but I don’t think they all do.\n\nI thought about this problem too. I don't think there is a perfect solution.\nLeaving things as they are is problematic too since it makes the range type useless for some use-cases.\nI've sent a patch in a separate thread with the least invasive idea I could come up with.\n \n> Returning to the RE result issue, I wonder how much it actually matters where any empty matches are. Certainly the actual contents of the match don’t matter; you don’t need to be able to index into the string to extract the substring. The only scenario I can see where it could matter is if the RE is using lookahead or look back to find occurrences before or after something else.\n\nHmm, I think it would be ugly to have corner-cases handled differently than the rest.\n\n> If we stipulate that the result array will be in order, then you still don’t have the exact location of empty matches but you do at least have where they are relative to non-empty matches.\n\nThis part I didn't fully understand. Can you please provide some example on this?\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 14:58:16 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Tue, 2 Mar 2021 at 08:58, Joel Jacobson <joel@compiler.org> wrote:\n\n> If we stipulate that the result array will be in order, then you still\n> don’t have the exact location of empty matches but you do at least have\n> where they are relative to non-empty matches.\n>\n>\n> This part I didn't fully understand. Can you please provide some example\n> on this?\n>\n\nSuppose the match results are:\n\n[4,8)\n[10,10)\n[13,16)\n[20,20)\n[24,24)\n\nThen this gets turned into:\n\n[4,8)\nempty\n[13,16)\nempty\nempty\n\nSo you know that there are non-empty matches from 4-8 and 13-16, plus an\nempty match between them and two empty matches at the end. Given that all\nempty strings are identical, I think it's only in pretty rare circumstances\nwhere you need to know exactly where the empty matches are; it would have\nto be a matter of identifying empty matches immediately before or after a\nspecific pattern; in which case I suspect it would usually be just as easy\nto match the pattern itself directly.\n\nDoes this help?",
"msg_date": "Tue, 2 Mar 2021 09:05:34 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
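Isaac's collapsing rule above can be sketched in a few lines of Python. This is an illustrative model of the proposal only, using the span list from his example; it is not code from the thread's patch:

```python
# Model of Isaac's proposal: keep exact bounds for non-empty matches,
# collapse zero-length matches to an order-preserving "empty" marker.
spans = [(4, 8), (10, 10), (13, 16), (20, 20), (24, 24)]

collapsed = ["empty" if start == end else f"[{start},{end})"
             for start, end in spans]
print(collapsed)  # ['[4,8)', 'empty', '[13,16)', 'empty', 'empty']
```

The relative order of the empty markers survives, which is the property Isaac argues is usually sufficient.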
{
"msg_contents": "On Tue, Mar 2, 2021, at 15:05, Isaac Morland wrote:\n> Suppose the match results are:\n> \n> [4,8)\n> [10,10)\n> [13,16)\n> [20,20)\n> [24,24)\n> \n> Then this gets turned into:\n> \n> [4,8)\n> empty\n> [13,16)\n> empty\n> empty\n> \n> So you know that there are non-empty matches from 4-8 and 13-16, plus an empty match between them and two empty matches at the end. Given that all empty strings are identical, I think it's only in pretty rare circumstances where you need to know exactly where the empty matches are; it would have to be a matter of identifying empty matches immediately before or after a specific pattern; in which case I suspect it would usually be just as easy to match the pattern itself directly.\n> \n> Does this help?\n\nThanks, I see what you mean now.\n\nI agree it's probably a corner-case,\nbut I think I would still prefer a complete solution by just returning setof two integer[] values,\ninstead of the cuter-but-only-partial solution of using the existing int4range[].\n\nEven better would be if we could fix the range type so it could actually be used in this and other similar situations.\n\nIf so, then we could do:\n\nSELECT r FROM regexp_positions('caaabaaabeee','(?<=b)a+','g') AS r;\n r\n-----------\n{\"[6,9)\"}\n(1 row)\n\nSELECT r FROM regexp_positions('caaabaaabeee','(?<=b)','g') AS r;\n r\n---------\n{empty}\n{empty}\n(2 rows)\n\nSELECT lower(r[1]), upper(r[1]) FROM regexp_positions('caaabaaabeee','(?<=b)','g') AS r;\nlower | upper\n-------+-------\n 5 | 5\n 9 | 9\n(2 rows)\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 15:21:19 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
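The lower()/upper() behaviour Joel wants for zero-length matches mirrors what Python's `re` module already does: `re.finditer` reports each empty match with its exact position. A minimal sketch using the same string and lookbehind as the example above (0-based positions, which happen to agree with the 5 and 9 shown):

```python
import re

# The lookbehind (?<=b) is zero-width: each match has start == end,
# but its position is still reported exactly via span().
empty_matches = [m.span() for m in re.finditer(r"(?<=b)", "caaabaaabeee")]
print(empty_matches)  # [(5, 5), (9, 9)]
```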
{
"msg_contents": "\n\n> On Mar 2, 2021, at 5:34 AM, Isaac Morland <isaac.morland@gmail.com> wrote:\n> \n> Returning to the RE result issue, I wonder how much it actually matters where any empty matches are. Certainly the actual contents of the match don’t matter; you don’t need to be able to index into the string to extract the substring. The only scenario I can see where it could matter is if the RE is using lookahead or look back to find occurrences before or after something else. If we stipulate that the result array will be in order, then you still don’t have the exact location of empty matches but you do at least have where they are relative to non-empty matches.\n\nI agree the contents of the match don't matter, because they are always empty. But the position matters. You could intend to split a string in multiple places using lookaheads and lookbehinds to determine the split points.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 06:59:52 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
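Mark's splitting use case can be made concrete with Python's `re.split`, which (since Python 3.7) accepts zero-width lookaround patterns; every position where the empty pattern matches becomes a cut point. An illustrative sketch, not code from the thread:

```python
import re

# Split after every "b" without consuming it: the zero-width lookbehind
# matches at positions 5 and 9, which become the split points.
parts = re.split(r"(?<=b)", "caaabaaabeee")
print(parts)  # ['caaab', 'aaab', 'eee']
```

This only works because the exact position of each empty match is known, which is Mark's point: collapsing them to bare "empty" markers would lose the split points.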
{
"msg_contents": "On Tue, Mar 2, 2021, at 06:31, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > Unless fixed, then the way I see it, I don't think we can use int4range[] for regexp_positions(),\n> \n> Yeah. It's a cute idea, but the semantics aren't quite right.\n\nHaving abandoned the cute idea that didn't work,\nhere comes a new patch with a regexp_positions() instead returning\nsetof record (start_pos integer[], end_pos integer[]).\n\nExample:\n\nSELECT * FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g');\nstart_pos | end_pos\n-----------+---------\n{3,6} | {6,11}\n{11,16} | {16,20}\n(2 rows)\n\nBased on HEAD (040af779382e8e4797242c49b93a5a8f9b79c370).\n\nI've updated docs and tests.\n\n/Joel",
"msg_date": "Thu, 04 Mar 2021 14:21:02 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Having abandoned the cute idea that didn't work,\n> here comes a new patch with a regexp_positions() instead returning\n> setof record (start_pos integer[], end_pos integer[]).\n\nI wonder if a 2-D integer array wouldn't be a better idea,\nie {{startpos1,length1},{startpos2,length2},...}. My experience\nwith working with parallel arrays in SQL has been unpleasant.\n\nAlso, did you see\n\nhttps://www.postgresql.org/message-id/fc160ee0-c843-b024-29bb-97b5da61971f%40darold.net\n\nSeems like there may be some overlap in these proposals.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 10:40:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On 03/04/21 10:40, Tom Lane wrote:\n> Also, did you see\n> \n> https://www.postgresql.org/message-id/fc160ee0-c843-b024-29bb-97b5da61971f%40darold.net\n> \n> Seems like there may be some overlap in these proposals.\n\nNot only that, the functions in that other proposal are very similar\nto the standard's own functions that are specified to use XML Query\nregular expression syntax (sample implementations in [1]).\n\nThese differently-named (which is good) functions seem to be a de facto\nstandard where the regexp syntax and semantics are those native to the\nDBMS, the correspondence being\n\n de facto ISO XQuery-based\n -------------- ------------------\n regexp_like like_regex\n regexp_count occurrences_regex\n regexp_instr position_regex\n regexp_substr substring_regex\n regexp_replace translate_regex\n\n\nThe regexp_positions proposal highlights an interesting apparent gap in\nboth the de facto and the ISO specs: the provided functions allow you\nto specify which occurrence you're talking about, and get the corresponding\npositions or the corresponding substring, but neither set of functions\nincludes one to just give you all the matching positions at once as\na SETOF something.\n\nWhat the proposed regexp_positions() returns is pretty much exactly\nthe notional \"list of match vectors\" that appears internally throughout\nthe specs of the ISO functions, but is never directly exposed.\n\nIn the LOMV as described in the standard, the position/length arrays\nare indexed from zero, and the start and length at index 0 are those\nfor the overall match as a whole.\n\nRight now, if you have a query that involves, say,\n\n substring_regex('(b[^b]+)(b[^b]+)' IN str GROUP 1) and also\n substring_regex('(b[^b]+)(b[^b]+)' IN str GROUP 2),\n\na naïve implementation like [1] will of course compile and evaluate\nthe regexp twice and return one group each time.
It makes me wonder\nwhether the standards committee was picturing a clever parse analyzer\nand planner that would say \"aha! you want group 1 and group 2 from\na single evaluation of this regex!\", and that might even explain the\ncurious rule in the standard that the regex must be an actual literal,\nnot any other expression. (Still, that strikes me as an awkward way to\nhave to write it, spelling the regex out as a literal, twice.)\n\nIt has also made my idly wonder how close we could get to behaving\nthat way, perhaps with planner support functions and other available\nparse analysis/planning hooks. Would any of those mechanisms get a\nsufficiently global view of the query to do that kind of rewriting?\n\nRegards,\n-Chap\n\n\n[1]\nhttps://tada.github.io/pljava/pljava-examples/apidocs/org/postgresql/pljava/example/saxon/S9.html#method.summary\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:41:10 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_[PATCH]_regexp=5fpositions_=28_string_text=2c_pattern?=\n =?UTF-8?Q?_text=2c_flags_text_=29_=e2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > Having abandoned the cute idea that didn't work,\n> > here comes a new patch with a regexp_positions() instead returning\n> > setof record (start_pos integer[], end_pos integer[]).\n> \n> I wonder if a 2-D integer array wouldn't be a better idea,\n> ie {{startpos1,length1},{startpos2,length2},...}. My experience\n> with working with parallel arrays in SQL has been unpleasant.\n\nI considered it, but I prefer two separate simple arrays for two reasons:\n\na) more pedagogic, it's at least then obvious what values are start and end positions,\nthen you only have to understand what the values mean.\n\nb) 2-D arrays don't work well with unnest().\nIf you would unnest() the 2-D array you couldn't separate the start positions from the end positions,\nwhereas with two separate, you could do:\n\nSELECT unnest(start_pos) AS start_pos, unnest(end_pos) AS end_pos FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g');\nstart_pos | end_pos\n-----------+---------\n 3 | 6\n 6 | 11\n 11 | 16\n 16 | 20\n(4 rows)\n\nCan you give some details on your unpleasant experiences with parallel arrays?\n\n\n> \n> Also, did you see\n> \n> https://www.postgresql.org/message-id/fc160ee0-c843-b024-29bb-97b5da61971f%40darold.net\n> \n> \n> Seems like there may be some overlap in these proposals.\n\nYes, I saw it, it was sent shortly after my proposal, so I couldn't take it into account.\nSeems useful, except regexp_instr() seems redundant, I would rather have regexp_positions(),\nbut maybe regexp_instr() should also be added for compatibility reasons.\n\n/Joel",
"msg_date": "Thu, 04 Mar 2021 17:53:42 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
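The parallel-arrays shape Joel proposes, and the positional "unnesting" that pairs the two arrays back up, can be modelled in Python using the same string and pattern as his example. This is an illustrative sketch only; the numbers agree with the values shown above because both count from the start of the string:

```python
import re

# One row per overall match of the pattern: the two parallel lists hold
# the start and end of each capture group, like (start_pos, end_pos).
rows = [([m.start(1), m.start(2)], [m.end(1), m.end(2)])
        for m in re.finditer(r"(b[^b]+)(b[^b]+)", "foobarbequebazilbarfbonk")]
print(rows)  # [([3, 6], [6, 11]), ([11, 16], [16, 20])]

# unnest(start_pos), unnest(end_pos) pairs the arrays positionally:
flat = [(s, e) for starts, ends in rows for s, e in zip(starts, ends)]
print(flat)  # [(3, 6), (6, 11), (11, 16), (16, 20)]
```

The second step shows why the parallel form still composes: pairing by position recovers the (start, end) tuples without needing a 2-D array.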
{
"msg_contents": "On 04/03/2021 at 16:40, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n>> Having abandoned the cute idea that didn't work,\n>> here comes a new patch with a regexp_positions() instead returning\n>> setof record (start_pos integer[], end_pos integer[]).\n> I wonder if a 2-D integer array wouldn't be a better idea,\n> ie {{startpos1,length1},{startpos2,length2},...}. My experience\n> with working with parallel arrays in SQL has been unpleasant.\n>\n> Also, did you see\n>\n> https://www.postgresql.org/message-id/fc160ee0-c843-b024-29bb-97b5da61971f%40darold.net\n>\n> Seems like there may be some overlap in these proposals.\n\n\nThe object of regexp_position() is to return all start+end of captured \nsubstrings; it overlaps a little with regexp_instr() in the way that \nthat function returns the start or end position of a specific captured \nsubstring. I think it is a good idea to have a function that returns all \npositions instead of a single one like regexp_instr(); this is not the \nsame usage. Actually regexp_position() is exactly the same as \nregexp_matches() except that it returns positions instead of substrings.\n\n\nI also think that it should return a setof 2-D integer array; another \nsolution is to return all start/end positions of an occurrence chained \nin an integer array {start1,end1,start2,end2,..}.\n\n\nRegards,\n\n-- \nGilles Darold\n\n",
"msg_date": "Thu, 4 Mar 2021 17:55:20 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_=5bPATCH=5d_regexp=5fpositions_=28_string_text=2c_p?=\n =?UTF-8?Q?attern_text=2c_flags_text_=29_=e2=86=92_setof_int4range=5b=5d?="
},
{
"msg_contents": "On Thu, Mar 4, 2021, at 17:55, Gilles Darold wrote:\n> I also think that it should return a setof 2-D integer array; another \n> solution is to return all start/end positions of an occurrence chained \n> in an integer array {start1,end1,start2,end2,..}.\n\nHmm. Seems like we've in total managed to come up with three flawed ideas.\n\nPros/cons I see:\n\nIdea #1: setof 2-D integer array\n+ Packs the values into one single value.\n- Difficult to work with 2-D arrays, doesn't work well with unnest(), has to inspect the dims and use for loops to extract values.\n- Looking at a 2-D value, it's not obvious what the integer values mean. Which one is \"startpos\" and do we have \"length\" or \"endpos\" values?\n\nIdea #2: setof (start_pos integer[], end_pos integer[])\n+ It's obvious to the user what type of integers \"start_pos\" and \"end_pos\" contain.\n- Decouples the values into two separate values.\n- Tom mentioned some bad experiences with separate array values. (Details on this would be interesting.)\n\nIdea #3: chained integer array {start1,end1,start2,end2,..}\n- Mixes different values into the same value\n- Requires maths (although simple calculations) to extract values\n\nI think all three ideas (including mine) are ugly.
None of them is wart free.\n\nIdea #4: add a new composite built-in type.\n\nA simple composite type with two int8 fields.\n\nThe field names seem to vary a lot between languages:\n\nRust: \"start\", \"end\" [1]\nC++: \"begin\", \"end\" [2]\nPython: \"start\", \"stop\" [3]\n\nSuch a simple composite type could then always be used\nwhen you want to represent simple integer ranges\nbetween two exact values, arguably a very common need.\n\nSuch a type could be converted to/from int8range,\nbut would have easily accessible field names,\nwhich is simpler than using lower() and upper(),\nsince upper() always returns the canonical\nexclusive upper bound for discrete types,\nwhich is not usually what you want when\ndealing with \"start\" and \"end\" integer ranges.\n\nSince there is no type named just \"range\", why not just use this name?\n\nSince \"end\" is a keyword, I suggest the \"stop\" name:\n\nPoC:\n\nCREATE TYPE range AS (start int8, stop int8);\n\nA real implementation would of course also verify CHECK (start <= stop),\nand would add conversions to/from int8range.\n\nI realise this is probably a controversial idea.\nBut, I think this is a general common problem that deserves a clean general solution.\n\nThoughts? More ideas?\n\n[1] https://doc.rust-lang.org/std/ops/struct.Range.html\n[2] https://en.cppreference.com/w/cpp/ranges\n[3] https://www.w3schools.com/python/ref_func_range.asp",
"msg_date": "Fri, 05 Mar 2021 11:37:35 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "Idea #5:\n\nAllow disabling canonicalization via optional parameter to range constructor functions.\n\nThis would then allow using the range type,\nto create inclusive/inclusive integer ranges,\nwhere lower() and upper() would return what you expect.\n\n/Joel",
"msg_date": "Fri, 05 Mar 2021 13:42:19 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "Hi\n\nOn Fri, 5 Mar 2021 at 13:44, Joel Jacobson <joel@compiler.org> wrote:\n\n> Idea #5:\n>\n> Allow disabling canonicalization via optional parameter to range\n> constructor functions.\n>\n\nI think the rules describing ranges and multiranges are long enough, so\nincreasing their functionality doesn't look like a practical idea.\n\nI prefer a special simple composite type like you described in the previous\nemail (start, stop) or (start, length). It can be used in more cases where\nusing a range or multirange is not practical.\n\nThe composite types are more natural for this purpose than 2-D arrays.\n\nRegards\n\nPavel\n\n\n> This would then allow using the range type,\n> to create inclusive/inclusive integer ranges,\n> where lower() and upper() would return what you expect.\n>\n> /Joel\n>",
"msg_date": "Fri, 5 Mar 2021 13:57:43 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On 3/4/21 4:40 PM, Tom Lane wrote:\n> I wonder if a 2-D integer array wouldn't be a better idea,\n> ie {{startpos1,length1},{startpos2,length2},...}. My experience\n> with working with parallel arrays in SQL has been unpleasant.\n\nHm, I can see your point but on the other hand I can't say my \nexperiences working with 2-D arrays have been that pleasant either. The \nmain issue being how there is no simple way to unnest just one dimension \nof the array. Maybe it would be worth considering implementing a \nfunction for that.\n\nAs far as I know to unnest just one dimension you would need to use \ngenerate_series() or something like the query below. Please correct me \nif I am wrong and there is some more ergonomic way to do it.\n\nWITH d (a) AS (SELECT '{{2,3},{4,5}}'::int[])\nSELECT array_agg(unnest) FROM d, unnest(a) WITH ORDINALITY GROUP BY \n(ordinality - 1) / array_length(a, 2);\n\nAndreas\n\n\n",
"msg_date": "Fri, 5 Mar 2021 16:19:44 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3a_=5bPATCH=5d_regexp=5fpositions_=28_string_text=2c_p?=\n =?UTF-8?Q?attern_text=2c_flags_text_=29_=e2=86=92_setof_int4range=5b=5d?="
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 01:12, Mark Dilger wrote:\n> I like the idea so I did a bit of testing. I think the following should not error, but does:\n> \n> +SELECT regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');\n> +ERROR: range lower bound must be less than or equal to range upper bound\n\nDoh! How stupid of me. I realize now I had an off-by-one thinko in my 0001 patch using int4range.\n\nI didn't use the raw \"so\" and \"eo\" values in regexp.c like I should have;\ninstead, I incorrectly used (so + 1) as the startpos,\nand just eo as the endpos.\n\nThis is what caused all the problems.\n\nThe fix is simple:\n- lower.val = Int32GetDatum(so + 1);\n+ lower.val = Int32GetDatum(so);\n\nThe example that gave the error now works properly:\n\nSELECT regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');\nregexp_positions\n------------------\n{\"[6,7)\"}\n(1 row)\n\nI've also created a SQL PoC of the composite range type idea,\nand convenience wrapper functions for int4range and int8range.\n\nCREATE TYPE range AS (start int8, stop int8);\n\nHelper functions:\nrange(start int8, stop int8) -> range\nrange(int8range) -> range\nrange(int4range) -> range\nrange(int8range[]) -> range[]\nrange(int4range[]) -> range[]\n\nDemo:\n\nregexp_positions() returns setof int4range[]:\n\nSELECT r FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS r;\n r\n-----------------------\n{\"[3,7)\",\"[6,12)\"}\n{\"[11,17)\",\"[16,21)\"}\n(2 rows)\n\nConvert int4range[] -> range[]:\n\nSELECT range(r) FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS r;\n range\n-----------------------\n{\"(3,6)\",\"(6,11)\"}\n{\"(11,16)\",\"(16,20)\"}\n(2 rows)\n\n\"start\" and \"stop\" fields:\n\nSELECT (range(r[1])).* FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS r;\nstart | stop\n-------+------\n 3 | 6\n 11 | 16\n(2 rows)\n\nzero-length match at beginning:\n\nSELECT r FROM
regexp_positions('','^','g') AS r;\n r\n-----------\n{\"[0,1)\"}\n(1 row)\n\nSELECT (range(r[1])).* FROM regexp_positions('','^','g') AS r;\nstart | stop\n-------+------\n 0 | 0\n(1 row)\n\nMy conclusion is that we should use setof int4range[] as the return value for regexp_positions().\n\nNew patch attached.\n\nThe composite range type and helper functions are of course not at all necessary,\nbut I think they would be a nice addition, to make it easier to work with ranges\nfor composite types. I intentionally didn't create anyrange versions of them,\nsince they can only support composite types,\nsince they don't require the inclusive/exclusive semantics.\n\n/Joel",
"msg_date": "Fri, 05 Mar 2021 20:46:24 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Fri, Mar 5, 2021, at 20:46, Joel Jacobson wrote:\n> My conclusion is that we should use setof int4range[] as the return value for regexp_positions().\n\nIf acceptable by the project, it be even nicer if we could just return the suggested composite type.\n\nI don't see any existing catalog functions returning composite types though?\nIs this due to some policy of not wanting composite types as return values for built-ins or just a coincidence?\n\nExample on regexp_positions -> setof range[] \nwhere range is:\n\nCREATE TYPE range AS (start int8, stop int8);\n\nSELECT regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');\nregexp_positions\n------------------\n{\"(6,6)\"}\n(1 row)\n\nSELECT r FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS r;\n r\n-----------------------\n{\"(3,6)\",\"(6,11)\"}\n{\"(11,16)\",\"(16,20)\"}\n(2 rows)\n\nSELECT r[1].*, r[2].* FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS r;\nstart | stop | start | stop\n-------+------+-------+------\n 3 | 6 | 6 | 11\n 11 | 16 | 16 | 20\n(2 rows)\n\nSELECT r[1].* FROM regexp_positions('','^','g') AS r;\nstart | stop\n-------+------\n 0 | 0\n(1 row)\n\nThoughts?\n\n/Joel\nOn Fri, Mar 5, 2021, at 20:46, Joel Jacobson wrote:My conclusion is that we should use setof int4range[] as the return value for regexp_positions().If acceptable by the project, it be even nicer if we could just return the suggested composite type.I don't see any existing catalog functions returning composite types though?Is this due to some policy of not wanting composite types as return values for built-ins or just a coincidence?Example on regexp_positions -> setof range[] where range is:CREATE TYPE range AS (start int8, stop int8);SELECT regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');regexp_positions------------------{\"(6,6)\"}(1 row)SELECT r FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS 
r; r-----------------------{\"(3,6)\",\"(6,11)\"}{\"(11,16)\",\"(16,20)\"}(2 rows)SELECT r[1].*, r[2].* FROM regexp_positions('foobarbequebazilbarfbonk', $re$(b[^b]+)(b[^b]+)$re$, 'g') AS r;start | stop | start | stop-------+------+-------+------ 3 | 6 | 6 | 11 11 | 16 | 16 | 20(2 rows)SELECT r[1].* FROM regexp_positions('','^','g') AS r;start | stop-------+------ 0 | 0(1 row)Thoughts?/Joel",
"msg_date": "Sat, 06 Mar 2021 04:25:22 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\n\n> On Mar 5, 2021, at 11:46 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> \n> /Joel\n> <range.sql><0003-regexp-positions.patch>\n\nI did a bit more testing:\n\n+SELECT regexp_positions('foobarbequebaz', 'b', 'g');\n+ regexp_positions \n+------------------\n+ {\"[3,5)\"}\n+ {\"[6,8)\"}\n+ {\"[11,13)\"}\n+(3 rows)\n+\n\nI understand that these ranges are intended to be read as one character long matches starting at positions 3, 6, and 11, but they look like they match either two or three characters, depending on how you read them, and users will likely be confused by that.\n\n+SELECT regexp_positions('foobarbequebaz', '(?=beque)', 'g');\n+ regexp_positions \n+------------------\n+ {\"[6,7)\"}\n+(1 row)\n+\n\nThis is a zero length match. As above, it might be confusing that a zero length match reads this way.\n\n+SELECT regexp_positions('foobarbequebaz', '(?<=z)', 'g');\n+ regexp_positions \n+------------------\n+ {\"[14,15)\"}\n+(1 row)\n+\n\nSame here, except this time position 15 is referenced, which is beyond the end of the string.\n\nI think a zero length match at the end of this string should read as {\"[14,14)\"}, and you have been forced to add one to avoid that collapsing down to \"empty\", but I'd rather you found a different datatype rather than abuse the definition of int4range.\n\nIt seems that you may have reached a similar conclusion down-thread?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 08:20:14 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 17:20, Mark Dilger wrote:\n> > On Mar 5, 2021, at 11:46 AM, Joel Jacobson <joel@compiler.org> wrote:\n> > <range.sql><0003-regexp-positions.patch>\n> \n> I did a bit more testing:\n> \n> +SELECT regexp_positions('foobarbequebaz', 'b', 'g');\n> + regexp_positions \n> +------------------\n> + {\"[3,5)\"}\n> + {\"[6,8)\"}\n> + {\"[11,13)\"}\n> +(3 rows)\n> +\n> \n> I understand that these ranges are intended to be read as one character long matches starting at positions 3, 6, and 11, but they look like they match either two or three characters, depending on how you read them, and users will likely be confused by that.\n> \n> +SELECT regexp_positions('foobarbequebaz', '(?=beque)', 'g');\n> + regexp_positions \n> +------------------\n> + {\"[6,7)\"}\n> +(1 row)\n> +\n> \n> This is a zero length match. As above, it might be confusing that a zero length match reads this way.\n> \n> +SELECT regexp_positions('foobarbequebaz', '(?<=z)', 'g');\n> + regexp_positions \n> +------------------\n> + {\"[14,15)\"}\n> +(1 row)\n> +\n> \n> Same here, except this time position 15 is referenced, which is beyond the end of the string.\n> \n> I think a zero length match at the end of this string should read as {\"[14,14)\"}, and you have been forced to add one to avoid that collapsing down to \"empty\", but I'd rather you found a different datatype rather than abuse the definition of int4range.\n> \n> It seems that you may have reached a similar conclusion down-thread?\n\nThis is due to the, in my opinion, unfortunate decision of using inclusive/exclusive as the canonical form for discrete types.\nProbably not much we can do about that, but that's what we have, so I think it's fine.\n\n[6,7) is exactly the same thing as [6,6] for discrete types, it simply means the startpos and endpos both are 6.\n\nI prefer to think of a match as two points. 
If the points are at the same position, it's a zero length match.\n\nIn the example, the startpos and endpos are both at 6, so it's a zero length match.\n\nThis was very confusing to me at first. I wrongly thought I needed an empty int4range and had the perverse idea of hacking the range type to allow setting lower and upper even though it was empty. This was a really really bad idea which I feel stupid of even considering. It was before I understood a zero length match should actually *not* be represented as an empty int4range, but as an int4range covering exactly one single integer, since startpos=endpos. This was due to my off-by-one error. With that fixed, the only problem is the (in my opinion) unnatural canonical form for discrete types, since in this context it's just silly to talk about inclusive/exclusive. I think inclusive/inclusive would have been much more SQL idiomatic, since that's the semantics for BETWEEN in SQL, it's inclusive/inclusive. So are most other programming environments I've seen.\n\nHowever, not much we can do about that for int4range/int8range,\nbut maybe multirange could change the canonical form going forward.\n\nEven if not changed, I think int4range works just fine. It just requires a bit more mental effort to understand what the values mean. Probably an annoyance for users at first, but I think they easily will understand they should just do \"-1\" on the upper() value (but only if upper_inc() is FALSE, but you know that for sure for int4ranges, so is it really necessary, one might wonder).\n\nIf a N+1 dimension array could easily be unnested to a N dimension array,\nI would prefer Tom's idea of a 2-D regexp_positions(), since it simple and not controversial.\n\nSince there are currently zero composite type returning catalog functions, I can see why the idea of returning a \"range\" with two \"start\" and \"stop\" fields is controversial. 
There are probably good reasons that I fail to see why there are no composite type returning functions in the catalogs. Ideas on why this is the case, anyone?\n\n/Joel",
"msg_date": "Mon, 08 Mar 2021 18:05:15 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\n\n> On Mar 8, 2021, at 9:05 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> If a N+1 dimension array could easily be unnested to a N dimension array,\n> I would prefer Tom's idea of a 2-D regexp_positions(), since it simple and not controversial.\n\nHow about proposing some array functions to go along with the regexp_positions, and then do it that way?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 09:11:53 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 18:11, Mark Dilger wrote:\n> > On Mar 8, 2021, at 9:05 AM, Joel Jacobson <joel@compiler.org> wrote:\n> > \n> > If a N+1 dimension array could easily be unnested to a N dimension array,\n> > I would prefer Tom's idea of a 2-D regexp_positions(), since it simple and not controversial.\n> \n> How about proposing some array functions to go along with the regexp_positions, and then do it that way?\n\nSounds like a nice solution. That would be a huge win when dealing with multidimensional arrays in general.\n\nDo we have strong support on the list for such a function? If so, I can make an attempt implementing it, unless some more experienced hacker wants to do it.\n\n/Joel\n\nOn Mon, Mar 8, 2021, at 18:11, Mark Dilger wrote:> On Mar 8, 2021, at 9:05 AM, Joel Jacobson <joel@compiler.org> wrote:> > If a N+1 dimension array could easily be unnested to a N dimension array,> I would prefer Tom's idea of a 2-D regexp_positions(), since it simple and not controversial.How about proposing some array functions to go along with the regexp_positions, and then do it that way?Sounds like a nice solution. That would be a huge win when dealing with multidimensional arrays in general.Do we have strong support on the list for such a function? If so, I can make an attempt implementing it, unless some more experienced hacker wants to do it./Joel",
"msg_date": "Mon, 08 Mar 2021 18:20:02 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> I prefer to think of a match as two points. If the points are at the same position, it's a zero length match.\n\nFWIW, I personally think that returning a start position and a length\nwould be the most understandable way to operate. If you report start\nposition and end position then there is always going to be confusion\nover whether the end position is inclusive or exclusive (that is,\nsome code including our regex library thinks of the \"end\" as being\n\"first character after the match\"). This is indeed the same\ndefinitional issue you're contending with vis-a-vis range endpoints,\nonly now you lack any pre-existing definition that people might rely on\nto know what you meant.\n\n> Since there are currently zero composite type returning catalog functions, I can see why the idea of returning a \"range\" with two \"start\" and \"stop\" fields is controversial. There are probably good reasons that I fail to see why there are no composite type returning functions in the catalogs. Ideas on why this is the case, anyone?\n\nYeah: it's hard. The amount of catalog infrastructure needed by a\ncomposite type is dauntingly large, and genbki.pl doesn't offer any\nsupport for building composite types that aren't tied to catalogs.\n(I suppose if you don't mind hacking Perl, you could try to refactor\nit to improve that.) Up to now we've avoided the need for that,\nsince a function can be declared to return an anonymous record type\nby giving it some OUT parameters. However, if I'm understanding\nthings correctly \"regexp_positions(IN ..., OUT match_start integer,\nOUT match_length integer) RETURNS SETOF record\" wouldn't be enough\nfor you, because you really need a 2-D tableau of match data to\nhandle the case of multiple capturing parens plus 'g' mode. It\nseems like you need it to return setof array(s), so the choices are\narray of composite, 2-D array, or two parallel arrays. 
I'm not sure\nthe first of those is so much better than the others that it's worth\nthe pain involved to set up the initial catalog data that way.\n\nBTW, I don't know if you know the history here, but regexp_matches()\nis way older than regexp_match(); we eventually invented the latter\nbecause the former was just too hard to use for easy non-'g' cases.\nI'm inclined to think we should learn from that and provide equivalent\nvariants regexp_position[s] right off the bat.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Mar 2021 12:30:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On 03/08/21 12:30, Tom Lane wrote:\n> I'm inclined to think we should learn from that and provide equivalent\n> variants regexp_position[s] right off the bat.\n\nI think the s-free version is exactly the regexp_instr included in\nthe other concurrent proposal [1], which closely corresponds to the\nISO position_regex() except for the ISO one using XQuery regex syntax.\n\nI gather from [1] that the name regexp_instr is chosen in solidarity\nwith other DBMSs that de facto have it. Would it be weirder to have the\nsingular form be regexp_instr and the plural be regexp_positions?\nOr to diverge from the other systems' de facto convention and name\nthe singular form regexp_position? (Or the plural form regexp_instrs?\nThat sounds to me like a disassembler for regexps. Or regexps_instr,\nlike attorneys general? Never mind.)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 8 Mar 2021 13:29:11 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_[PATCH]_regexp=5fpositions_=28_string_text=2c_pattern?=\n =?UTF-8?Q?_text=2c_flags_text_=29_=e2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On 03/08/21 13:29, Chapman Flack wrote:\n> I think the s-free version is exactly the regexp_instr included in\n> the other concurrent proposal [1]\n\nsorry.\n\n[1]\nhttps://www.postgresql.org/message-id/fc160ee0-c843-b024-29bb-97b5da61971f%40darold.net\n\n\n",
"msg_date": "Mon, 8 Mar 2021 13:30:59 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re:_[PATCH]_regexp=5fpositions_=28_string_text=2c_pattern?=\n =?UTF-8?Q?_text=2c_flags_text_=29_=e2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\n\n> On Mar 8, 2021, at 9:20 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> On Mon, Mar 8, 2021, at 18:11, Mark Dilger wrote:\n>> > On Mar 8, 2021, at 9:05 AM, Joel Jacobson <joel@compiler.org> wrote:\n>> > \n>> > If a N+1 dimension array could easily be unnested to a N dimension array,\n>> > I would prefer Tom's idea of a 2-D regexp_positions(), since it simple and not controversial.\n>> \n>> How about proposing some array functions to go along with the regexp_positions, and then do it that way?\n> \n> Sounds like a nice solution. That would be a huge win when dealing with multidimensional arrays in general.\n> \n> Do we have strong support on the list for such a function? If so, I can make an attempt implementing it, unless some more experienced hacker wants to do it.\n\nThat's a hard question to answer in advance. Typically, you need to propose a solution, and then get feedback. You wouldn't need to post a patch, but perhaps some examples of how you would expect it to work, like\n\n+SELECT slice('{{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}}'::integer[][], '[2,4)'::int4range);\n+ slice \n+-----------\n+ {{2,3}}\n+ {{6,7}}\n+ {{10,11}}\n+ {{14,15}}\n+(4 rows)\n+\n+SELECT slice('{{{1,2,3},{4,5,6},{7,8,9}},{{10,11,12},{13,14,15},{16,17,18}}}'::integer[][][], '[2,4)'::int4range);\n+ slice \n+---------------------------\n+ {{{4,5,6},{7,8,9}}}\n+ {{{13,14,15},{16,17,18}}}\n+(2 rows)\n+\n\nand then people can tell you why they hate that choice of interface.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 10:32:43 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 18:30, Tom Lane wrote:\n> FWIW, I personally think that returning a start position and a length\n> would be the most understandable way to operate. \n\nVery good point. I agree. (And then ranges cannot be used, regardless of canonical form.)\n\n> Yeah: it's hard. The amount of catalog infrastructure needed by a\n> composite type is dauntingly large, and genbki.pl doesn't offer any\n> support for building composite types that aren't tied to catalogs.\n> (I suppose if you don't mind hacking Perl, you could try to refactor\n> it to improve that.) \n\nI haven't studied genbki.pl in detail, but seen its name on the list many times,\nmaybe I should go through it to understand it in detail.\n\nOn the topic of Perl.\nI've written a lot of Perl code over the years.\nTrustly was initially a Perl+PostgreSQL microservice project, with different components\nwritten in Perl run as daemons, communicating with each other over TCP/IP,\nvia JSON-RPC. We had lots of strange problems difficult to debug.\nIn the end, we moved all the business logics from Perl into database functions in PostgreSQL,\nand all problems went away. The biggest win was the nice UTF-8 support,\nwhich was really awkward in Perl. It's kind of UTF-8, but not really and not always.\n\nMost programming languages/compilers are obsessed\nwith the concept of \"bootstrapping\"/\"dogfooding\".\n\nThinking of PostgreSQL as a language/compiler, that would mean we should be obsessed with the idea\nof implementing PostgreSQL in SQL or PL/pgSQL. That would be quite a challenge of course.\n\nHowever, for certain tasks, when a high-level language is preferred,\nand when the raw performance of C isn't necessary, then maybe SQL/PLpgSQL\ncould be a serious alternative to Perl? 
\n\nIf I understand it correctly, we don't need to run genbki.pl to compile PostgreSQL,\nso someone wanting to compile PostgreSQL without having a running PostgreSQL-instance\ncould do so without problems.\n\nA dependency on having a PostgreSQL instance running,\nis perhaps acceptable for hackers developing PostgreSQL?\nBut of course not for normal users just wanting to compile PostgreSQL.\n\nIf we think there is at least a 1% chance this is a feasible idea,\nI'm willing to try implementing a SQL/PLpgSQL-version of genbki.pl.\nWould be a fun hack, but not if it's guaranteed time-waste.\n\n> It seems like you need it to return setof array(s), so the choices are\n> array of composite, 2-D array, or two parallel arrays. I'm not sure\n> the first of those is so much better than the others that it's worth\n> the pain involved to set up the initial catalog data that way.\n\nI agree, I like the 2-D array version, but only if we could provide a C-function\nto allow unnesting N+1 dims to N dims. Is that a fruitful idea, or are there\nreasons why it cannot be done easily? I could give it a try, if we think it's a good idea.\n\n> \n> BTW, I don't know if you know the history here, but regexp_matches()\n> is way older than regexp_match(); we eventually invented the latter\n> because the former was just too hard to use for easy non-'g' cases.\n> I'm inclined to think we should learn from that and provide equivalent\n> variants regexp_position[s] right off the bat.\n\nI remember! regexp_match() was a very welcomed addition.\nI agree both regexp_position[s] variants would be good for same reasons.\n\n/Joel",
"msg_date": "Mon, 08 Mar 2021 19:46:56 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 19:46, Joel Jacobson wrote:\n> However, for certain tasks, when a high-level language is preferred,\n> and when the raw performance of C isn't necessary, then maybe SQL/PLpgSQL\n> could be a serious alternative to Perl? \n\nBefore we had jsonb, this would have been totally unrealistic.\n\nBut with jsonb, I think we actually have complete coverage of Perl's data types:\n\nPerl array <=> jsonb array\nPerl hash <=> jsonb object\nPerl scalar <=> jsonb string/boolean/number\n\nI've been using jsonb with great success for code generation.\nASTs are nicely represented as nested jsonb arrays.\n\n/Joel\n\n\nOn Mon, Mar 8, 2021, at 19:46, Joel Jacobson wrote:However, for certain tasks, when a high-level language is preferred,and when the raw performance of C isn't necessary, then maybe SQL/PLpgSQLcould be a serious alternative to Perl? Before we had jsonb, this would have been totally unrealistic.But with jsonb, I think we actually have complete coverage of Perl's data types:Perl array <=> jsonb arrayPerl hash <=> jsonb objectPerl scalar <=> jsonb string/boolean/numberI've been using jsonb with great success for code generation.ASTs are nicely represented as nested jsonb arrays./Joel",
"msg_date": "Mon, 08 Mar 2021 20:41:03 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> If I understand it correctly, we don't need to run genbki.pl to compile PostgreSQL,\n> so someone wanting to compile PostgreSQL without having a running PostgreSQL-instance\n> could do so without problems.\n> A dependency on having a PostgreSQL instance running,\n> is perhaps acceptable for hackers developing PostgreSQL?\n> But of course not for normal users just wanting to compile PostgreSQL.\n> If we think there is at least a 1% chance this is a feasible idea,\n> I'm willing to try implementing a SQL/PLpgSQL-version of genbki.pl.\n\nNo, I think this is a non-starter. Bootstrapping from just the\ncontents of the git repo is something developers do all the time\n(and indeed the buildfarm does it in every run). We do not want to\nneed a running PG instance in advance of doing that.\n\nYeah, we could make it work if we started treating all the genbki\noutput files as things to include in the git repo, but I don't think\nanybody wants to go there.\n\nI understand some folks' distaste for Perl, and indeed I don't like it\nthat much myself. If we were starting over from scratch I'm sure\nwe'd choose a different language for our build/test infrastructure.\nBut that's where we are, and I would not be in favor of having more\nthan one scripting language as build requirements. So Perl is going\nto be it unless somebody gets ambitious enough to replace all the Perl\nscripts at once, which seems unlikely to happen.\n\n> I agree, I like the 2-D array version, but only if a we could provide a C-function\n> to allow unnesting N+1 dims to N dims. Is that a fruitful idea, or are there\n> reasons why it cannot be done easily? I could give it a try, if we think it's a good idea.\n\n+1, I think this need has come up before. My guess is that the\nhardest part of that will be choosing a function name that will\nsatisfy everybody ;-).\n\nCould there be any value in allowing unnesting a variable number\nof levels? 
If so, we could dodge the naming issue by inventing\n\"unnest(anyarray, int) returns anyarray\" where the second argument\nspecifies the number of subscript levels to remove, or perhaps\nthe number to keep.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Mar 2021 15:12:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "po 8. 3. 2021 v 21:12 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > If I understand it correctly, we don't need to run genbki.pl to compile\n> PostgreSQL,\n> > so someone wanting to compile PostgreSQL without having a running\n> PostgreSQL-instance\n> > could do so without problems.\n> > A dependency on having a PostgreSQL instance running,\n> > is perhaps acceptable for hackers developing PostgreSQL?\n> > But of course not for normal users just wanting to compile PostgreSQL.\n> > If we think there is at least a 1% chance this is a feasible idea,\n> > I'm willing to try implementing a SQL/PLpgSQL-version of genbki.pl.\n>\n> No, I think this is a non-starter. Bootstrapping from just the\n> contents of the git repo is something developers do all the time\n> (and indeed the buildfarm does it in every run). We do not want to\n> need a running PG instance in advance of doing that.\n>\n> Yeah, we could make it work if we started treating all the genbki\n> output files as things to include in the git repo, but I don't think\n> anybody wants to go there.\n>\n> I understand some folks' distaste for Perl, and indeed I don't like it\n> that much myself. If we were starting over from scratch I'm sure\n> we'd choose a different language for our build/test infrastructure.\n> But that's where we are, and I would not be in favor of having more\n> than one scripting language as build requirements. So Perl is going\n> to be it unless somebody gets ambitious enough to replace all the Perl\n> scripts at once, which seems unlikely to happen.\n>\n> > I agree, I like the 2-D array version, but only if a we could provide a\n> C-function\n> > to allow unnesting N+1 dims to N dims. Is that a fruitful idea, or are\n> there\n> > reasons why it cannot be done easily? I could give it a try, if we think\n> it's a good idea.\n>\n> +1, I think this need has come up before. 
My guess is that the\n> hardest part of that will be choosing a function name that will\n> satisfy everybody ;-).\n>\n> Could there be any value in allowing unnesting a variable number\n> of levels? If so, we could dodge the naming issue by inventing\n> \"unnest(anyarray, int) returns anyarray\" where the second argument\n> specifies the number of subscript levels to remove, or perhaps\n> the number to keep.\n>\n\nso what about?\n\nCREATE OR REPLACE FUNCTION unnest_slice(anyarray, int)\nRETURNS SETOF anyarray AS $$\nDECLARE r $1%type;\nBEGIN\n FOREACH r SLICE $2 IN ARRAY $1 --- now $2 should be constant\n LOOP\n RETURN NEXT r;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nRegards\n\nPavel\n\n regards, tom lane\n>\n>\n>",
"msg_date": "Mon, 8 Mar 2021 21:46:02 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_patt?=\n\t=?UTF-8?Q?ern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 21:46, Pavel Stehule wrote:\n> so what about?\n> \n> CREATE OR REPLACE FUNCTION unnest_slice(anyarray, int)\n> RETURNS SETOF anyarray AS $$\n> DECLARE r $1%type;\n> BEGIN\n> FOREACH r SLICE $2 IN ARRAY $1 --- now $2 should be constant\n> LOOP\n> RETURN NEXT r;\n> END LOOP;\n> END;\n> $$ LANGUAGE plpgsql;\n\nNot sure I understand. Is the suggestion to add \"SLICE\" as syntactic sugar in PL/pgSQL to invoke the proposed two-argument C-version of unnest()?\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 07:57:05 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "út 9. 3. 2021 v 7:57 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Mon, Mar 8, 2021, at 21:46, Pavel Stehule wrote:\n>\n> so what about?\n>\n> CREATE OR REPLACE FUNCTION unnest_slice(anyarray, int)\n> RETURNS SETOF anyarray AS $$\n> DECLARE r $1%type;\n> BEGIN\n> FOREACH r SLICE $2 IN ARRAY $1 --- now $2 should be constant\n> LOOP\n> RETURN NEXT r;\n> END LOOP;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n>\n> Not sure I understand. Is the suggestion to add \"SLICE\" as syntactic sugar\n> in PL/pgSQL to invoke the proposed two-argument C-version of unnest()?\n>\n\nthere are two ideas:\n\n1. the behaviour can be same like SLICE clause of FOREACH statement\n\n2. use unnest_slice as name - the function \"unnest\" is relatively rich\ntoday and using other overloading doesn't look too practical. But this is\njust an idea. I can imagine more forms of slicing or unnesting, so it can\nbe practical to use different names than just \"unnest\".\n\nPersonally I don't like too much using 2D arrays for this purpose. The\nqueries over this functionality will be harder to read (it is like fortran\n77). I understand so now, there is no other possibility, because pg cannot\nbuild array type from function signature. So it is harder to build an array\nof record types.\n\nWe can make an easy tuple store of records - like FUNCTION fx(OUT a int,\nOUT b int) RETURNS SETOF RECORD. But now, thanks to Tom and Amit's work,\nthe simple expression evaluation is significantly faster than SQL\nevaluation. So using any SRF function has performance impact. What I miss\nis the possibility to write functions like FUNCTION fx(OUT a int, OUT b\nint) RETURNS ARRAY. With this possibility is easy to write functions that\nyou need, and is not necessary to use 2d arrays. If the result of regexp\nfunctions will be arrays of records, then a new unnest function is not\nnecessary. So this is not a good direction. Instead of fixing core issues,\nwe design workarounds. 
There can be more wide usage of arrays of composites.\n\nRegards\n\nPavel\n\n\n> /Joel\n>\n>\n>",
"msg_date": "Tue, 9 Mar 2021 08:26:18 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 08:26, Pavel Stehule wrote:\n> there are two ideas:\n> \n> 1. the behaviour can be same like SLICE clause of FOREACH statement\n\nHm, I'm sorry I don't understand, is there an existing SLICE clause?\nI get syntax error in HEAD:\n\nERROR: syntax error at or near \"$2\"\nLINE 5: FOREACH r SLICE $2 IN ARRAY $1 --- now $2 should be consta...\n\nOr do you mean you suggest adding such a clause?\n\n> 2. use unnest_slice as name - the function \"unnest\" is relatively rich today and using other overloading doesn't look too practical.\n\nHm, rich in what way? There is currently only one version for arrays, and a different one for tsvector.\n\n> But this is just an idea. I can imagine more forms of slicing or unnesting, so it can be practical to use different names than just \"unnest\".\n> \n> Personally I don't like too much using 2D arrays for this purpose. The queries over this functionality will be harder to read (it is like fortran 77). I understand so now, there is no other possibility, because pg cannot build array type from function signature. So it is harder to build an array of record types. \n> \n> We can make an easy tuple store of records - like FUNCTION fx(OUT a int, OUT b int) RETURNS SETOF RECORD. But now, thanks to Tom and Amit's work, the simple expression evaluation is significantly faster than SQL evaluation. So using any SRF function has performance impact. What I miss is the possibility to write functions like FUNCTION fx(OUT a int, OUT b int) RETURNS ARRAY. With this possibility is easy to write functions that you need, and is not necessary to use 2d arrays. If the result of regexp functions will be arrays of records, then a new unnest function is not necessary. So this is not a good direction. Instead of fixing core issues, we design workarounds. 
There can be more wide usage of arrays of composites.\n\nHm, I struggle to understand what your point is.\n2D arrays already exist, and when having to deal with them, I think unnest(anyarray,int) would improve the situation.\nNow, there might be other situations like you describe where something else than 2D arrays are preferred.\nBut this doesn't change the fact you sometimes have to deal with 2D arrays, in which case the proposed unnest(anyarray,int) would improve the user-experience a lot, when wanting to unnest just one level (or N levels).\n\nSounds like you are suggesting some other improvements, in addition to the proposed unnest(anyarray,int)? Correct?\n\nA regexp_positions() returning setof 2-D array[] would not be a workaround, in my opinion,\nit would be what I actually want, but only if I also get unnest(anyarray,int), then I'm perfectly happy.\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 09:01:16 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "út 9. 3. 2021 v 9:01 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Tue, Mar 9, 2021, at 08:26, Pavel Stehule wrote:\n>\n> there are two ideas:\n>\n> 1. the behaviour can be same like SLICE clause of FOREACH statement\n>\n>\n> Hm, I'm sorry I don't understand, is there an existing SLICE clause?\n> I get syntax error in HEAD:\n>\n> ERROR: syntax error at or near \"$2\"\n> LINE 5: FOREACH r SLICE $2 IN ARRAY $1 --- now $2 should be consta...\n>\n> Or do you mean you suggest adding such a clause?\n>\n\nhttps://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY\n\nbut the SLICE argument should be constant. But this limit is artificial,\njust for implementation simplicity. Important is behaviour.\n\n\n> 2. use unnest_slice as name - the function \"unnest\" is relatively rich\n> today and using other overloading doesn't look too practical.\n>\n>\n> Hm, rich in what way? There is currently only one version for arrays, and\n> a different one for tsvector.\n>\n\nno, there is possible to unnest more arrays once\n\n\n> But this is just an idea. I can imagine more forms of slicing or\n> unnesting, so it can be practical to use different names than just \"unnest\".\n>\n> Personally I don't like too much using 2D arrays for this purpose. The\n> queries over this functionality will be harder to read (it is like fortran\n> 77). I understand so now, there is no other possibility, because pg cannot\n> build array type from function signature. So it is harder to build an array\n> of record types.\n>\n> We can make an easy tuple store of records - like FUNCTION fx(OUT a int,\n> OUT b int) RETURNS SETOF RECORD. But now, thanks to Tom and Amit's work,\n> the simple expression evaluation is significantly faster than SQL\n> evaluation. So using any SRF function has performance impact. What I miss\n> is the possibility to write functions like FUNCTION fx(OUT a int, OUT b\n> int) RETURNS ARRAY. 
With this possibility is easy to write functions that\n> you need, and is not necessary to use 2d arrays. If the result of regexp\n> functions will be arrays of records, then a new unnest function is not\n> necessary. So this is not a good direction. Instead of fixing core issues,\n> we design workarounds. There can be more wide usage of arrays of composites.\n>\n>\n> Hm, I struggle to understand what your point is.\n> 2D arrays already exist, and when having to deal with them, I think\n> unnest(anyarray,int) would improve the situation.\n>\n\nI cannot find any function in Postgres that returns a 2D array now.\n\nFor me - using 2D arrays is not a win. It is not a bad solution, but I\ncannot say, so I like it, because it is not a good solution. For example,\nyou cannot enhance this functionality about returning searched substring.\nSo you need to repeat searching. I have bad experience with using arrays in\nthis style. Sometimes it is necessary, because external interfaces cannot\nwork with composites, but the result is unreadable. So this is the reason\nfor my opinion.\n\nabout unnest_2d .. probably it can be used for some cases when users cannot\nuse composites on the client side. But now, because they can use FOREACH\nSLICE is not problem to write any custom function like they exactly need.\nAnd in this case there is very low overhead of plpgsql. But it is true, so\nthis function can be used for some vector unnesting.\n\nNow, there might be other situations like you describe where something else\n> than 2D arrays are preferred.\n> But this doesn't change the fact you sometimes have to deal with 2D\n> arrays, in which case the proposed unnest(anyarray,int) would improve the\n> user-experience a lot, when wanting to unnest just one level (or N levels).\n>\n\n> Sounds like you are suggesting some other improvements, in addition to the\n> proposed unnest(anyarray,int)? 
Correct?\n>\n> A regexp_positions() returning setof 2-D array[] would not be a\n> workaround, in my opinion,\n> it would be what I actually want, but only if I also get\n> unnest(anyarray,int), then I'm perfectly happy.\n>\n\nWe are talking about design, not about usage. try to write some examples of\nusage, please?\n\nRegards\n\nPavel\n\n\n> /Joel\n>",
"msg_date": "Tue, 9 Mar 2021 09:29:45 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 09:29, Pavel Stehule wrote:\n> \n> https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY\n> \n> but the SLICE argument should be constant. But this limit is artificial, just for implementation simplicity. Important is behaviour.\n\nI see now what you mean. Yes, being able to specify the SLICE argument as a variable instead of a constant would be a good improvement. Maybe the SLICE implementation from PL/pgSQL could be modified and used for both cases? (Both in the C-version unnest() and in PL/pgSQL to allow variables and not just constants to SLICE)\n\n> \n>> \n>> \n>>> 2. use unnest_slice as name - the function \"unnest\" is relatively rich today and using other overloading doesn't look too practical.\n>> \n>> Hm, rich in what way? There is currently only one version for arrays, and a different one for tsvector.\n> \n> no, there is possible to unnest more arrays once\n\nWhat do you mean?\nMore than one unnest() in the same query, e.g. SELECT unnest(..), unnest(..)?\n\n> I cannot find any function in Postgres that returns a 2D array now.\n> \n> For me - using 2D arrays is not a win. It is not a bad solution, but I cannot say, so I like it, because it is not a good solution. For example, you cannot enhance this functionality about returning searched substring. So you need to repeat searching. I have bad experience with using arrays in this style. Sometimes it is necessary, because external interfaces cannot work with composites, but the result is unreadable. So this is the reason for my opinion.\n\nEven if it would return arrays of a range record with \"start\" and \"stop\" field, I don't see how we could enhance it to later return searched substring without changing the return type? 
Doing so would break any code using the function anyway.\n\nRepeating searching if you want something else than positions, seems like the most SQL-idiomatic solution.\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 10:00:34 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "út 9. 3. 2021 v 10:01 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Tue, Mar 9, 2021, at 09:29, Pavel Stehule wrote:\n>\n>\n>\n> https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY\n>\n> but the SLICE argument should be constant. But this limit is artificial,\n> just for implementation simplicity. Important is behaviour.\n>\n>\n> I see now what you mean. Yes, being able to specify the SLICE argument as\n> a variable instead of a constant would be a good improvement. Maybe the\n> SLICE implementation from PL/pgSQL could be modified and used for both\n> cases? (Both in the C-version unnest() and in PL/pgSQL to allow variables\n> and not just constants to SLICE)\n>\n\nprobably\n\n\n>\n>\n>\n> 2. use unnest_slice as name - the function \"unnest\" is relatively rich\n> today and using other overloading doesn't look too practical.\n>\n>\n> Hm, rich in what way? There is currently only one version for arrays, and\n> a different one for tsvector.\n>\n>\n> no, there is possible to unnest more arrays once\n>\n>\n> What do you mean?\n> More than one unnest() in the same query, e.g. SELECT unnest(..),\n> unnest(..)?\n>\n\nyou can do unnest(array1, array2, ...)\n\n\n> I cannot find any function in Postgres that returns a 2D array now.\n>\n> For me - using 2D arrays is not a win. It is not a bad solution, but I\n> cannot say, so I like it, because it is not a good solution. For example,\n> you cannot enhance this functionality about returning searched substring.\n> So you need to repeat searching. I have bad experience with using arrays in\n> this style. Sometimes it is necessary, because external interfaces cannot\n> work with composites, but the result is unreadable. 
So this is the reason\n> for my opinion.\n>\n>\n> Even if it would return arrays of a range record with \"start\" and \"stop\"\n> field, I don't see how we could enhance it to later return searched\n> substring without changing the return type? Doing so would break any code\n> using the function anyway.\n>\n\nyou can have composite (position int, value text) or (position int,\noffset_bytes int, size_char int, size_bytes int), ... just there are more\npossibilities\n\n\n> Repeating searching if you want something else than positions, seems like\n> the most SQL-idiomatic solution.\n>\n\n:)\n\nyes, but usually you can use index on bigger data. Substring searching is\nslower, and we use mostly UTF8, so if start is not in bytes, then you have\nto iterate from start of string to find start of substring.\n\n\n\n> /Joel\n>",
"msg_date": "Tue, 9 Mar 2021 10:18:01 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
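The SLICE behaviour discussed above (walking a 2-D array one sub-array at a time, as PL/pgSQL's FOREACH ... SLICE does, and as a hypothetical `unnest_slice()` would) can be sketched in Python. The function name and exact semantics here are illustrative assumptions, not an existing PostgreSQL API:

```python
def unnest_slice(arr, slice_dims=1):
    """Illustrative model of iterating a 2-D array: with slice_dims=1,
    yield one row (1-D sub-array) per step, like FOREACH ... SLICE 1;
    with slice_dims=0, yield every scalar element, like plain FOREACH."""
    for row in arr:
        if slice_dims == 0:
            yield from row   # flatten down to scalars
        else:
            yield row        # hand back whole rows

pairs = [[4, 3], [12, 3]]    # e.g. {start, length} pairs in a 2-D array
rows = list(unnest_slice(pairs, 1))   # one 1-D array per step
flat = list(unnest_slice(pairs, 0))   # individual elements
```

Making `slice_dims` a runtime argument rather than a constant is trivial here, which is the point Pavel makes: the constant-only restriction in PL/pgSQL is an implementation convenience, not something inherent to the operation.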
{
"msg_contents": "On Tue, Mar 9, 2021, at 10:18, Pavel Stehule wrote:\n> What do you mean?\n>> More than one unnest() in the same query, e.g. SELECT unnest(..), unnest(..)?\n> \n> you can do unnest(array1, array2, ...)\n\nRight, I had forgotten about that variant.\n\nBut isn't this a bit surprising then:\n\n\\df unnest\n List of functions\n Schema | Name | Result data type | Argument data types | Type\n------------+--------+------------------+----------------------------------------------------------------------------------+------\npg_catalog | unnest | SETOF anyelement | anyarray | func\npg_catalog | unnest | SETOF record | tsvector tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text[] | func\n(2 rows)\n\nShould there be an entry there showing the VARIADIC anyelement version as well?\n\nI know it's a documented feature, but \\df seems out-of-sync with the docs.\n\n/Joel\n\nOn Tue, Mar 9, 2021, at 10:18, Pavel Stehule wrote:What do you mean?More than one unnest() in the same query, e.g. SELECT unnest(..), unnest(..)?you can do unnest(array1, array2, ...)Right, I had forgotten about that variant.But isn't this a bit surprising then:\\df unnest List of functions Schema | Name | Result data type | Argument data types | Type------------+--------+------------------+----------------------------------------------------------------------------------+------pg_catalog | unnest | SETOF anyelement | anyarray | funcpg_catalog | unnest | SETOF record | tsvector tsvector, OUT lexeme text, OUT positions smallint[], OUT weights text[] | func(2 rows)Should there be an entry there showing the VARIADIC anyelement version as well?I know it's a documented feature, but \\df seems out-of-sync with the docs./Joel",
"msg_date": "Tue, 09 Mar 2021 11:24:46 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
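The multi-argument unnest(array1, array2, ...) behaviour Pavel refers to is essentially a row-wise zip that pads shorter arrays with NULL. A rough Python analogue, with None standing in for SQL NULL:

```python
from itertools import zip_longest

def unnest_multi(*arrays):
    """Approximate SQL's multi-array UNNEST: one output row per index,
    shorter arrays padded with None (SQL NULL)."""
    return list(zip_longest(*arrays, fillvalue=None))

rows = unnest_multi([1, 2, 3], ['a', 'b'])
# one row per index; the shorter array is padded with None
```

This padding-to-the-longest behaviour is also why the parser special-cases multi-argument UNNEST rather than exposing it as an ordinary VARIADIC pg_proc entry, as Tom notes later in the thread.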
{
"msg_contents": "On Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:\n> My experience with working with parallel arrays in SQL has been unpleasant.\n\nCould you please give an example on such an unpleasant experience?\n\nI can see a problem if the arrays could possibly have difference dimensionality/cardinality,\nbut regexp_positions() could guarantee they won't, so I don't see a problem here,\nbut there is probably something I'm missing here?\n\n/Joel\nOn Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:My experience with working with parallel arrays in SQL has been unpleasant.Could you please give an example on such an unpleasant experience?I can see a problem if the arrays could possibly have difference dimensionality/cardinality,but regexp_positions() could guarantee they won't, so I don't see a problem here,but there is probably something I'm missing here?/Joel",
"msg_date": "Tue, 09 Mar 2021 11:32:11 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "út 9. 3. 2021 v 11:32 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:\n>\n> My experience with working with parallel arrays in SQL has been unpleasant.\n>\n>\n> Could you please give an example on such an unpleasant experience?\n>\n\nit was more complex application with 3D data of some points in 2D array.\nEverywhere was a[d, 0], a[d, 1], a[d, 2], instead a[d] or instead a[d].x,\n...\n\n\n\n> I can see a problem if the arrays could possibly have difference\n> dimensionality/cardinality,\n> but regexp_positions() could guarantee they won't, so I don't see a\n> problem here,\n> but there is probably something I'm missing here?\n>\n\nI think so the functions based on arrays can work, why not. But the\nsemantic is lost.\n\n\n> /Joel\n>\n\nút 9. 3. 2021 v 11:32 odesílatel Joel Jacobson <joel@compiler.org> napsal:On Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:My experience with working with parallel arrays in SQL has been unpleasant.Could you please give an example on such an unpleasant experience?it was more complex application with 3D data of some points in 2D array. Everywhere was a[d, 0], a[d, 1], a[d, 2], instead a[d] or instead a[d].x, ... I can see a problem if the arrays could possibly have difference dimensionality/cardinality,but regexp_positions() could guarantee they won't, so I don't see a problem here,but there is probably something I'm missing here?I think so the functions based on arrays can work, why not. But the semantic is lost. /Joel",
"msg_date": "Tue, 9 Mar 2021 13:16:10 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C_pattern_?=\n\t=?UTF-8?Q?text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 13:16, Pavel Stehule wrote:\n> út 9. 3. 2021 v 11:32 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> __On Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:\n>>> My experience with working with parallel arrays in SQL has been unpleasant.\n>> \n>> Could you please give an example on such an unpleasant experience?\n> \n> it was more complex application with 3D data of some points in 2D array. Everywhere was a[d, 0], a[d, 1], a[d, 2], instead a[d] or instead a[d].x, ...\n\nNot sure I understand, but my question was directed to Tom, who wrote about his experiences in a previous message up thread.\n\nTom - can you please give details on your unpleasant experiences with parallel arrays?\n\n/Joel\nOn Tue, Mar 9, 2021, at 13:16, Pavel Stehule wrote:út 9. 3. 2021 v 11:32 odesílatel Joel Jacobson <joel@compiler.org> napsal:On Thu, Mar 4, 2021, at 16:40, Tom Lane wrote:My experience with working with parallel arrays in SQL has been unpleasant.Could you please give an example on such an unpleasant experience?it was more complex application with 3D data of some points in 2D array. Everywhere was a[d, 0], a[d, 1], a[d, 2], instead a[d] or instead a[d].x, ...Not sure I understand, but my question was directed to Tom, who wrote about his experiences in a previous message up thread.Tom - can you please give details on your unpleasant experiences with parallel arrays?/Joel",
"msg_date": "Tue, 09 Mar 2021 14:18:48 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Tue, Mar 9, 2021, at 10:18, Pavel Stehule wrote:\n>> you can do unnest(array1, array2, ...)\n\n> Right, I had forgotten about that variant.\n> But isn't this a bit surprising then:\n> ...\n> Should there be an entry there showing the VARIADIC anyelement version as well?\n\nNo, because there's no such pg_proc entry. Multi-argument UNNEST is\nspecial-cased by the parser, cf transformRangeFunction().\n\n(Which is something I'd momentarily forgotten. Forget my suggestion\nthat we could define unnest(anyarray, int) ... it has to be another\nname.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 11:20:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> Tom - can you please give details on your unpleasant experiences with parallel arrays?\n\nThe problems I can recall running into were basically down to not having\nan easy way to iterate through parallel arrays. There are ways to do\nthat in SQL, certainly, but they all constrain how you write the query,\nand usually force ugly stuff like splitting it into sub-selects.\n\nAs an example, presuming that regexp_positions is defined along the\nlines of\n\nregexp_positions(str text, pat text, out starts int[], out lengths int[])\nreturns setof record\n\nthen to actually get the identified substrings you'd have to do something\nlike\n\nselect\n substring([input string] from starts[i] for lengths[i])\nfrom\n regexp_positions([input string], [pattern]) r,\n lateral\n generate_series(1, array_length(starts, 1)) i;\n\nI think the last time I confronted this, we didn't have multi-array\nUNNEST. Now that we do, we can get rid of the generate_series(),\nbut it's still not beautiful:\n\nselect\n substring([input string] from s for l)\nfrom\n regexp_positions([input string], [pattern]) r,\n lateral\n unnest(starts, lengths) u(s,l);\n\nHaving said that, the other alternative with a 2-D array:\n\nregexp_positions(str text, pat text) returns setof int[]\n\nseems to still need UNNEST, though now it's not the magic multi-array\nUNNEST but this slicing version:\n\nselect\n substring([input string] from u[1] for u[2])\nfrom\n regexp_positions([input string], [pattern]) r,\n lateral\n unnest_slice(r, 1) u;\n\nAnyway, I'd counsel trying to write out SQL implementations\nof regexp_matches() and other useful things based on any\nparticular regexp_positions() API you might be thinking about.\nCan we do anything useful without a LATERAL UNNEST thingie?\nAre some of them more legible than others?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 11:42:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
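The pairing that Tom's LATERAL unnest(starts, lengths) queries perform can be seen in a Python analogue, where consuming parallel 1-based start-position and length arrays is a plain zip:

```python
def substrings(s, starts, lengths):
    """Extract the substrings identified by parallel arrays of 1-based
    start positions and lengths, like the LATERAL unnest(starts, lengths)
    query in the message above."""
    return [s[start - 1:start - 1 + length]
            for start, length in zip(starts, lengths)]

# 'ba.' matches 'foobarbequebaz' at 1-based positions 4 and 12, length 3 each
subs = substrings('foobarbequebaz', [4, 12], [3, 3])
```

In Python the lockstep iteration is free; Tom's point is that SQL has no equally direct idiom, so every consumer of the parallel-arrays API is forced through a LATERAL UNNEST (or, before multi-array UNNEST existed, a generate_series over the array indexes).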
{
"msg_contents": "On Tue, Mar 9, 2021, at 17:42, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > Tom - can you please give details on your unpleasant experiences with parallel arrays?\n> \n> The problems I can recall running into were basically down to not having\n> an easy way to iterate through parallel arrays. There are ways to do\n> that in SQL, certainly, but they all constrain how you write the query,\n> and usually force ugly stuff like splitting it into sub-selects.\n\nI see now what you mean, many thanks for explaining.\n\n> \n> As an example, presuming that regexp_positions is defined along the\n> lines of\n> \n> regexp_positions(str text, pat text, out starts int[], out lengths int[])\n> returns setof record\n\n+1\n\nI think this is the most feasible best option so far.\n\nAttached is a patch implementing it this way.\n\nI changed the start to begin at 1, since this is how position ( substring text IN string text ) → integer works.\n\nSELECT * FROM regexp_positions('foobarbequebaz', '^', 'g');\nstarts | lengths\n--------+---------\n{1} | {0}\n(1 row)\n\nSELECT * FROM regexp_positions('foobarbequebaz', 'ba.', 'g');\nstarts | lengths\n--------+---------\n{4} | {3}\n{12} | {3}\n(2 rows)\n\nMark's examples:\n\nSELECT * FROM regexp_positions('foObARbEqUEbAz', $re$(?=beque)$re$, 'i');\nstarts | lengths\n--------+---------\n{7} | {0}\n(1 row)\n\n\nSELECT * FROM regexp_positions('foobarbequebaz', '(?<=z)', 'g');\nstarts | lengths\n--------+---------\n{15} | {0}\n(1 row)\n\nI've also tested your template queries:\n\n> \n> then to actually get the identified substrings you'd have to do something\n> like\n> \n> select\n> substring([input string] from starts[i] for lengths[i])\n> from\n> regexp_positions([input string], [pattern]) r,\n> lateral\n> generate_series(1, array_length(starts, 1)) i;\n\nselect\n substring('foobarbequebaz' from starts[i] for lengths[i])\nfrom\n regexp_positions('foobarbequebaz', 'ba.', 'g') r,\n lateral\n 
generate_series(1, array_length(starts, 1)) i;\n\nsubstring\n-----------\nbar\nbaz\n(2 rows)\n\n> I think the last time I confronted this, we didn't have multi-array\n> UNNEST. Now that we do, we can get rid of the generate_series(),\n> but it's still not beautiful:\n> \n> select\n> substring([input string] from s for l)\n> from\n> regexp_positions([input string], [pattern]) r,\n> lateral\n> unnest(starts, lengths) u(s,l);\n\nselect\n substring('foobarbequebaz' from s for l)\nfrom\n regexp_positions('foobarbequebaz', 'ba.', 'g') r,\n lateral\n unnest(starts, lengths) u(s,l);\n\nsubstring\n-----------\nbar\nbaz\n(2 rows)\n\n> Having said that, the other alternative with a 2-D array:\n> \n> regexp_positions(str text, pat text) returns setof int[]\n> \n> seems to still need UNNEST, though now it's not the magic multi-array\n> UNNEST but this slicing version:\n> \n> select\n> substring([input string] from u[1] for u[2])\n> from\n> regexp_positions([input string], [pattern]) r,\n> lateral\n> unnest_slice(r, 1) u;\n\nUnable to test this one since there is no unnest_slice() (yet)\n\n> \n> Anyway, I'd counsel trying to write out SQL implementations\n> of regexp_matches() and other useful things based on any\n> particular regexp_positions() API you might be thinking about.\n> Can we do anything useful without a LATERAL UNNEST thingie?\n> Are some of them more legible than others?\n\nHmm, I cannot think of a way.\n\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 20:30:21 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
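The behaviour of the patch's regexp_positions() with the 'g' flag, as shown in the examples above, can be modelled in Python. Python's re engine differs from POSIX regexes in corner cases, so this is only an approximation of the proposed semantics, not the C implementation:

```python
import re

def regexp_positions(string, pattern):
    """Approximate the proposed regexp_positions(..., 'g'): one row per
    match, each row carrying parallel starts/lengths arrays, with starts
    1-based to match position(substring IN string)."""
    return [([m.start() + 1], [m.end() - m.start()])
            for m in re.finditer(pattern, string)]

rows = regexp_positions('foobarbequebaz', 'ba.')
# two rows, as in the starts/lengths output quoted above
empty = regexp_positions('foobarbequebaz', '^')
# a zero-length match is reported as start 1, length 0
```

The single-element arrays correspond to a pattern with no capture groups; with groups, each row would carry one start/length per group, following regexp_matches() conventions.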
{
"msg_contents": "> On 9 Mar 2021, at 20:30, Joel Jacobson <joel@compiler.org> wrote:\n\n> Attached is a patch implementing it this way.\n\nThis patch no longer applies, can you please submit a rebased version?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 1 Sep 2021 11:08:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 9 Mar 2021, at 20:30, Joel Jacobson <joel@compiler.org> wrote:\n>> Attached is a patch implementing it this way.\n\n> This patch no longer applies, can you please submit a rebased version?\n\nAlso, since 642433707 (\"This patch adds new functions regexp_count(),\nregexp_instr(), regexp_like(), and regexp_substr(), and extends\nregexp_replace() with some new optional arguments\") is already in,\nwe need to think about how this interacts with that. Do we even\nstill need any more functionality in this area? Should we try to\nalign the APIs?\n\nThose new function APIs have some Oracle-isms that I don't especially\ncare for, like use of int for what should be a boolean. Still, users\naren't going to give us a pass for wildly inconsistent APIs just because\nsome functions came from Oracle and some didn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Sep 2021 10:02:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "> On 1 Sep 2021, at 16:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 9 Mar 2021, at 20:30, Joel Jacobson <joel@compiler.org> wrote:\n>>> Attached is a patch implementing it this way.\n> \n>> This patch no longer applies, can you please submit a rebased version?\n\nOn a brief skim, this patch includes the doc stanza for regexp_replace which I\nassume is a copy/pasteo.\n\n+\t\tTupleDescInitEntry(tupdesc, (AttrNumber) 1, \"starts”,\nWhile “start_positions” is awfully verbose, just “starts” doesn’t really roll\noff the tongue. Perhaps “positions” would be more explanatory?\n\n> Also, since 642433707 (\"This patch adds new functions regexp_count(),\n> regexp_instr(), regexp_like(), and regexp_substr(), and extends\n> regexp_replace() with some new optional arguments\") is already in,\n> we need to think about how this interacts with that. Do we even\n> still need any more functionality in this area? Should we try to\n> align the APIs?\n\nI can see value in a function like this one, and the API is AFAICT fairly\naligned with what I as a user would expect it to be given what we already have.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 2 Sep 2021 00:03:58 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Thu, Sep 2, 2021, at 00:03, Daniel Gustafsson wrote:\n> I can see value in a function like this one, and the API is AFAICT fairly\n> aligned with what I as a user would expect it to be given what we already have.\n\nGood to hear and thanks for looking at this patch.\n\nI've fixed the problem due to the recent change in setup_regexp_matches(),\nwhich added a new int parameter \"start_search\".\nI pass 0 as start_search, which I think should give the same behaviour as before.\n\nI also changed the assigned oid values in pg_proc.dat for the two new regexp_positions() catalog functions.\n\n$ make check\n\n=======================\nAll 209 tests passed.\n=======================\n\n/Joel",
"msg_date": "Sun, 12 Sep 2021 15:54:15 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> [ 0005-regexp-positions.patch ]\n\nI took a quick look at this patch. I am on board now with the general\nidea of returning match positions, but I think there are still\ndefinitional issues to be discussed.\n\n1. The main idea we might perhaps want to adopt from the Oracle-ish\nregexp functions is a search-start-position argument. I'm not sure\nthat this is exciting --- if we intend this function to be a close\nanalog of regexp_matches(), it'd be better to leave that off. But\nit deserves explicit consideration.\n\n2. It looks like you modeled this on regexp_matches() to the extent\nof returning a set and using the 'g' flag to control whether the\nset can actually contain more than one row. That's pretty ancient\nprecedent ... but we have revisited that design more than once\nbecause regexp_matches() is just too much of a pain to use. I think\nif we're going to do this, we should learn from history, and provide an\nanalog to regexp_match() as well as regexp_matches() right off the bat.\n\n3. The API convention you chose (separate start and length arrays)\nis perhaps still confusing. When I first looked at the test case\n\n+SELECT regexp_positions('foobarbequebaz', $re$(bar)(beque)$re$);\n+ regexp_positions \n+-------------------\n+ (\"{4,7}\",\"{3,5}\")\n+(1 row)\n\nI thought it was all wrong because it seemed to be identifying\nthe substrings 'barbequ' and 'obarb'. If there'd been a different\nnumber of matches than capture groups, maybe I'd not have been\nconfused, but still... I wonder if we'd be better advised to make\nN capture groups produce N two-element arrays, or else mash it all\ninto one array of N*2 elements. But this probably depends on which\nway is the easiest to work with in SQL.\n\n4. 
Not sure about the handling of sub-matches.\nThere are various plausible definitions we could use:\n\n* We return the position/length of the overall match, never mind\nabout any parenthesized subexpressions. This is simple but I think\nit loses significant functionality. As an example, you might have\na pattern like 'ab*(c*)d+' where what you actually want to know\nis where the 'c's are, but they have to be in the context described\nby the rest of the regexp. Without subexpression-match capability\nthat's painful to do.\n\n* If there's a capturing subexpression, return the position/length\nof the first such subexpression, else return the overall match.\nThis matches the behavior of substring().\n\n* If there are capturing subexpression(s), return the\npositions/lengths of those, otherwise return the overall match.\nThis agrees with the behavior of regexp_match(es), so I'd tend\nto lean to this option, but perhaps it's the hardest to use.\n\n* Return the position/length of the overall match *and* those of\neach capturing subexpression. This is the most flexible choice,\nbut I give it low marks since it matches no existing behaviors.\n\n\nAs for comments on the patch itself:\n\n* The documentation includes an extraneous entry for regexp_replace,\nand it fails to add the promised paragraph to functions-posix-regexp.\n\n* This bit is evidently copied from regexp_matches:\n\n+ /* be sure to copy the input string into the multi-call ctx */\n+ matchctx = setup_regexp_matches(PG_GETARG_TEXT_P_COPY(0), pattern,\n\nregexp_matches needs to save the input string so that\nbuild_regexp_match_result can copy parts of that. 
But\nregexp_positions has no such need AFAICS, so I think we\ndon't need a long-lived copy of the string.\n\n* I wouldn't use these names in build_regexp_positions_result:\n\n+ ArrayType *so_ary;\n+ ArrayType *eo_ary;\n\n\"so_ary\" isn't awful, but it invites confusion with regex's \"so\"\nfield, which hasn't got the same semantics (off by one).\n\"eo_ary\" is pretty bad because it isn't an \"end offset\" at all, but\na length. I'd go for \"start_ary\" and \"length_ary\" or some such.\n\n* Test cases seem a bit thin, notably there's no coverage of the\nnull-subexpression code path.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Sep 2021 13:55:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
},
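The third definition in Tom's list (return capture-group positions when the pattern has groups, otherwise the overall match, mirroring regexp_match's convention) could be modelled like this in Python. This is a sketch of the proposed definition only, not of the actual patch:

```python
import re

def match_positions(string, pattern):
    """First-match positions: one (start, length) per capture group if
    any groups exist, otherwise one for the whole match; starts 1-based."""
    m = re.search(pattern, string)
    if m is None:
        return None
    n = m.re.groups
    spans = [m.span(i) for i in range(1, n + 1)] if n else [m.span()]
    starts = [s + 1 for s, _ in spans]
    lengths = [e - s for s, e in spans]
    return starts, lengths

grouped = match_positions('foobarbequebaz', '(bar)(beque)')
# positions of 'bar' and 'beque', as in the (\"{4,7}\",\"{3,5}\") test case
whole = match_positions('foobarbequebaz', 'beque')
# no groups, so the whole match's position is returned
```

The ambiguity Tom flags is visible here too: without knowing whether the pattern had groups, the caller cannot tell one group's span from one whole-match span, which is what makes this option the hardest to consume.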
{
"msg_contents": "This review has gone unanswered for two months, so I'm marking this patch\nReturned with Feedback. Please feel free to resubmit when a new version of the\npatch is available.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 3 Dec 2021 09:45:42 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "=?utf-8?Q?Re=3A_=5BPATCH=5D_regexp=5Fpositions_=28_string_text=2C?=\n =?utf-8?Q?_pattern_text=2C_flags_text_=29_=E2=86=92_setof_int4range=5B=5D?="
},
{
"msg_contents": "On Tue, Sep 21, 2021, at 19:55, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > [ 0005-regexp-positions.patch ]\n>\n> I took a quick look at this patch. I am on board now with the general\n> idea of returning match positions, but I think there are still\n> definitional issues to be discussed.\n>\n> 1. The main idea we might perhaps want to adopt from the Oracle-ish\n> regexp functions is a search-start-position argument. I'm not sure\n> that this is exciting --- if we intend this function to be a close\n> analog of regexp_matches(), it'd be better to leave that off. But\n> it deserves explicit consideration.\n\nI intentionally made it a close analog of regexp_matches(),\nto make it easy for existing users of regexp_matches() to\nunderstand how regexp_positions() works.\n\n> 2. It looks like you modeled this on regexp_matches() to the extent\n> of returning a set and using the 'g' flag to control whether the\n> set can actually contain more than one row. That's pretty ancient\n> precedent ... but we have revisited that design more than once\n> because regexp_matches() is just too much of a pain to use. I think\n> if we're going to do this, we should learn from history, and provide an\n> analog to regexp_match() as well as regexp_matches() right off the bat.\n\nYes, I modeled it on regexp_matches().\nOK, so what you are suggesting is we should add both a regexp_position() function,\nthat would work like regexp_match() but return the position instead,\nin addition to the already suggested regexp_positions() function.\n\nThat sounds like a good idea to me.\n\n> 3. The API convention you chose (separate start and length arrays)\n> is perhaps still confusing. 
When I first looked at the test case\n>\n> +SELECT regexp_positions('foobarbequebaz', $re$(bar)(beque)$re$);\n> + regexp_positions \n> +-------------------\n> + (\"{4,7}\",\"{3,5}\")\n> +(1 row)\n>\n> I thought it was all wrong because it seemed to be identifying\n> the substrings 'barbequ' and 'obarb'. If there'd been a different\n> number of matches than capture groups, maybe I'd not have been\n> confused, but still... I wonder if we'd be better advised to make\n> N capture groups produce N two-element arrays, or else mash it all\n> into one array of N*2 elements. But this probably depends on which\n> way is the easiest to work with in SQL.\n\nThe drawbacks of two-element arrays have already been discussed up thread.\nPersonally, I prefer the version suggested in the latest patch, and suggest we stick to it.\n\n> 4. Not sure about the handling of sub-matches.\n> There are various plausible definitions we could use:\n>\n> * We return the position/length of the overall match, never mind\n> about any parenthesized subexpressions. This is simple but I think\n> it loses significant functionality. As an example, you might have\n> a pattern like 'ab*(c*)d+' where what you actually want to know\n> is where the 'c's are, but they have to be in the context described\n> by the rest of the regexp. Without subexpression-match capability\n> that's painful to do.\n>\n> * If there's a capturing subexpression, return the position/length\n> of the first such subexpression, else return the overall match.\n> This matches the behavior of substring().\n>\n> * If there are capturing subexpression(s), return the\n> positions/lengths of those, otherwise return the overall match.\n> This agrees with the behavior of regexp_match(es), so I'd tend\n> to lean to this option, but perhaps it's the hardest to use.\n>\n> * Return the position/length of the overall match *and* those of\n> each capturing subexpression. 
This is the most flexible choice,\n> but I give it low marks since it matches no existing behaviors.\n\nThe existing behaviour of regexp_match(es) is perhaps a bit surprising to new users,\nbut I think one quickly learns the semantics by trying out a few examples,\nand once understood it's at least not something that bothers me personally.\n\nI think it's best to let regexp_position(s) work the same way as regexp_match(es),\nsince otherwise users would have to learn and remember two different behaviours.\n\n> As for comments on the patch itself:\n>\n> * The documentation includes an extraneous entry for regexp_replace,\n> and it fails to add the promised paragraph to functions-posix-regexp.\n>\n> * This bit is evidently copied from regexp_matches:\n>\n> + /* be sure to copy the input string into the multi-call ctx */\n> + matchctx = setup_regexp_matches(PG_GETARG_TEXT_P_COPY(0), pattern,\n>\n> regexp_matches needs to save the input string so that\n> build_regexp_match_result can copy parts of that. But\n> regexp_positions has no such need AFAICS, so I think we\n> don't need a long-lived copy of the string.\n>\n> * I wouldn't use these names in build_regexp_positions_result:\n>\n> + ArrayType *so_ary;\n> + ArrayType *eo_ary;\n>\n> \"so_ary\" isn't awful, but it invites confusion with regex's \"so\"\n> field, which hasn't got the same semantics (off by one).\n> \"eo_ary\" is pretty bad because it isn't an \"end offset\" at all, but\n> a length. I'd go for \"start_ary\" and \"length_ary\" or some such.\n>\n> * Test cases seem a bit thin, notably there's no coverage of the\n> null-subexpression code path.\n\nThanks for the comments on the patch itself.\nI will work on fixing these, but perhaps we can first see if it's possible to reach a consensus on the API convention and behaviour.\n\n/Joel",
"msg_date": "Sat, 25 Dec 2021 12:02:29 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re:_[PATCH]_regexp=5Fpositions_(_string_text,_pattern_text,_fl?=\n =?UTF-8?Q?ags_text_)_=E2=86=92_setof_int4range[]?="
}
] |
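The convention debated above (per match, parallel arrays of 1-based starts and of lengths, one element per capture group, or one for the whole match when there are no groups, with 'g' controlling whether more than one match is returned) can be sketched outside SQL. Below is a hypothetical Python analogy of those semantics; the name and flag handling follow the thread's description, not any shipped API:

```python
import re

def regexp_positions(string, pattern, flags=""):
    """Sketch of the proposed semantics: for each match, return parallel
    lists holding the 1-based start and the length of every capture group
    (or of the whole match when the pattern has no groups)."""
    results = []
    for m in re.finditer(pattern, string):
        groups = range(1, m.re.groups + 1) if m.re.groups else [0]
        starts = [m.start(g) + 1 for g in groups]    # SQL positions are 1-based
        lengths = [m.end(g) - m.start(g) for g in groups]
        results.append((starts, lengths))
        if "g" not in flags:                         # without 'g', first match only
            break
    return results
```

For the thread's own test case, ('foobarbequebaz', '(bar)(beque)') yields starts {4,7} and lengths {3,5}, agreeing with the ("{4,7}","{3,5}") row quoted above.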
[
{
"msg_contents": "Hi,\n\nAs part of trying to make the aio branch tests on cirrus CI pass with\nsome tap tests I noticed that \"src/tools/msvc/vcregress.pl recoverycheck\"\nhangs. A long phase of remote debugging later I figured out that that's\nnot a fault of the aio branch - it also doesn't pass on master (fixed in\n[1]).\n\nWhich confused me - shouldn't the buildfarm have noticed? But it seems\nthat all msvc animals either don't have tap tests enabled or they\ndisable 'misc-checks' which in turn is what the buildfarm client uses to\ntrigger all the 'recovery' checks.\n\nIt seems we're not just skipping recovery, but also e.g. tap tests in\ncontrib, all the tests in src/test/modules/...\n\nAndrew, what's the reason for that? Is it just that they hung at some\npoint? Were too slow?\n\n\nOn that last point: Running the tap tests on windows appears to be\n*excruciatingly* slow. How does anybody develop on windows without a\nmechanism to actually run tests in parallel?\n\nI think it'd be good if vcregress.pl at least respected PROVE_FLAGS from\nthe environment - it can't currently be passed in for several of the\nvcregress.pl tests, and it does seem to make things to be at least\nsomewhat less painful.\n\n\nThis makes it even clearer to me that we really need a builtin\ntestrunner that runs tests efficiently *and* debuggable on windows.\n\nGreetings,\n\nAndres Freund\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1e6e40447115ca7b4749d7d117b81b016ee5e2c2\n\n\n",
"msg_date": "Mon, 1 Mar 2021 12:07:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/1/21 3:07 PM, Andres Freund wrote:\n> Hi,\n>\n> As part of trying to make the aio branch tests on cirrus CI pass with\n> some tap tests I noticed that \"src/tools/msvc/vcregress.pl recoverycheck\"\n> hangs. A long phase of remote debugging later I figured out that that's\n> not a fault of the aio branch - it also doesn't pass on master (fixed in\n> [1]).\n>\n> Which confused me - shouldn't the buildfarm have noticed? But it seems\n> that all msvc animals either don't have tap tests enabled or they\n> disable 'misc-checks' which in turn is what the buildfarm client uses to\n> trigger all the 'recovery' checks.\n>\n> It seems we're not just skipping recovery, but also e.g. tap tests in\n> contrib, all the tests in src/test/modules/...\n>\n> Andrew, what's the reason for that? Is it just that they hung at some\n> point? Were too slow?\n\n\n\nI don't think speed is the issue. I probably disabled misc-tests on\ndrongo and bowerbird (my two animals in question) because I got either\ninstability or errors I was unable to diagnose. I'll go back and take\nanother look to narrow this down. It's possible to disable individual tests.\n\n\n\n>\n>\n> On that last point: Running the tap tests on windows appears to be\n> *excruciatingly* slow. How does anybody develop on windows without a\n> mechanism to actually run tests in parallel?\n\n\nI think most people develop elsewhere and then adapt/test on Windows if\nnecessary.\n\n\n>\n> I think it'd be good if vcregress.pl at least respected PROVE_FLAGS from\n> the environment - it can't currently be passed in for several of the\n> vcregress.pl tests, and it does seem to make things to be at least\n> somewhat less painful.\n\n\n\n+1\n\n\n>\n>\n> This makes it even clearer to me that we really need a builtin\n> testrunner that runs tests efficiently *and* debuggable on windows.\n>\n\n\"show me the code\" :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 07:48:28 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-02 07:48:28 -0500, Andrew Dunstan wrote:\n> I don't think speed is the issue. I probably disabled misc-tests on\n> drongo and bowerbird (my two animals in question) because I got either\n> instability or errors I was unable to diagnose. I'll go back and take\n> another look to narrow this down. It's possible to disable individual tests.\n\nYea, there was one test that hung in 021_row_visibility.pl which was a\nweird perl hang - with weird symptoms. But that should be fixed now.\n\nThere's another failure in recoverycheck that I at first thought was the\nfault of the aio branch. But I also see it on master - but only on one\nof the two machines I use to test. Pretty odd.\n\nt/003_recovery_targets.pl ............ 7/9\n# Failed test 'multiple conflicting settings'\n# at t/003_recovery_targets.pl line 151.\n\n# Failed test 'recovery end before target reached is a fatal error'\n# at t/003_recovery_targets.pl line 177.\nt/003_recovery_targets.pl ............ 9/9 # Looks like you failed 2 tests of 9.\nt/003_recovery_targets.pl ............ Dubious, test returned 2 (wstat 512, 0x200)\nFailed 2/9 subtests\n\nI think it's pretty dangerous if we have a substantial number of tests\nthat aren't run on windows - I think a lot of us just assume that the\nBF would catch windows specific problems...\n\n\n> > On that last point: Running the tap tests on windows appears to be\n> > *excruciatingly* slow. How does anybody develop on windows without a\n> > mechanism to actually run tests in parallel?\n> \n> \n> I think most people develop elsewhere and then adapt/test on Windows if\n> necessary.\n\nYea, but even that overstretches my patience by a good bit. I can't deal\nwith a serial check-world on linux either - I think that's the biggest\ndifferentiator. It's a lot less painful to deal with slow-ish tests if\nthey do all their slowness concurrently. 
But that's basically impossible\nwith vcregress.pl, and the bf etc can't easily do it either until we\nhave a decent way to see the correct logfiles & output for individual\ntests...\n\n\n> > I think it'd be good if vcregress.pl at least respected PROVE_FLAGS from\n> > the environment - it can't currently be passed in for several of the\n> > vcregress.pl tests, and it does seem to make things to be at least\n> > somewhat less painful.\n> +1\n\nK, will send a patch for that in a bit.\n\n\n> > This makes it even clearer to me that we really need a builtin\n> > testrunner that runs tests efficiently *and* debuggable on windows.\n> >\n> \n> \"show me the code\" :-)\n\nThe biggest obstacle on that front is perl. I started to write one, but\nhit several perl issues within an hour. I think I might write one in\npython, that'd be a lot less painful.\n\n\nOne windows build question I have is why the msvc infrastructure doesn't\naccept msys perl in places like this:\n\t\tguid => $^O eq \"MSWin32\" ? Win32::GuidGen() : 'FAKE',\nIf I change them to accept msys perl then the build ends up working.\n\nThe reason it'd be nice to accept msys perl is that git for windows\nbundles that - and that's already installed on most CI projects...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 2 Mar 2021 12:57:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/2/21 7:48 AM, Andrew Dunstan wrote:\n> On 3/1/21 3:07 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> As part of trying to make the aio branch tests on cirrus CI pass with\n>> some tap tests I noticed that \"src/tools/msvc/vcregress.pl recoverycheck\"\n>> hangs. A long phase of remote debugging later I figured out that that's\n>> not a fault of the aio branch - it also doesn't pass on master (fixed in\n>> [1]).\n>>\n>> Which confused me - shouldn't the buildfarm have noticed? But it seems\n>> that all msvc animals either don't have tap tests enabled or they\n>> disable 'misc-checks' which in turn is what the buildfarm client uses to\n>> trigger all the 'recovery' checks.\n>>\n>> It seems we're not just skipping recovery, but also e.g. tap tests in\n>> contrib, all the tests in src/test/modules/...\n>>\n>> Andrew, what's the reason for that? Is it just that they hung at some\n>> point? Were too slow?\n>\n>\n> I don't think speed is the issue. I probably disabled misc-tests on\n> drongo and bowerbird (my two animals in question) because I got either\n> instability or errors I was unable to diagnose. I'll go back and take\n> another look to narrow this down. It's possible to disable individual tests.\n\n\n\nWell, I thought it was, but it wasn't. Fixed now, see\n<https://github.com/PGBuildFarm/client-code/commit/377e9129a08d607bf7aa32a42bcf6ebecb92ba4d>\n\n\nI've deployed this on drongo, which will now run almost all the TAP\ntests. The exception is the recovery tests, because\n021_row_visibility.pl crashes so badly it brings down the whole\nbuildfarm client.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 17:04:16 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-02 17:04:16 -0500, Andrew Dunstan wrote:\n> Well, I though it was, but it wasn't. Fixed now, see\n> <https://github.com/PGBuildFarm/client-code/commit/377e9129a08d607bf7aa32a42bcf6ebecb92ba4d>\n> \n> \n> I've deployed this on drongo, which will now run almost all the TAP\n> tests.\n\nThanks!\n\n\n> The exception is the recovery tests, because\n> 021_row_visibility.pl crashes so badly it brings down the whole\n> buildfarm client.\n\nIt still does, even after\n\ncommit 1e6e40447115ca7b4749d7d117b81b016ee5e2c2 (upstream/master, master)\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2021-03-01 09:52:15 -0800\n\n Fix recovery test hang in 021_row_visibility.pl on windows.\n\n? I didn't see failures after that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 2 Mar 2021 16:54:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "On Tue, Mar 02, 2021 at 04:54:57PM -0800, Andres Freund wrote:\n> It still does, even after\n> \n> commit 1e6e40447115ca7b4749d7d117b81b016ee5e2c2 (upstream/master, master)\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2021-03-01 09:52:15 -0800\n> \n> Fix recovery test hang in 021_row_visibility.pl on windows.\n> \n> ? I didn't see failures after that?\n\nYes. Testing this morning on top of 5b2f2af, it fails for me with a\n\"Terminating on signal SIGBREAK\".\n\nHaving support for PROVE_TESTS would be nice in src/tools/msvc/,\nwrapping any ENV{PROVE_TESTS} value within an extra glob() before\npassing that down to the prove command.\n--\nMichael",
"msg_date": "Wed, 3 Mar 2021 10:27:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
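Michael's PROVE_TESTS idea above, wrapping the environment value in an extra glob() before handing it to prove, amounts to only a few lines of logic. A hypothetical Python sketch of that expansion step; the function name and default pattern are illustrative, not vcregress.pl code:

```python
import glob
import os

def expand_prove_tests(default=("t/*.pl",)):
    """Expand PROVE_TESTS (space-separated glob patterns) into a concrete,
    sorted list of test scripts, falling back to the default patterns."""
    patterns = os.environ.get("PROVE_TESTS", "").split() or list(default)
    files = []
    for pat in patterns:
        files.extend(sorted(glob.glob(pat)))
    return files
```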
{
"msg_contents": "\nOn 3/2/21 8:27 PM, Michael Paquier wrote:\n> On Tue, Mar 02, 2021 at 04:54:57PM -0800, Andres Freund wrote:\n>> It still does, even after\n>>\n>> commit 1e6e40447115ca7b4749d7d117b81b016ee5e2c2 (upstream/master, master)\n>> Author: Andres Freund <andres@anarazel.de>\n>> Date: 2021-03-01 09:52:15 -0800\n>>\n>> Fix recovery test hang in 021_row_visibility.pl on windows.\n>>\n>> ? I didn't see failures after that?\n> Yes. Testing this morning on top of 5b2f2af, it fails for me with a\n> \"Terminating on signal SIGBREAK\".\n>\n> Having a support for PROVE_TESTS would be nice in src/tools/msvc/,\n> wrapping any ENV{PROVE_TESTS} value within an extra glob() before\n> passing that down to the prove command.\n\n\n\nYes, I saw similar this morning, which would have been after that commit.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 21:20:52 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-02 12:57:57 -0800, Andres Freund wrote:\n> t/003_recovery_targets.pl ............ 7/9\n> # Failed test 'multiple conflicting settings'\n> # at t/003_recovery_targets.pl line 151.\n> \n> # Failed test 'recovery end before target reached is a fatal error'\n> # at t/003_recovery_targets.pl line 177.\n> t/003_recovery_targets.pl ............ 9/9 # Looks like you failed 2 tests of 9.\n> t/003_recovery_targets.pl ............ Dubious, test returned 2 (wstat 512, 0x200)\n> Failed 2/9 subtests\n\nThis appears to be caused by stderr in windows docker containers\nsomehow not working quite right. cirrus-ci uses docker on windows.\n\nIf you look e.g. at https://cirrus-ci.com/task/6111560255930368, and\nspecifically at the relevant log file:\nhttps://api.cirrus-ci.com/v1/artifact/task/6111560255930368/log/src/test/recovery/tmp_check/log/003_recovery_targets_primary.log\nyou can see that it's, uh, less full than we normally expect:\n 1 file(s) copied.\n 1 file(s) copied.\n 1 file(s) copied.\n 1 file(s) copied.\n\nAs that test uses the log file to determine the state of servers:\n> my $logfile = slurp_file($node_standby->logfile());\n> ok($logfile =~ qr/multiple recovery targets specified/,\n> 'multiple conflicting settings');\n\nthat doesn't work.\n\n\nI was *very* confused by this for a while. But finally the cluebait hit\nwhen I discovered that stderr works just fine for *other*\nprograms. Including the programs that evidently log into\n003_recovery_targets_primary.log. The problem is that\npgwin32_is_service() somehow decides that postgres is running as a\nservice. Despite that not really being the case (I guess somehow\ninternally docker containers are started below a service, and that\ncauses the problem).\n\nI hate everything right now. So much.\n\nI think it's quite nasty that postgres just silently starts to log to\nthe event log. 
Why on earth wasn't the solution instead to hardcode that\nas a server parameter in pg_ctl register?\n\nNot sure what a good fix is for this.\n\n\n\nThe second problem I saw was 001_initdb failing, which appears to have\nbeen caused by some weird permission issue that I don't fully\nunderstand. The directory with PG in it was created by user andres, an\nadministrator. But somehow the inherited permissions lead to the chmod()\nthat initdb does (\"fixing permissions on existing directory %s ...\") to\nfail.\n\nc:\\src\\postgres>icacls c:\\src\\postgres\nc:\\src\\postgres BUILTIN\\Administrators:(F)\n BUILTIN\\Administrators:(I)(OI)(CI)(F)\n NT AUTHORITY\\SYSTEM:(I)(OI)(CI)(F)\n CREATOR OWNER:(I)(OI)(CI)(IO)(F)\n BUILTIN\\Users:(I)(OI)(CI)(RX)\n BUILTIN\\Users:(I)(CI)(AD)\n BUILTIN\\Users:(I)(CI)(WD)\nc:\\src\\postgres>whoami\nandres-build-te\\andres\n\nc:\\src\\postgres>net user andres\nUser name andres\n...\nLocal Group Memberships *Administrators *Users\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 2 Mar 2021 21:20:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
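The missing stderr output matters because, as the quoted snippet shows, the recovery TAP tests decide server state by slurping the logfile and matching a pattern; if stderr is diverted, the match can never succeed. A rough Python rendering of that slurp-and-match idiom (a hypothetical helper, not the actual TAP code):

```python
import re

def log_contains(path, pattern):
    """Slurp a server logfile and test whether a message appears in it,
    roughly what the TAP tests' slurp_file()/ok() idiom does. If the
    server's stderr was silently diverted (say, to the event log), the
    file stays empty and this check can never succeed."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return re.search(pattern, f.read()) is not None
```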
{
"msg_contents": "Hi,\n\nOn 2021-03-02 21:20:52 -0500, Andrew Dunstan wrote:\n> On 3/2/21 8:27 PM, Michael Paquier wrote:\n> > On Tue, Mar 02, 2021 at 04:54:57PM -0800, Andres Freund wrote:\n> >> It still does, even after\n> >>\n> >> commit 1e6e40447115ca7b4749d7d117b81b016ee5e2c2 (upstream/master, master)\n> >> Author: Andres Freund <andres@anarazel.de>\n> >> Date: 2021-03-01 09:52:15 -0800\n> >>\n> >> Fix recovery test hang in 021_row_visibility.pl on windows.\n> >>\n> >> ? I didn't see failures after that?\n> > Yes. Testing this morning on top of 5b2f2af, it fails for me with a\n> > \"Terminating on signal SIGBREAK\".\n> >\n> \n> Yes, I saw similar this morning, which woud have been after that commit.\n\nI can't reproduce that here - could either (or both) of you send\n\nsrc/test/recovery/tmp_check/log/regress_log_021_row_visibility\nsrc/test/recovery/tmp_check/log/021_row_visibility_standby.log\nsrc/test/recovery/tmp_check/log/021_row_visibility_primary.log\n\nof a failed run? And maybe how you're invoking it?\n\nDoes adding a\n\n$psql_primary{run}->finish;\n$psql_standby{run}->finish;\nbefore\n$psql_primary{run}->kill_kill;\n$psql_standby{run}->kill_kill;\n\nfix the issue?\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 2 Mar 2021 21:47:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-02 21:20:11 -0800, Andres Freund wrote:\n> On 2021-03-02 12:57:57 -0800, Andres Freund wrote:\n> > t/003_recovery_targets.pl ............ 7/9\n> > # Failed test 'multiple conflicting settings'\n> > # at t/003_recovery_targets.pl line 151.\n> > \n> > # Failed test 'recovery end before target reached is a fatal error'\n> > # at t/003_recovery_targets.pl line 177.\n> > t/003_recovery_targets.pl ............ 9/9 # Looks like you failed 2 tests of 9.\n> > t/003_recovery_targets.pl ............ Dubious, test returned 2 (wstat 512, 0x200)\n> > Failed 2/9 subtests\n> \n> This appears to be caused by stderr in windows docker containers to\n> somehow not work quite right. cirrus-ci uses docker on windows.\n> \n> If you look e.g. at https://cirrus-ci.com/task/6111560255930368, and\n> specifically at the relevant log file:\n> https://api.cirrus-ci.com/v1/artifact/task/6111560255930368/log/src/test/recovery/tmp_check/log/003_recovery_targets_primary.log\n> you can see that it's, uh, less full than we normally expect:\n> 1 file(s) copied.\n> 1 file(s) copied.\n> 1 file(s) copied.\n> 1 file(s) copied.\n> \n> As that test uses the log file to determine the state of servers:\n> > my $logfile = slurp_file($node_standby->logfile());\n> > ok($logfile =~ qr/multiple recovery targets specified/,\n> > 'multiple conflicting settings');\n> \n> that doesn't work.\n> \n> \n> I was *very* confused by this for a while. But finally the cluebait hit\n> when I discovered that stderr works just fine for *other*\n> programs. Including the programs that evidently log into\n> 003_recovery_targets_primary.log. The problem is that\n> pgwin32_is_service() somehow decides that postgres is running as a\n> service. Despite that not really being the case (I guess somehow\n> internally docker containers are started below a service, and that\n> causes the problem).\n> \n> I hate everything right now. 
So much.\n> \n> I think it's quite nasty that postgres just silently starts to log to\n> the event log. Why on earth wasn't the solution instead to hardcode that\n> as a server parameter in pg_ctl register?\n> \n> Not sure what a good fix is for this.\n\nFWIW, just forcing pgwin32_is_service() to return false seems to get the\ncirrus tests past 003_recovery_targets.pl. Possible it'll not finish due\nto other problems (or too tight timeouts I set), but at least this one\ncan be considered diagnosed I think.\n\nhttps://cirrus-ci.com/task/5049764917018624?command=windows_worker_buf#L132\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 2 Mar 2021 21:56:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "On Tue, Mar 02, 2021 at 09:47:18PM -0800, Andres Freund wrote:\n> I can't reproduce that here - could either (or both) of you send\n> \n> src/test/recovery/tmp_check/log/regress_log_021_row_visibility\n> src/test/recovery/tmp_check/log/021_row_visibility_standby.log\n> src/test/recovery/tmp_check/log/021_row_visibility_primary.log\n> \n> of a failed run? And maybe how you're invoking it?\n\nI have not checked this stuff in detail, but here you go. I have\nsimply invoked that with vcregress taptest src/test/recovery/,\nspeeding up the process by temporarily removing all the other\nscripts.\n--\nMichael",
"msg_date": "Wed, 3 Mar 2021 15:57:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/3/21 12:47 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-03-02 21:20:52 -0500, Andrew Dunstan wrote:\n>> On 3/2/21 8:27 PM, Michael Paquier wrote:\n>>> On Tue, Mar 02, 2021 at 04:54:57PM -0800, Andres Freund wrote:\n>>>> It still does, even after\n>>>>\n>>>> commit 1e6e40447115ca7b4749d7d117b81b016ee5e2c2 (upstream/master, master)\n>>>> Author: Andres Freund <andres@anarazel.de>\n>>>> Date: 2021-03-01 09:52:15 -0800\n>>>>\n>>>> Fix recovery test hang in 021_row_visibility.pl on windows.\n>>>>\n>>>> ? I didn't see failures after that?\n>>> Yes. Testing this morning on top of 5b2f2af, it fails for me with a\n>>> \"Terminating on signal SIGBREAK\".\n>>>\n>> Yes, I saw similar this morning, which woud have been after that commit.\n> I can't reproduce that here - could either (or both) of you send\n>\n> src/test/recovery/tmp_check/log/regress_log_021_row_visibility\n> src/test/recovery/tmp_check/log/021_row_visibility_standby.log\n> src/test/recovery/tmp_check/log/021_row_visibility_primary.log\n>\n> of a failed run? And maybe how you're invoking it?\n>\n> Does adding a\n>\n> $psql_primary{run}->finish;\n> $psql_standby{run}->finish;\n> before\n> $psql_primary{run}->kill_kill;\n> $psql_standby{run}->kill_kill;\n>\n> fix the issue?\n>\n\n\n\n\nI will check later on. Note that IPC::Run's kill_kill is known to cause\nproblems on MSWin32 perl, see the other recovery checks where we skip\ntests involving it - crash_recovery and logical_decoding.\n\n\nMaybe we need a wrapper for it that dies if called on MSWin32 perl. That\nat least would not crash the invoking service with a nasty signal, and\nthe buildfarm would actually tell us about the issue.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 07:57:27 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/2/21 3:57 PM, Andres Freund wrote:\n>\n>>> This makes it even clearer to me that we really need a builtin\n>>> testrunner that runs tests efficiently *and* debuggable on windows.\n>>>\n>> \"show me the code\" :-)\n> The biggest obstacle on that front is perl. I started to write one, but\n> hit several perl issues within an hour. I think I might write one in\n> python, that'd be a lot less painful.\n\n\n\nWithout knowing details I'm skeptical. Over nearly three decades of\nusing perl I have found very little I wanted to do that I could not.\n\n\n\n>\n>\n> One windows build question I have is why the msvc infrastructure doesn't\n> accept msys perl in places like this:\n> \t\tguid => $^O eq \"MSWin32\" ? Win32::GuidGen() : 'FAKE',\n> If I change them to accept msys perl then the build ends up working.\n>\n> The reason it'd be nice to accept msys perl is that git for windows\n> bundles that - and that's already installed on most CI projects...\n\n\nNice idea, but we can't run prove under Git's msys perl, because it's\nmissing some stuff, at least on drongo:\n\nC:\\prog>bin\\prove --version\n\nC:\\prog>\"c:\\Program Files\\Git\\usr\\bin\\perl\" \"c:\\Program\nFiles\\Git\\usr\\bin\\core_perl\\prove\" --version\nCan't locate TAP/Harness/Env.pm in @INC (you may need to install the\nTAP::Harness::Env module) (@INC contains: /usr/lib/perl5/site_perl\n/usr/share/perl5/site_perl /usr/lib/perl5/vendor_perl\n/usr/share/perl5/vendor_perl /usr/lib/perl5/core_perl\n/usr/share/perl5/core_perl) at /usr/share/perl5/core_perl/App/Prove.pm\nline 6.\nBEGIN failed--compilation aborted at\n/usr/share/perl5/core_perl/App/Prove.pm line 6.\nCompilation failed in require at c:\\Program\nFiles\\Git\\usr\\bin\\core_perl\\prove line 9.\nBEGIN failed--compilation aborted at c:\\Program\nFiles\\Git\\usr\\bin\\core_perl\\prove line 9.\n\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:18:44 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/3/21 7:57 AM, Andrew Dunstan wrote:\n> On 3/3/21 12:47 AM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-03-02 21:20:52 -0500, Andrew Dunstan wrote:\n>>> On 3/2/21 8:27 PM, Michael Paquier wrote:\n>>>> On Tue, Mar 02, 2021 at 04:54:57PM -0800, Andres Freund wrote:\n>>>>> It still does, even after\n>>>>>\n>>>>> commit 1e6e40447115ca7b4749d7d117b81b016ee5e2c2 (upstream/master, master)\n>>>>> Author: Andres Freund <andres@anarazel.de>\n>>>>> Date: 2021-03-01 09:52:15 -0800\n>>>>>\n>>>>> Fix recovery test hang in 021_row_visibility.pl on windows.\n>>>>>\n>>>>> ? I didn't see failures after that?\n>>>> Yes. Testing this morning on top of 5b2f2af, it fails for me with a\n>>>> \"Terminating on signal SIGBREAK\".\n>>>>\n>>> Yes, I saw similar this morning, which woud have been after that commit.\n>> I can't reproduce that here - could either (or both) of you send\n>>\n>> src/test/recovery/tmp_check/log/regress_log_021_row_visibility\n>> src/test/recovery/tmp_check/log/021_row_visibility_standby.log\n>> src/test/recovery/tmp_check/log/021_row_visibility_primary.log\n>>\n>> of a failed run? And maybe how you're invoking it?\n>>\n>> Does adding a\n>>\n>> $psql_primary{run}->finish;\n>> $psql_standby{run}->finish;\n>> before\n>> $psql_primary{run}->kill_kill;\n>> $psql_standby{run}->kill_kill;\n>>\n>> fix the issue?\n>>\n>\n>\n>\n> I will check later on. Note that IPC::Run's kill_kill is known to cause\n> problems on MSWin32 perl, see the other recovery checks where we skip\n> tests involving it - crash_recovery and logical_decoding.\n\n\nHere's what I actually got working. Rip out the calls to kill_kill and\nreplace them with:\n\n\n $psql_primary{stdin} .= \"\\\\q\\n\";\n $psql_primary{run}->pump_nb();\n $psql_standby{stdin} .= \"\\\\q\\n\";\n $psql_standby{run}->pump_nb();\n sleep 2; # give them time to quit\n\n\nNo hang or signal now.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 16:07:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-03 16:07:13 -0500, Andrew Dunstan wrote:\n> Here's what I actually got working. Rip out the calls to kill_kill and\n> replace them with:\n> \n> \n> $psql_primary{stdin} .= \"\\\\q\\n\";\n> $psql_primary{run}->pump_nb();\n> $psql_standby{stdin} .= \"\\\\q\\n\";\n> $psql_standby{run}->pump_nb();\n> sleep 2; # give them time to quit\n> \n> \n> No hang or signal now.\n\nHm. I wonder if we can avoid the sleep 2 by doing something like\n->pump(); ->finish(); instead of one pump_nb()? One pump() is needed to\nsend the \\q to psql, and then we need to wait for the process to finish?\nI'll try that, but given that I can't reproduce any problems...\n\nI suspect that at least the crash recovery test suffers from exactly the\nsame problem and that we can re-enable it on windows after doing an\nequivalent change...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 3 Mar 2021 13:42:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/3/21 4:42 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-03-03 16:07:13 -0500, Andrew Dunstan wrote:\n>> Here's what I actually got working. Rip out the calls to kill_kill and\n>> replace them with:\n>>\n>>\n>> $psql_primary{stdin} .= \"\\\\q\\n\";\n>> $psql_primary{run}->pump_nb();\n>> $psql_standby{stdin} .= \"\\\\q\\n\";\n>> $psql_standby{run}->pump_nb();\n>> sleep 2; # give them time to quit\n>>\n>>\n>> No hang or signal now.\n> Hm. I wonder if we can avoid the sleep 2 by doing something like\n> ->pump(); ->finish(); instead of one pump_nb()? One pump() is needed to\n> send the \\q to psql, and then we need to wait for the process to finish?\n> I'll try that, but given that I can't reproduce any problems...\n\n\nLooking at the examples in the IPC::Run docco, it looks like we might\njust be able to replace the pump_nb above with finish, and leave the\nsleep out. I'll try that.\n\n\n>\n> I suspect that at least the crash recovery test suffers from exactly the\n> same problem and that we can re-enable it on windows after doign an\n> equivalent change...\n>\n\nYes, possibly - it would be good to remove those skips.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 17:32:29 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/3/21 5:32 PM, Andrew Dunstan wrote:\n> On 3/3/21 4:42 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-03-03 16:07:13 -0500, Andrew Dunstan wrote:\n>>> Here's what I actually got working. Rip out the calls to kill_kill and\n>>> replace them with:\n>>>\n>>>\n>>> $psql_primary{stdin} .= \"\\\\q\\n\";\n>>> $psql_primary{run}->pump_nb();\n>>> $psql_standby{stdin} .= \"\\\\q\\n\";\n>>> $psql_standby{run}->pump_nb();\n>>> sleep 2; # give them time to quit\n>>>\n>>>\n>>> No hang or signal now.\n>> Hm. I wonder if we can avoid the sleep 2 by doing something like\n>> ->pump(); ->finish(); instead of one pump_nb()? One pump() is needed to\n>> send the \\q to psql, and then we need to wait for the process to finish?\n>> I'll try that, but given that I can't reproduce any problems...\n>\n> Looking at the examples in the IPC:Run docco, it looks like we might\n> just be able to replace the pump_nb above with finish, and leave the\n> sleep out. I'll try that.\n\n\nOK, this worked fine. I'll try it as a recipe in the other places where\nkill_kill is forcing us to skip tests under MSWin32 perl, and see how we go.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 19:21:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "On 3/3/21 7:21 PM, Andrew Dunstan wrote:\n> On 3/3/21 5:32 PM, Andrew Dunstan wrote:\n>> On 3/3/21 4:42 PM, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2021-03-03 16:07:13 -0500, Andrew Dunstan wrote:\n>>>> Here's what I actually got working. Rip out the calls to kill_kill and\n>>>> replace them with:\n>>>>\n>>>>\n>>>> $psql_primary{stdin} .= \"\\\\q\\n\";\n>>>> $psql_primary{run}->pump_nb();\n>>>> $psql_standby{stdin} .= \"\\\\q\\n\";\n>>>> $psql_standby{run}->pump_nb();\n>>>> sleep 2; # give them time to quit\n>>>>\n>>>>\n>>>> No hang or signal now.\n>>> Hm. I wonder if we can avoid the sleep 2 by doing something like\n>>> ->pump(); ->finish(); instead of one pump_nb()? One pump() is needed to\n>>> send the \\q to psql, and then we need to wait for the process to finish?\n>>> I'll try that, but given that I can't reproduce any problems...\n>> Looking at the examples in the IPC:Run docco, it looks like we might\n>> just be able to replace the pump_nb above with finish, and leave the\n>> sleep out. I'll try that.\n>\n> OK, this worked fine. I'll try it s a recipe in the other places where\n> kill_kill is forcing us to skip tests under MSwin32 perl, and see how we go.\n>\n>\n\n\nHere's the patch. I didn't see a convenient way of handling the\npg_recvlogical case, so that's unchanged.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 4 Mar 2021 11:10:19 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-04 11:10:19 -0500, Andrew Dunstan wrote:\n> Here's the patch.\n\nAwesome. Will you commit it?\n\n\n> I didn't see a convenient way of handling the pg_recvlogical case, so\n> that's unchanged.\n\nIs the problem actually the kill_kill() itself, or just doing\n->kill_kill() without a subsequent ->finish()?\n\nBut anyway, that seems like a less critical test...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Mar 2021 09:56:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
},
{
"msg_contents": "\nOn 3/4/21 12:56 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-03-04 11:10:19 -0500, Andrew Dunstan wrote:\n>> Here's the patch.\n> Awesome. Will you commit it?\n\n\n\nDone\n\n\n>\n>\n>> I didn't see a convenient way of handling the pg_recvlogical case, so\n>> that's unchanged.\n> Is the problem actually the kill_kill() itself, or just doing\n> ->kill_kill() without a subsequent ->finish()?\n\n\nPretty sure it's the kill_kill that causes the awful crash Michael and I\nhave seen.\n\n\n\n>\n> But anyway, that seems like a less critical test...\n\n\n\nright\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 4 Mar 2021 13:31:39 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm windows checks / tap tests on windows"
}
] |
[
{
"msg_contents": "Hello,\n\nI saw this failure after a recent commit (though the next build\nsucceeded, and I don't yet have any particular reason to believe that\nthe commits it blamed are at fault, we'll see...):\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gombessa&dt=2021-03-01%2004%3A58%3A17\n\nStrangely, it didn't spit out the regression.diffs so I don't know\nanything more than \"commit_ts failed\". I emailed the owner asking for\nthat log but I see now it's too late. I wonder if there is some\nscripting in there that doesn't work correctly on OpenBSD for whatever\nreason... I know that we *sometimes* get regression.diffs from\nfailures in here, but I haven't tried to figure out the pattern (which\nanimals etc). Here's a similar failure that does show the .diffs I\nwish I could see, same BF client version (11), on Windows:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-07-13%2000:47:47\n\n\n",
"msg_date": "Tue, 2 Mar 2021 10:05:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why does the BF sometimes not dump regression.diffs?"
},
{
"msg_contents": "\nOn 3/1/21 4:05 PM, Thomas Munro wrote:\n> Hello,\n>\n> I saw this failure after a recent commit (though the next build\n> succeeded, and I don't yet have any particular reason to believe that\n> the commits it blamed are at fault, we'll see...):\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gombessa&dt=2021-03-01%2004%3A58%3A17\n>\n> Strangely, it didn't spit out the regression.diffs so I don't know\n> anything more than \"commit_ts failed\". I emailed the owner asking for\n> that log but I see now it's too late. I wonder if there is some\n> scripting in there that doesn't work correctly on OpenBSD for whatever\n> reason... I know that we *sometimes* get regression.diffs from\n> failures in here, but I haven't tried to figure out the pattern (which\n> animals etc). Here's a similar failure that does show the .diffs I\n> wish I could see, same BF client version (11), on Windows:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2020-07-13%2000:47:47\n>\n>\n\n\nThe version numbering is a bit misleading on fairywren, which, as it's a\nmachine I control runs from a git checkout, which clearly is later than\nREL_11 even though that's what the version string says. Commit 13d2143\nfixed this. It was included in Release 12.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 1 Mar 2021 19:32:28 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Why does the BF sometimes not dump regression.diffs?"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 1:32 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> The version numbering is a bit misleading on fairywren, which, as it's a\n> machine I control runs from a git checkout, which clearly is later than\n> REL_11 even though that's what the version string says. Commit 13d2143\n> fixed this. It was included in Release 12.\n\nOh, thanks!\n\n\n",
"msg_date": "Tue, 2 Mar 2021 13:40:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does the BF sometimes not dump regression.diffs?"
}
] |
[
{
"msg_contents": "Hi.\n\nI found some redundant code in psql/help.c, so I propose a patch to fix \nit.\nIn the current help.c, the variable \"user\" is set to the value of \n$PGUSER (or get_user_name).\nHowever, $PGUSER is referenced again in the code that follows.\nWe can replace this part with \"user\", I think.\n\nRegards.\n--\nKota Miyake",
"msg_date": "Tue, 02 Mar 2021 11:57:37 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] psql : Improve code for help option"
},
{
"msg_contents": "\n\nOn 2021/03/02 11:57, miyake_kouta wrote:\n> Hi.\n> \n> I found some redundant code in psql/help.c, so I propose a patch to fix it.\n> In the current help.c, the variable \"user\" is set to the value of $PGUSER (or get_user_name).\n> However, $PGUSER is referenced again in the code that follows.\n> We can replace this part with \"user\", I think.\n\nGood catch!\n\n> \tfprintf(output, _(\" -U, --username=USERNAME database user name (default: \\\"%s\\\")\\n\"), env);\n\nWe can simplify the code more and remove \"env = user\"\nby just using \"user\" instead of \"env\" in the above?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 3 Mar 2021 00:09:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql : Improve code for help option"
},
{
"msg_contents": "2021-03-03 00:09, Fujii Masao wrote:\n> We can simplify the code more and remove \"env = user\"\n> by just using \"user\" instead of \"env\" in the above?\n\nThank you for your comment.\nI updated my patch and replaced \"env\" with \"user\".\nPlease check.\n\nRegards.\n--\nKota Miyake",
"msg_date": "Wed, 03 Mar 2021 12:27:11 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql : Improve code for help option"
},
{
"msg_contents": "\n\nOn 2021/03/03 12:27, miyake_kouta wrote:\n> 2021-03-03 00:09, Fujii Masao wrote:\n>> We can simplify the code more and remove \"env = user\"\n>> by just using \"user\" instead of \"env\" in the above?\n> \n> Thank you for your comment.\n> I updated my patch and replaced \"env\" with \"user\".\n> Please check.\n\nThanks for updating the patch!\nLGTM. Barrying any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 3 Mar 2021 16:39:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql : Improve code for help option"
},
{
"msg_contents": "Hi,\n\nI have reviewed the patch and it looks good to me.\n\nThanks and Regards,\nNitin Jadhav\n\nOn Wed, Mar 3, 2021 at 1:09 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n>\n>\n> On 2021/03/03 12:27, miyake_kouta wrote:\n> > 2021-03-03 00:09, Fujii Masao wrote:\n> >> We can simplify the code more and remove \"env = user\"\n> >> by just using \"user\" instead of \"env\" in the above?\n> >\n> > Thank you for your comment.\n> > I updated my patch and replaced \"env\" with \"user\".\n> > Please check.\n>\n> Thanks for updating the patch!\n> LGTM. Barrying any objection, I will commit this patch.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\n>\n>\n\nHi,I have reviewed the patch and it looks good to me.Thanks and Regards,Nitin JadhavOn Wed, Mar 3, 2021 at 1:09 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\nOn 2021/03/03 12:27, miyake_kouta wrote:\n> 2021-03-03 00:09, Fujii Masao wrote:\n>> We can simplify the code more and remove \"env = user\"\n>> by just using \"user\" instead of \"env\" in the above?\n> \n> Thank you for your comment.\n> I updated my patch and replaced \"env\" with \"user\".\n> Please check.\n\nThanks for updating the patch!\nLGTM. Barrying any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 3 Mar 2021 13:35:04 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql : Improve code for help option"
},
{
"msg_contents": "\n\nOn 2021/03/03 17:05, Nitin Jadhav wrote:\n> Hi,\n> \n> I have reviewed the patch and it looks good to me.\n\nThanks! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 4 Mar 2021 18:25:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql : Improve code for help option"
}
] |
[
{
"msg_contents": "hi\r\n\r\n\r\nDuring installation from source code, there are many crashes for psql while executing core regression tests, all the crashes are similar, the backtrace info as follows:\r\n\r\n\r\nCore was generated by 'psql'.\r\nProgram terminated with signal 11, Segmentation fault.\r\n# 0 0x000000000043f140 in slash_yylex()\r\n(gdb) bt\r\n#0 0x000000000043f140 in slash_yylex()\r\n#1 0x00000000004430fc in psql_scan_slash_command()\r\n#2 0x000000000043f140 in HandleSlashCmds()\r\n#3 0x000000000043f140 in MainLoop()\r\n#4 0x000000000043f140 in main()\r\n\r\n\r\nI did more compared testing about this scenario, as follows:\r\n1. install from local source code(xxx.tar.gz)\r\n1) switch to source tree directory, and build there ---- no crash generated\r\n2) create a build directory, and build there ---- no crash generated\r\n\r\n\r\n2. install from git source code\r\n1) switch to source tree directory, and build there ---- no crash generated\r\n2) create a build directory, and build there ---- many crashes generated, but if install newer version of flex, e.g. 2.6.4, the problem doesn't exist. Any suggestions about this behavior?\r\n\r\n\r\n\r\nNOTES:\r\ntest commands are same, as follows:\r\nconfigure --enable-coverage --enable-tap-tests\r\nmake\r\nmake check\r\n\r\n\r\ntesting environment:\r\nPostgreSQL: 13.2\r\nredhat 7.4, 3.10.0-693.e17.x86_64\r\nflex: 2.5.37\r\n\r\n\r\nthanks\r\nwalker\nhiDuring installation from source code, there are many crashes for psql while executing core regression tests, all the crashes are similar, the backtrace info as follows:Core was generated by 'psql'.Program terminated with signal 11, Segmentation fault.# 0 0x000000000043f140 in slash_yylex()(gdb) bt#0 0x000000000043f140 in slash_yylex()#1 0x00000000004430fc in psql_scan_slash_command()#2 0x000000000043f140 in HandleSlashCmds()#3 0x000000000043f140 in MainLoop()#4 0x000000000043f140 in main()I did more compared testing about this scenario, as follows:1. 
install from local source code(xxx.tar.gz)1) switch to source tree directory, and build there ---- no crash generated2) create a build directory, and build there ---- no crash generated2. install from git source code1) switch to source tree directory, and build there ---- no crash generated2) create a build directory, and build there ---- many crashes generated, but if install newer version of flex, e.g. 2.6.4, the problem doesn't exist. Any suggestions about this behavior?NOTES:test commands are same, as follows:configure --enable-coverage --enable-tap-testsmakemake checktesting environment:PostgreSQL: 13.2redhat 7.4, 3.10.0-693.e17.x86_64flex: 2.5.37thankswalker",
"msg_date": "Tue, 2 Mar 2021 11:35:42 +0800",
"msg_from": "\"=?ISO-8859-1?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": true,
"msg_subject": "psql crash while executing core regression tests"
},
{
"msg_contents": "I use CentOS 7 with flex 2.5.37 quite extensively have never come across a\npsql crash. This seems more like an environment related issue on your\nsystem.\n\n\n\nOn Tue, Mar 2, 2021 at 1:53 PM walker <failaway@qq.com> wrote:\n\n> hi\n>\n> During installation from source code, there are many crashes for psql\n> while executing core regression tests, all the crashes are similar, the\n> backtrace info as follows:\n>\n> Core was generated by 'psql'.\n> Program terminated with signal 11, Segmentation fault.\n> # 0 0x000000000043f140 in slash_yylex()\n> (gdb) bt\n> #0 0x000000000043f140 in slash_yylex()\n> #1 0x00000000004430fc in psql_scan_slash_command()\n> #2 0x000000000043f140 in HandleSlashCmds()\n> #3 0x000000000043f140 in MainLoop()\n> #4 0x000000000043f140 in main()\n>\n> I did more compared testing about this scenario, as follows:\n> 1. install from local source code(xxx.tar.gz)\n> 1) switch to source tree directory, and build there ---- no crash\n> generated\n> 2) create a build directory, and build there ---- no crash generated\n>\n> 2. install from git source code\n> 1) switch to source tree directory, and build there ---- no crash generated\n> 2) create a build directory, and build there ---- many crashes generated,\n> but if install newer version of flex, e.g. 2.6.4, the problem doesn't\n> exist. Any suggestions about this behavior?\n>\n> NOTES:\n> test commands are same, as follows:\n> configure --enable-coverage --enable-tap-tests\n> make\n> make check\n>\n> testing environment:\n> PostgreSQL: 13.2\n> redhat 7.4, 3.10.0-693.e17.x86_64\n> flex: 2.5.37\n>\n> thanks\n> walker\n>\n\nI use CentOS 7 with flex 2.5.37 quite extensively have never come across a psql crash. 
This seems more like an environment related issue on your system.On Tue, Mar 2, 2021 at 1:53 PM walker <failaway@qq.com> wrote:hiDuring installation from source code, there are many crashes for psql while executing core regression tests, all the crashes are similar, the backtrace info as follows:Core was generated by 'psql'.Program terminated with signal 11, Segmentation fault.# 0 0x000000000043f140 in slash_yylex()(gdb) bt#0 0x000000000043f140 in slash_yylex()#1 0x00000000004430fc in psql_scan_slash_command()#2 0x000000000043f140 in HandleSlashCmds()#3 0x000000000043f140 in MainLoop()#4 0x000000000043f140 in main()I did more compared testing about this scenario, as follows:1. install from local source code(xxx.tar.gz)1) switch to source tree directory, and build there ---- no crash generated2) create a build directory, and build there ---- no crash generated2. install from git source code1) switch to source tree directory, and build there ---- no crash generated2) create a build directory, and build there ---- many crashes generated, but if install newer version of flex, e.g. 2.6.4, the problem doesn't exist. Any suggestions about this behavior?NOTES:test commands are same, as follows:configure --enable-coverage --enable-tap-testsmakemake checktesting environment:PostgreSQL: 13.2redhat 7.4, 3.10.0-693.e17.x86_64flex: 2.5.37thankswalker",
"msg_date": "Tue, 2 Mar 2021 19:33:01 +0500",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql crash while executing core regression tests"
},
{
"msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> I use CentOS 7 with flex 2.5.37 quite extensively have never come across a\n> psql crash. This seems more like an environment related issue on your\n> system.\n\nYeah ... also, there are more than a dozen buildfarm animals using 2.5.37,\nand none of them have shown any sign of distress. We have also got\nanimals using just about every other flex version under the sun, and they\nall work. So I'm inclined to guess that the apparent dependence on flex\nversion is a mirage, and the real reason why it worked or didn't work is\nelsewhere. We don't have enough info here to identify the problem, but\nI'd suggest a couple of things:\n\n* make sure you're starting from a completely clean tree (\"git clean -dfx\"\nis your friend)\n\n* avoid changing PATH or other important environment variables between\nconfigure and build\n\n* watch out for conflicts between different PG installations on the same\nmachine\n\nOn the last point, it can be a really bad idea to have any preinstalled\nPG programs or libraries on a machine where you're trying to do PG\ndevelopment. It's too easy to pick up a library out of /usr/lib or a\nheader out of /usr/include when you wanted to use the version in your\nbuild tree.\n\nWhether any of this explains your problem remains to be seen, but that's\nthe kind of issue I'd suggest looking for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Mar 2021 10:23:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql crash while executing core regression tests"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've started this new thread separated from the thread[1] to discuss\nremoving vacuum_cleanup_index_scale_factor GUC parameter proposed by\nPeter Geoghegan.\n\nbtvacuumcleanup() has been playing two roles: recycling deleted pages\nand collecting index statistics. This discussion focuses on the\nlatter. Since PG11, btvacuumcleanup() uses\nvacuum_cleanup_index_scale_factor as a threshold to do an index scan\nto update index statistics (pg_class.reltuples and pg_class.relpages).\nTherefore, even if there is no update on the btree index at all (e.g.,\nbtbulkdelete() was not called earlier), btvacuumcleanup() scans the\nwhole index to collect the index statistics if the number of newly\ninserted tuples exceeds the vacuum_cleanup_index_scale_factor fraction\nof the total number of heap tuples detected by the previous statistics\ncollection. On the other hand, those index statistics are updated also\nby ANALYZE and autoanalyze. pg_class.reltuples calculated by ANALYZE\nis an estimation whereas the value returned by btvacuumcleanup() is an\naccurate value. But perhaps we can rely on ANALYZE and autoanalyze to\nupdate those index statistics. The points of this discussion are what\nwe really need to do in btvacuumcleanup() and whether\nbtvacuumcleanup() really needs to do an index scan for the purpose of\nindex statistics update.\n\nThe original design that made VACUUM set\npg_class.reltuples/pg_class.relpages in indexes (from 15+ years ago)\nassumed that it was cheap to handle statistics in passing. 
Even if we\nhave btvacuumcleanup() not do an index scan at all, this is 100%\nallowed by the amvacuumcleanup contract described in the\ndocumentation:\n\n\"It is OK to return NULL if the index was not changed at all during\nthe VACUUM operation, but otherwise correct stats should be returned.\"\n\nThe above description was added by commit e57345975cf in 2006 and\nhasn't changed for now.\n\nFor instance, looking at hash indexes, it hasn't really changed since\n2006 in terms of amvacuumcleanup(). hashvacuumcleanup() simply sets\nstats->num_pages and stats->num_index_tuples without an index scan.\nI'd like to quote the in-depth analysis by Peter Geoghegan:\n\n-----\n/*\n * Post-VACUUM cleanup.\n *\n * Result: a palloc'd struct containing statistical info for VACUUM displays.\n */\nIndexBulkDeleteResult *\nhashvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)\n{\n Relation rel = info->index;\n BlockNumber num_pages;\n\n /* If hashbulkdelete wasn't called, return NULL signifying no change */\n /* Note: this covers the analyze_only case too */\n if (stats == NULL)\n return NULL;\n\n /* update statistics */\n num_pages = RelationGetNumberOfBlocks(rel);\n stats->num_pages = num_pages;\n\n return stats;\n}\n\nClearly hashvacuumcleanup() was considered by Tom when he revised the\ndocumentation in 2006. Here are some observations about\nhashvacuumcleanup() that seem relevant now:\n\n* There is no \"analyze_only\" handling, just like nbtree.\n\n\"analyze_only\" is only used by GIN, even now, 15+ years after it was\nadded. GIN uses it to make autovacuum workers (never VACUUM outside of\nan AV worker) do pending list insertions for ANALYZE -- just to make\nit happen more often. 
This is a niche thing -- clearly we don't have\nto care about it in nbtree, even if we make btvacuumcleanup() (almost)\nalways return NULL when there was no btbulkdelete() call.\n\n* num_pages (which will become pg_class.relpages for the index) is not\nset when we return NULL -- hashvacuumcleanup() assumes that ANALYZE\nwill get to it eventually in the case where VACUUM does no real work\n(when it just returns NULL).\n\n* We also use RelationGetNumberOfBlocks() to set pg_class.relpages for\nindex relations during ANALYZE -- it's called when we call\nvac_update_relstats() (I quoted this do_analyze_rel() code to you\ndirectly in a recent email).\n\n* In general, pg_class.relpages isn't an estimate (because we use\nRelationGetNumberOfBlocks(), both in the VACUUM-updates case and the\nANALYZE-updates case) -- only pg_class.reltuples is truly an estimate\nduring ANALYZE, and so getting a \"true count\" seems to have only\nlimited practical importance.\n\nI think that this sets a precedent in support of my view that we can\nsimply get rid of vacuum_cleanup_index_scale_factor without any\nspecial effort to maintain pg_class.reltuples. As I said before, we\ncan safely make btvacuumcleanup() just like hashvacuumcleanup(),\nexcept when there are known deleted-but-not-recycled pages, where a\nfull index scan really is necessary for reasons that are not related\nto statistics at all (of course we still need the *logic* that was\nadded to nbtree by the vacuum_cleanup_index_scale_factor commit --\nthat is clearly necessary). My guess is that Tom would have made\nbtvacuumcleanup() look identical to hashvacuumcleanup() in 2006 if\nnbtree didn't have page deletion to consider -- but that had to be\nconsidered.\n-----\n\nThe above discussions make sense to me as a support for the \"removing\nvacuum_cleanup_index_scale_factor GUC\" proposal. The difference\nbetween the index statistics taken by ANALYZE and btvacuumcleanup() is\nthat the former statistics is always an estimation. 
That’s calculated\nby compute_index_stats() whereas the latter uses the result of an\nindex scan. If btvacuumcleanup() doesn’t scan the index and always\nreturns NULL, it would become hard to collect accurate index\nstatistics, for example in a static table case. But if collecting an\naccurate pg_class.reltuples is not important in practice, I agree that\nwe don't need btvacuumcleanup() to do an index scan for collecting\nstatistics purposes.\n\nWhat do you think about removing vacuum_cleanup_index_scale_factor\nparameter? and we'd like to ask the design principles of\namvacuumcleanup() considered in 2006 by various hackers (mostly Tom).\nWhat do you think, Tom?\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAH2-WznUWHOL%2BnYYT2PLsn%2B3OWcq8OBfA1sB3FX885rE%3DZQVvA%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 2 Mar 2021 15:33:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Mon, Mar 1, 2021 at 10:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> The original design that made VACUUM set\n> pg_class.reltuples/pg_class.relpages in indexes (from 15+ years ago)\n> assumed that it was cheap to handle statistics in passing. Even if we\n> have btvacuumcleanup() not do an index scan at all, this is 100%\n> allowed by the amvacuumcleanup contract described in the\n> documentation:\n>\n> \"It is OK to return NULL if the index was not changed at all during\n> the VACUUM operation, but otherwise correct stats should be returned.\"\n>\n> The above description was added by commit e57345975cf in 2006 and\n> hasn't changed for now.\n\nThe intention here is not to revise the amvacuumcleanup() contract --\nthe contract already allows us to do what we want inside nbtree. We\nwant to teach btvacuumcleanup() to not do any real work, at least\noutside of rare cases where we have known deleted pages that must\nstill be placed in the FSM for recycling -- btvacuumcleanup() would\ngenerally just return NULL when there was no btbulkdelete() call\nduring the same VACUUM operation (the main thing that prevents this\ntoday is vacuum_cleanup_index_scale_factor). More generally, we would\nlike to change the *general* expectations that we make of index AMs in\nplaces like vacuumlazy.c and analyze.c. 
But we're worried about\ndependencies that aren't formalized anywhere but still matter -- code\nmay have evolved to assume that index AMs behaved a certain way in\ncommon and important cases (probably also in code like vacuumlazy.c).\nThat's what we want to avoid breaking.\n\nMasahiko has already given an example of such a problem: currently,\nVACUUM ANALYZE simply assumes that its VACUUM will call each indexes'\namvacuumcleanup() routine in all cases, and will have each call set\npg_class.reltuples and pg_class.relpages in respect of each index.\nANALYZE therefore avoids overwriting indexes' pg_class stats inside\ndo_analyze_rel() (at the point where it calls vac_update_relstats()\nfor each index). That approach is already wrong with hash indexes, but\nunder this new direction for btvacuumcleanup(), it would become wrong\nin just the same way with nbtree indexes (if left unaddressed).\n\nClearly \"the letter of the law\" and \"the spirit of the law\" must both\nbe considered. We want to talk about the latter on this thread.\nConcretely, here are specific questions (perhaps Tom can weigh in on\nthese as the principal designer of the relevant interfaces):\n\n1. Any objections to the idea of teaching VACUUM ANALYZE to\ndistinguish between the cases where VACUUM ran and performed \"real\nindex vacuuming\", to make it more intelligent about overwriting\npg_class stats for indexes?\n\nI define \"real index vacuuming\" as calling any index's ambulkdelete() routine.\n\n2. Does anybody anticipate any other issues? Possibly an issue that\nresembles this existing known VACUUM ANALYZE issue?\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 2 Mar 2021 18:01:58 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 6:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> 1. Any objections to the idea of teaching VACUUM ANALYZE to\n> distinguish between the cases where VACUUM ran and performed \"real\n> index vacuuming\", to make it more intelligent about overwriting\n> pg_class stats for indexes?\n\nI think that a simpler approach would work better: When\nANALYZE/do_analyze_rel() decides whether or not it should call\nvac_update_relstats() for each index, it should simply not care\nwhether or not this is a VACUUM ANALYZE (as opposed to a simple\nANALYZE). This is already what we do for the heap relation itself. Why\nshouldn't we do something similar for indexes?\n\nWhat do you think, Tom? Your bugfix commit b4b6923e03f from 2011\ntaught do_analyze_rel() to not care about whether VACUUM took place\nearlier in the same command -- though only in the case of the heap\nrelation (not in the case of its indexes). That decision now seems a\nbit arbitrary to me.\n\nI should point out that this is the *opposite* of what we did from\n2004 - 2011 (following Tom's 2004 commit f0c9397f808). During that\ntime the policy was to not update pg_class.reltuples inside\ndo_analyze_rel() when we knew that VACUUM ran. The policy was at least\nthe same for indexes and the heap/table during this period, so it was\nconsistent in that sense. However, I don't think that we should\nreintroduce that policy now. 
Doing so would be contrary to the API\ncontract for index AMs established by Tom's 2006 commit e57345975cf --\nthat allowed amvacuumcleanup() to be a no-op when there was no\nambulkdelete() call (it also taught hashvacuumcleanup() to do just\nthat).\n\nTo recap, our ultimate goal here is to make btvacuumcleanup() close to\nhashvacuumcleanup() -- it should be able to skip all cleanup when\nthere was no btbulkdelete() call during the same VACUUM (nbtree page\ndeletion still has cases that force us to do real work in the absence\nof a btbulkdelete() call for the VACUUM, but that remaining exception\nshould be very rare).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 8 Mar 2021 13:00:56 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I think that a simpler approach would work better: When\n> ANALYZE/do_analyze_rel() decides whether or not it should call\n> vac_update_relstats() for each index, it should simply not care\n> whether or not this is a VACUUM ANALYZE (as opposed to a simple\n> ANALYZE). This is already what we do for the heap relation itself. Why\n> shouldn't we do something similar for indexes?\n\n> What do you think, Tom? Your bugfix commit b4b6923e03f from 2011\n> taught do_analyze_rel() to not care about whether VACUUM took place\n> earlier in the same command -- though only in the case of the heap\n> relation (not in the case of its indexes). That decision now seems a\n> bit arbitrary to me.\n\nWell, nobody had complained about the index stats at that point,\nso I don't think I was thinking about that aspect of it.\n\nAs you say, the history here is a bit convoluted, but it seems like\na good principle to avoid interconnections between VACUUM and ANALYZE\nas much as we can. I haven't been paying enough attention to this\nthread to have more insight than that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Mar 2021 16:38:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As you say, the history here is a bit convoluted, but it seems like\n> a good principle to avoid interconnections between VACUUM and ANALYZE\n> as much as we can. I haven't been paying enough attention to this\n> thread to have more insight than that.\n\nThe attached patch does what I proposed earlier today: it teaches\ndo_analyze_rel() to always set pg_class.reltuples for indexes when it\nwould do the same thing for the heap/table relation already. It's now\nuniform in that sense.\n\nAlso included is a patch that removes the\nvacuum_cleanup_index_scale_factor mechanism for triggering an index\nscan during VACUUM -- that's what the second patch does (this depends\non the first patch, really).\n\nDo you think that a backpatch to Postgres 13 for both of these patches\nwould be acceptable? There are two main concerns that I have in mind\nhere, both of which are only issues in Postgres 13:\n\n1. Arguably the question of skipping scanning the index should have been\nconsidered by the autovacuum_vacuum_insert_scale_factor patch when it\nwas committed for Postgres 13 -- but it wasn't. There is a regression\nthat was tied to autovacuum_vacuum_insert_scale_factor in Postgres 13\nby Mark Callaghan:\n\nhttps://smalldatum.blogspot.com/2021/01/insert-benchmark-postgres-is-still.html\n\nThe blog post says: \"Updates - To understand the small regression\nmentioned above for the l.i1 test (more CPU & write IO) I repeated the\ntest with 100M rows using 2 configurations: one disabled index\ndeduplication and the other disabled insert-triggered autovacuum.\nDisabling index deduplication had no effect and disabling\ninsert-triggered autovacuum resolves the regression.\"\n\nI think that this regression is almost entirely explainable by the\nneed to unnecessarily scan indexes for autovacuum VACUUMs that just\nneed to set the visibility map. 
This issue is basically avoidable,\njust by removing the vacuum_cleanup_index_scale_factor cleanup-only\nVACUUM criteria (per my second patch).\n\n2. I fixed a bug in nbtree deduplication btvacuumcleanup() stats in\ncommit 48e12913. This fix still left things in kind of a bad state:\nthere are still cases where the btvacuumcleanup()-only VACUUM case\nwill set pg_class.reltuples to a value that is significantly below\nwhat it should be (it all depends on how effective deduplication is\nwith the data). These remaining cases are effectively fixed by the\nsecond patch.\n\nI probably should have made btvacuumcleanup()-only VACUUMs set\n\"stats->estimate_count = true\" when I was working on the fix that\nbecame commit 48e12913. Purely because my approach was inherently\napproximate with posting list tuples, and so shouldn't be trusted for\nanything important (num_index_tuples is suitable for VACUUM VERBOSE\noutput only in affected cases). I didn't set \"stats->estimate_count =\ntrue\" in affected cases because I was worried about unforeseen\nconsequences. But this seems defensible now, all things considered.\n\nThere are other things that are slightly broken but will be fixed by\nthe first patch. But I'm really just worried about these two cases in\nPostgres 13.\n\nThanks for weighing in\n--\nPeter Geoghegan",
"msg_date": "Mon, 8 Mar 2021 14:35:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 7:35 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 8, 2021 at 1:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > As you say, the history here is a bit convoluted, but it seems like\n> > a good principle to avoid interconnections between VACUUM and ANALYZE\n> > as much as we can. I haven't been paying enough attention to this\n> > thread to have more insight than that.\n>\n> The attached patch does what I proposed earlier today: it teaches\n> do_analyze_rel() to always set pg_class.reltuples for indexes when it\n> would do the same thing for the heap/table relation already. It's now\n> uniform in that sense.\n\nThank you for the patches. I looked at 0001 patch and have a comment:\n\n+ * We don't report to the stats collector here because the stats collector\n+ * only tracks per-table stats. Reset the changes_since_analyze counter\n+ * only if we analyzed all columns; otherwise, there is still work for\n+ * auto-analyze to do.\n\nI think the comment becomes clearer if we add \"if doing inherited\nstats\" at top of the above paragraph since we actually report to the\nstats collector in !inh case.\n\n>\n> Also included is a patch that removes the\n> vacuum_cleanup_index_scale_factor mechanism for triggering an index\n> scan during VACUUM -- that's what the second patch does (this depends\n> on the first patch, really).\n\n0002 patch looks good to me.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 9 Mar 2021 15:21:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 10:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Thank you for the patches. I looked at 0001 patch and have a comment:\n>\n> + * We don't report to the stats collector here because the stats collector\n> + * only tracks per-table stats. Reset the changes_since_analyze counter\n> + * only if we analyzed all columns; otherwise, there is still work for\n> + * auto-analyze to do.\n>\n> I think the comment becomes clearer if we add \"if doing inherited\n> stats\" at top of the above paragraph since we actually report to the\n> stats collector in !inh case.\n\nI messed the comment up. Oops. Fixed now.\n\n> > Also included is a patch that removes the\n> > vacuum_cleanup_index_scale_factor mechanism for triggering an index\n> > scan during VACUUM -- that's what the second patch does (this depends\n> > on the first patch, really).\n>\n> 0002 patch looks good to me.\n\nGreat.\n\nAttached revision has a bit more polish. It includes new commit\nmessages which explains what we're really trying to fix here. I also\nincluded backpatchable versions for Postgres 13 -- that's the other\nsignificant change compared to the last version.\n\nMy current plan is to commit everything within the next day or two.\nThis includes backpatching to Postgres 13 only. I am now leaning\nagainst doing anything in Postgres 11 and 12, for the closely related\nbtm_last_cleanup_num_heap_tuples VACUUM accuracy issue. There have\nbeen no complaints from users using Postgres 11 or 12, so I'll leave\nthem alone. (Sorry for changing my mind again and again.)\n\nTo be clear: I plan on disabling (though not removing) the\nvacuum_cleanup_index_scale_factor GUC and storage parameter on\nPostgres 13, even though that is a stable release. This approach is\nunorthodox, but it has a kind of a precedent -- the recheck_on_update\nstorage param was disabled on the Postgres 11 branch by commit\n5d28c9bd. 
More importantly, it just happens to make sense, given the\nspecifics here.\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 9 Mar 2021 19:42:55 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 7:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> My current plan is to commit everything within the next day or two.\n> This includes backpatching to Postgres 13 only.\n\nPushed, thanks.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 10 Mar 2021 17:11:58 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 10:12 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Mar 9, 2021 at 7:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > My current plan is to commit everything within the next day or two.\n> > This includes backpatching to Postgres 13 only.\n>\n> Pushed, thanks.\n\nGreat! Thank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 11 Mar 2021 11:00:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removing vacuum_cleanup_index_scale_factor"
}
] |
[
{
"msg_contents": "Hi,\n\nAs discussed in the separate thread \"[PATCH] regexp_positions ( string text, pattern text, flags text ) → setof int4range[]\" [1]\nit's currently not possible to create an empty range with bounds information.\n\nThis patch tries to improve the situation by keeping the bounds information,\nand allow accessing it via lower() and upper().\n\nNo other semantics have been changed.\nAll tests passes without any changes.\n\nAll examples below give the same result before/after this patch:\n\nSELECT int4range(6,6,'[)');\nempty\nSELECT isempty(int4range(6,6,'[)'));\nTRUE\nSELECT int4range(6,6,'[)') = int4range(7,7,'[)');\nTRUE\nSELECT 'empty'::int4range;\nempty\nSELECT lower('empty'::int4range);\nNULL\nSELECT upper('empty'::int4range);\nNULL\nSELECT isempty('empty'::int4range);\nTRUE\nSELECT 'empty'::int4range = 'empty'::int4range;\nTRUE\nSELECT int4range(6,6,'[)') = 'empty'::int4range;\nTRUE\n\nThe only semantic change is lower() and upper()\nnow returns the lower and upper bounds\nfor empty ranges created with bounds:\n \nSELECT lower(int4range(6,6,'[)'));\n 6\nSELECT upper(int4range(6,6,'[)'));\n 6\n\nIsaac Morland asked an interesting question in the other thread [1]:\n>Doing this would introduce numerous questions which would have to be resolved.\n>For example, where/when is the empty range resulting from an intersection operation?\n\nThe result of intersection is with this patch unchanged,\nthe resulting empty range has no bounds information, e.g:\n\nSELECT lower(int4range(10,10,'[)') * int4range(20,20,'[)'));\nNULL\n\nPatch explained below:\n\nI've made use of the two previously not used null flags:\n\n-#define RANGE_LB_NULL 0x20 /* lower bound is null (NOT USED) */\n-#define RANGE_UB_NULL 0x40 /* upper bound is null (NOT USED) */\n+#define RANGE_LB_NULL 0x20 /* lower bound is null */\n+#define RANGE_UB_NULL 0x40 /* upper bound is null */\n\nI've changed the RANGE_HAS_LBOUND and RANGE_HAS_UBOUND macros\nto not look at RANGE_EMPTY:\n\n-#define 
RANGE_HAS_LBOUND(flags) (!((flags) & (RANGE_EMPTY | \\\n- RANGE_LB_NULL | \\\n- RANGE_LB_INF)))\n+#define RANGE_HAS_LBOUND(flags) (!((flags) & (RANGE_LB_NULL | RANGE_LB_INF)))\n\nThe definition for RANGE_HAS_UBOUND has been changed in the same way.\n\nThese NULL-flags are now set to explicitly indicate there is no bounds information,\nwhen parsing a text string containing the \"empty\" literal in range_parse(),\nor when the caller of make_range() passes empty=true:\n\n- flags |= RANGE_EMPTY;\n+ flags |= RANGE_EMPTY | RANGE_LB_NULL | RANGE_UB_NULL;\n\nIn the range_lower() and range_upper() functions,\nthe RANGE_HAS_...BOUND() macros are used,\ninstead of the old hard-coded expression, e.g.:\n\n- if (empty || lower.infinite)\n+ if (!RANGE_HAS_LBOUND(flags))\n\nFinally, in range_recv() we must not mask out the NULL flags,\nsince they are now used:\n\n flags &= (RANGE_EMPTY |\n RANGE_LB_INC |\n RANGE_LB_INF |\n+ RANGE_LB_NULL |\n RANGE_UB_INC |\n- RANGE_UB_INF);\n+ RANGE_UB_INF |\n+ RANGE_UB_NULL);\n\nThat's all of it.\n\nI think this little change would make range types more intuitive and useful in practice.\n\n/Joel\n\n[1] https://www.postgresql.org/message-id/5eae8911-241a-4432-accc-80e6ffecedfa%40www.fastmail.com",
"msg_date": "Tue, 02 Mar 2021 14:20:45 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> As discussed in the separate thread \"[PATCH] regexp_positions ( string text, pattern text, flags text ) → setof int4range[]\" [1]\n> it's currently not possible to create an empty range with bounds information.\n\n> This patch tries to improve the situation by keeping the bounds information,\n> and allow accessing it via lower() and upper().\n\nI think this is an actively bad idea. We had a clean set-theoretic\ndefinition of ranges as sets of points, and with this we would not.\nWe should not be whacking around the fundamental semantics of a\nwhole class of data types on the basis that it'd be cute to make\nregexp_position return its result as int4range rather than int[].\n\nIf we did go forward with this, what would the implications be for\nmultiranges?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Mar 2021 09:42:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 5:20 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> it's currently not possible to create an empty range with bounds information.\n> \n> This patch tries to improve the situation by keeping the bounds information,\n> and allow accessing it via lower() and upper().\n> \n> No other semantics have been changed.\n> All tests passes without any changes.\n\nI recall this issue of empty ranges not keeping any bounds information being discussed back when range types were developed, and the design choice was intentional. Searching the archives for that discussion, I don't find anything, probably because I'm not searching for the right keywords. Anybody have a link to that discussion?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 07:28:05 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 3:28 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Mar 2, 2021, at 5:20 AM, Joel Jacobson <joel@compiler.org> wrote:\n> >\n> > it's currently not possible to create an empty range with bounds\n> information.\n> >\n> > This patch tries to improve the situation by keeping the bounds\n> information,\n> > and allow accessing it via lower() and upper().\n> >\n> > No other semantics have been changed.\n> > All tests passes without any changes.\n>\n> I recall this issue of empty ranges not keeping any bounds information\n> being discussed back when range types were developed, and the design choice\n> was intentional. Searching the archives for that discussion, I don't find\n> anything, probably because I'm not searching for the right keywords.\n> Anybody have a link to that discussion?\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n Marc, perhaps you were referring to this discussion?\nhttps://www.postgresql.org/message-id/4D5534D0020000250003A87E@gw.wicourts.gov",
"msg_date": "Tue, 2 Mar 2021 16:51:01 +0000",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 8:51 AM, Pantelis Theodosiou <ypercube@gmail.com> wrote:\n> \n> \n> \n> On Tue, Mar 2, 2021 at 3:28 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> > On Mar 2, 2021, at 5:20 AM, Joel Jacobson <joel@compiler.org> wrote:\n> > \n> > it's currently not possible to create an empty range with bounds information.\n> > \n> > This patch tries to improve the situation by keeping the bounds information,\n> > and allow accessing it via lower() and upper().\n> > \n> > No other semantics have been changed.\n> > All tests passes without any changes.\n> \n> I recall this issue of empty ranges not keeping any bounds information being discussed back when range types were developed, and the design choice was intentional. Searching the archives for that discussion, I don't find anything, probably because I'm not searching for the right keywords. Anybody have a link to that discussion?\n> \n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n> \n> Marc, perhaps you were referring to this discussion?\n> https://www.postgresql.org/message-id/4D5534D0020000250003A87E@gw.wicourts.gov\n\nYes, I believe so. Thank you for the link.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 08:57:52 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 4:57 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Mar 2, 2021, at 8:51 AM, Pantelis Theodosiou <ypercube@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > On Tue, Mar 2, 2021 at 3:28 PM Mark Dilger <mark.dilger@enterprisedb.com>\n> wrote:\n> >\n> >\n> > > On Mar 2, 2021, at 5:20 AM, Joel Jacobson <joel@compiler.org> wrote:\n> > >\n> > > it's currently not possible to create an empty range with bounds\n> information.\n> > >\n> > > This patch tries to improve the situation by keeping the bounds\n> information,\n> > > and allow accessing it via lower() and upper().\n> > >\n> > > No other semantics have been changed.\n> > > All tests passes without any changes.\n> >\n> > I recall this issue of empty ranges not keeping any bounds information\n> being discussed back when range types were developed, and the design choice\n> was intentional. Searching the archives for that discussion, I don't find\n> anything, probably because I'm not searching for the right keywords.\n> Anybody have a link to that discussion?\n> >\n> > —\n> > Mark Dilger\n> > EnterpriseDB: http://www.enterprisedb.com\n> > The Enterprise PostgreSQL Company\n> >\n> >\n> > Marc, perhaps you were referring to this discussion?\n> >\n> https://www.postgresql.org/message-id/4D5534D0020000250003A87E@gw.wicourts.gov\n>\n> Yes, I believe so. Thank you for the link.\n>\n\nWelcome. Also this message, where I found the link and has an overview of\nthe different views at the time (and more links):\n\nhttps://www.postgresql.org/message-id/1299865026.3474.58.camel%40jdavis\n\nOn Fri, 2011-03-11 at 08:37 -0500, Bruce Momjian wrote:\n> > Where are we on this? The options are: 1. Rip out empty ranges. Several\n> people have been skeptical of their\n> usefulness, but I don't recall anyone directly saying that they should\n> be removed. 
Robert Haas made the point that range types aren't closed\n> under UNION:\n> http://archives.postgresql.org/pgsql-hackers/2011-02/msg01045.php So the\n> additional nice mathematical properties provided by empty ranges\n> are not as important (because it wouldn't be perfect anyway). 2. Change\n> the semantics. Erik Rijkers suggested that we define all\n> operators for empty ranges, perhaps using NULL semantics:\n> http://archives.postgresql.org/pgsql-hackers/2011-02/msg00942.php And\n> Kevin Grittner suggested that there could be discrete ranges of zero\n> length yet a defined starting point:\n> http://archives.postgresql.org/pgsql-hackers/2011-02/msg01042.php 3.\n> Leave empty ranges with the existing \"empty set\" semantics. Nathan\n> Boley made a good point here:\n> http://archives.postgresql.org/pgsql-hackers/2011-02/msg01108.php Right\n> now it's #3, and I lean pretty strongly toward keeping it. Without\n> #3, people will get confused when fairly simple operations fail in a\n> data-dependent way (at runtime). With #3, people will run into problems\n> only in situations where it is fairly dubious to have an empty range\n> anyway (and therefore likely a real error), such as finding ranges \"left\n> of\" an empty range. Otherwise, I'd prefer #1 to #2. I think #2 is a bad\n> path to take, and\n> we'll end up with a lot of unintuitive and error-prone operators. 
Regards,\n> Jeff Davis",
"msg_date": "Tue, 2 Mar 2021 17:14:09 +0000",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 15:42, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org> writes:\n> > As discussed in the separate thread \"[PATCH] regexp_positions ( string text, pattern text, flags text ) → setof int4range[]\" [1]\n> > it's currently not possible to create an empty range with bounds information.\n> \n> > This patch tries to improve the situation by keeping the bounds information,\n> > and allow accessing it via lower() and upper().\n> \n> I think this is an actively bad idea. We had a clean set-theoretic\n> definition of ranges as sets of points, and with this we would not.\n> We should not be whacking around the fundamental semantics of a\n> whole class of data types on the basis that it'd be cute to make\n> regexp_position return its result as int4range rather than int[].\n\nI think there are *lots* of other use-cases where the current semantics of range types are very problematic.\n\nThe regexp_positions() patch just demonstrates one concrete example\non when real-life zero-length ranges can definitively have positions,\nregardless of what the mathematicians thinks.\n(I use the term \"position\" here since if lower=upper bound,\nthen we're talking about a position, since it has zero length.)\n\nI think there is a risk lots of users coming from other programming environments\nwill misunderstand how ranges work, start implementing something using them,\nonly to later have to rewrite all their code using ranges due to eventually encountering the\nzero-length corner-case for which there is no work-around (except not using ranges).\n\nImagine e.g. 
a Rust user, who has learned how ranges work in Rust,\nand thinks the program below is valid and and expects it to output\n\"start 3 end 3 is_empty true\".\n\nfn main() {\n let r = std::ops::Range { start: 3, end: 3 };\n\n println!(\n \"start {} end {} is_empty {}\",\n r.start,\n r.end,\n r.is_empty()\n );\n}\n\nI think it would be a challenge to explain how PostgreSQL's range semantics\nto this user, why you get NULL when trying to\naccess the start and end values for this range.\n\nI feel this is a perfect example of when theory and practise has since long parted ways,\nand the theory is only cute until you face the ugly reality.\n\nThat said, subtle changes are of course possibly dangerous,\nand since I'm not a huge range type user myself,\nI can't have an opinion on how many rely on the current null semantics for lower() and upper().\n\nArgh! I wish we would have access to a large set of real-life real-time statistics on PostgreSQL SQL queries\nand result sets from lots of different users around the world, similar to the regex test corpus.\nIt's very unfair e.g. Amazon with their Aurora could in theory collect such statistics on all their users,\nso their Aurora-hackers could answers questions like\n\n \"Do lower() and upper() ever return NULL in real-life for ranges?\"\n\nWhile all we can do is to rely on user reports and our imagination.\nMaybe we should allow opting-in to contribute with statistics from production servers,\nto help us better understand how PostgreSQL is used in real-life?\nI see lots of problems with data privacy, business secrets etc, but perhaps there are something that can be done.\n\nOh well. At least it was fun to learn about how ranges are implemented behind the scenes.\n\nIf we cannot do a subtle change, then I think we should consider an entirely new range class,\njust like multi-ranges are added in v14. 
Maybe \"negrange\" could be a good name?\n\n> \n> If we did go forward with this, what would the implications be for\n> multiranges?\n\nNone. It would only affect lower()/upper() for a single range created with bounds.\n\nBefore patch:\n\nSELECT numrange(3,4) + numrange(5,5);\n [3,4)\nSELECT upper(numrange(3,4) + numrange(5,5));\n 4\nSELECT numrange(5,5);\n empty\nSELECT upper(numrange(5,5));\nNULL\n\nAfter patch:\n\nSELECT numrange(3,4) + numrange(5,5);\n [3,4)\nSELECT upper(numrange(3,4) + numrange(5,5));\n 4\nSELECT numrange(5,5);\n empty\nSELECT upper(numrange(5,5));\n 5\n\nAt the very least, I think we should in any case add test coverage of what lower()/upper() returns for empty ranges.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 18:52:09 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 9:52 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> On Tue, Mar 2, 2021, at 15:42, Tom Lane wrote:\n>> \"Joel Jacobson\" <joel@compiler.org> writes:\n>> > As discussed in the separate thread \"[PATCH] regexp_positions ( string text, pattern text, flags text ) → setof int4range[]\" [1]\n>> > it's currently not possible to create an empty range with bounds information.\n>> \n>> > This patch tries to improve the situation by keeping the bounds information,\n>> > and allow accessing it via lower() and upper().\n>> \n>> I think this is an actively bad idea. We had a clean set-theoretic\n>> definition of ranges as sets of points, and with this we would not.\n>> We should not be whacking around the fundamental semantics of a\n>> whole class of data types on the basis that it'd be cute to make\n>> regexp_position return its result as int4range rather than int[].\n> \n> I think there are *lots* of other use-cases where the current semantics of range types are very problematic.\n\nI'm inclined to think that this conversation is ten years too late. Range semantics are already relied upon in our code, but also in the code of others. It might be very hard to debug code that was correct when written but broken by this proposed change. The problem is not just with lower() and upper(), but with equality testing (mentioned upthread), since code may rely on two different \"positions\" (your word) both being equal, and both sorting the same.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 10:01:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 17:51, Pantelis Theodosiou wrote:\n> \n> Marc, perhaps you were referring to this discussion?\n> https://www.postgresql.org/message-id/4D5534D0020000250003A87E@gw.wicourts.gov\n\nThanks for the link to the discussion.\n\nI will read it with great interest and learn the arguments,\nso I can explain them to future versions of myself\nwhen they ask the same question ten years from now on this list.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 19:03:59 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 19:01, Mark Dilger wrote:\n> The problem is not just with lower() and upper(), but with equality testing (mentioned upthread), since code may rely on two different \"positions\" (your word) both being equal, and both sorting the same.\n\nThat's why the patch doesn't change equality.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 19:08:41 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On 03/02/21 13:01, Mark Dilger wrote:\n> The problem is not just with lower() and upper(), but with equality testing\n> (mentioned upthread), since code may rely on two different \"positions\"\n> (your word) both being equal, and both sorting the same.\n\nCould those concerns be addressed perhaps, not by adding an entirely new\njust-like-a-range-but-remembers-position-when-zero-width type (which would\nfeel wartlike to me), but by tweaking ranges to /secretly/ remember the\nposition when zero width?\n\nSecretly, in the sense that upper(), lower(), and the default sort\noperator would keep their established behavior, but new functions like\nupper_or_pos(), lower_or_pos() would return the non-NULL value even for\nan empty range, and another sort operator could be provided for use\nwhen one wants the ordering to reflect it?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 2 Mar 2021 13:16:03 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 10:08 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> On Tue, Mar 2, 2021, at 19:01, Mark Dilger wrote:\n>> The problem is not just with lower() and upper(), but with equality testing (mentioned upthread), since code may rely on two different \"positions\" (your word) both being equal, and both sorting the same.\n> \n> That's why the patch doesn't change equality.\n\nHow does that work if I SELECT DISTINCT ON (nr) ... and then take upper(nr). It's just random which values I get? \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 10:17:10 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 2, 2021, at 10:08 AM, Joel Jacobson <joel@compiler.org> wrote:\n>> On Tue, Mar 2, 2021, at 19:01, Mark Dilger wrote:\n>>> The problem is not just with lower() and upper(), but with equality testing (mentioned upthread), since code may rely on two different \"positions\" (your word) both being equal, and both sorting the same.\n\n>> That's why the patch doesn't change equality.\n\n> How does that work if I SELECT DISTINCT ON (nr) ... and then take upper(nr). It's just random which values I get? \n\nMore generally, values that are visibly different yet compare equal\nare user-unfriendly in the extreme. We do have cases like that\n(IEEE-float minus zero, area-based compare of some geometric types\ncome to mind) but they are all warts, not things to be emulated.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Mar 2021 13:28:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
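Tom's point about values that are visibly different yet compare equal can be demonstrated with his own example, IEEE-754 minus zero. This is a minimal Python illustration of the wart (not PostgreSQL code): equality and hashing collapse the two zeros, so which representative survives a deduplicating operation depends on accidental ordering.

```python
# IEEE-754 minus zero compares equal to plus zero, yet is visibly different.
neg, pos = -0.0, 0.0

assert neg == pos            # equality cannot tell them apart
assert str(neg) != str(pos)  # but the textual form can: '-0.0' vs '0.0'

# Containers that rely on equality/hashing silently collapse the two;
# which representative survives depends on insertion order.
assert len({neg, pos}) == 1
assert str(next(iter({pos: None, neg: None}))) == '0.0'  # first-inserted key wins
```

This is precisely the SELECT DISTINCT ON hazard raised upthread: with equal-but-distinguishable values, which representative you get back is an accident of evaluation order.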
{
"msg_contents": "\n\n> On Mar 2, 2021, at 10:16 AM, Chapman Flack <chap@anastigmatix.net> wrote:\n> \n> On 03/02/21 13:01, Mark Dilger wrote:\n>> The problem is not just with lower() and upper(), but with equality testing\n>> (mentioned upthread), since code may rely on two different \"positions\"\n>> (your word) both being equal, and both sorting the same.\n> \n> Could those concerns be addressed perhaps, not by adding an entirely new\n> just-like-a-range-but-remembers-position-when-zero-width type (which would\n> feel wartlike to me), but by tweaking ranges to /secretly/ remember the\n> position when zero width?\n> \n> Secretly, in the sense that upper(), lower(), and the default sort\n> operator would keep their established behavior, but new functions like\n> upper_or_pos(), lower_or_pos() would return the non-NULL value even for\n> an empty range, and another sort operator could be provided for use\n> when one wants the ordering to reflect it?\n\nI vaguely recall that ten years ago I wanted zero-width range types to not collapse into an empty range. I can't recall if I ever expressed that opinion -- I just remember thinking it would be nice, and for reasons similar to what Joel is arguing here. But I can't see having compares-equal-but-not-truly-equal ranges as a good idea. I think Tom is right about that.\n\nI also think the regexp work that inspired this thread could return something other than a range, so the motivation for creating a frankenstein range implementation doesn't really exist.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 11:02:10 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 17:51, Pantelis Theodosiou wrote:\n> Marc, perhaps you were referring to this discussion?\n> https://www.postgresql.org/message-id/4D5534D0020000250003A87E@gw.wicourts.gov\n\nI've read through the \"Re: Range Types: empty ranges\" thread from 2011.\n\nMy comments:\n\nJeff Davis wrote:\n>The cost, of course, is that not all operations are well-defined for\n>empty ranges. I think those are mostly operators like those mentioned in\n>the other thread: \">>\" (strictly right of), \"<<\" (strictly left of), and\n>\"-|-\" (adjacent); and perhaps \"&>\" and \"&<\". These are probably used a\n>little less frequently, and should probably not be used in a context\n>where empty ranges are permitted (if they are, it's likely a mistake and\n>an error should be thrown).\n\nInteresting. I realize all of these operators would actually be well defined for empty ranges *with* bounds information.\n\n\"Kevin Grittner\" wrote:\n>Perhaps it was a mistake to get so concrete rather than conceptual\n>-- basically, it seems like it could be a useful concept for any\n>planned or scheduled range with an indeterminate end point, which\n>you want to \"reserve\" up front and record in progress until\n>complete. The alternative would be that such \"ranges to be\" have a\n>parallel \"planned start value\" column of the same type as the range,\n>to be used as the start of the range once it is not empty. Or, as\n>another way to put it, it seems potentially useful to me to have an\n>empty range which is pinned to a location, in *addition* to the\n>unpinned empty ranges such as would be needed to represent the\n>intersection of two non-overlapping ranges.\n\nI fully agree with this. Such \"pinned to a location\" empty range\nis exactly what I needed for regexp_positions() and is what\nthe patch implements.\n\nIt seems that the consequences of allowing empty ranges with bounds information\nwasn't really discussed in detail in this thread. 
Nobody commented on Kevin's idea, as far as I can see when reading the thread.\n\nInstead, the discussion focused on the consequences of\nallowing empty ranges without bounds information,\nwhich apparently was finally accepted, since that's what we have now.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 20:14:25 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 19:16, Chapman Flack wrote:\n> On 03/02/21 13:01, Mark Dilger wrote:\n> > The problem is not just with lower() and upper(), but with equality testing\n> > (mentioned upthread), since code may rely on two different \"positions\"\n> > (your word) both being equal, and both sorting the same.\n> \n> Could those concerns be addressed perhaps, not by adding an entirely new\n> just-like-a-range-but-remembers-position-when-zero-width type (which would\n> feel wartlike to me), but by tweaking ranges to /secretly/ remember the\n> position when zero width?\n\nThis is actually how it's implemented. The patch doesn't affect equality.\nIt just stores the lower/upper bounds, if available, upon creation.\n\n> \n> Secretly, in the sense that upper(), lower(), and the default sort\n> operator would keep their established behavior, but new functions like\n> upper_or_pos(), lower_or_pos() would return the non-NULL value even for\n> an empty range, and another sort operator could be provided for use\n> when one wants the ordering to reflect it?\n\nThis is a great idea!\n\nThis would solve the potential problems of users relying\non upper()/lower() to always return null when the range is empty.\n\nSuch new functions could then be used by new users who have read the documentation\nand understand how they work.\nThis would effectively mean there would be absolutely no semantic changes at all.\n\nI will work on a new patch to try out this idea.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 20:20:59 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 19:17, Mark Dilger wrote:\n> > On Mar 2, 2021, at 10:08 AM, Joel Jacobson <joel@compiler.org> wrote:\n> > That's why the patch doesn't change equality.\n> \n> How does that work if I SELECT DISTINCT ON (nr) ... and then take upper(nr). It's just random which values I get? \n\nYes, it's random: since equality isn't changed, the sort operation cannot tell the difference, and a user who isn't aware of upper()/lower() couldn't reveal any differences either.\n\nDemo:\n\nCREATE TABLE t AS SELECT int4range(i,i+FLOOR(random()*2)::integer,'[)') AS nr FROM generate_series(1,10) AS i;\n\nSELECT nr, lower(nr), upper(nr) FROM t ORDER BY 1;\n nr | lower | upper\n--------+-------+-------\nempty | 10 | 10\nempty | 4 | 4\nempty | 6 | 6\nempty | 7 | 7\nempty | 1 | 1\nempty | 3 | 3\n[2,3) | 2 | 3\n[5,6) | 5 | 6\n[8,9) | 8 | 9\n[9,10) | 9 | 10\n(10 rows)\n\nSELECT DISTINCT ON (nr) nr, lower(nr), upper(nr) FROM t ORDER BY 1;\n nr | lower | upper\n--------+-------+-------\nempty | 10 | 10\n[2,3) | 2 | 3\n[5,6) | 5 | 6\n[8,9) | 8 | 9\n[9,10) | 9 | 10\n(5 rows)\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 20:34:36 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 11:34 AM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> Yes. It's random, since equality isn't changed, the sort operation cannot tell the difference, and nor could a user who isn't aware of upper() / lower() could reveal differences.\n\nThis sounds unworkable even just in light of the original motivation for this whole thread. If I use your proposed regexp_positions(string text, pattern text, flags text) function to parse a large number of \"positions\" from a document, store all those positions in a table, and do a join of those positions against something else, it's not going to work. Positions will randomly vanish from the results of that join, which is going to be really surprising. I'm sure there are other examples of Tom's general point about compares-equal-but-not-equal datatypes.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 11:42:30 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 11:42 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Mar 2, 2021, at 11:34 AM, Joel Jacobson <joel@compiler.org> wrote:\n>> \n>> Yes. It's random, since equality isn't changed, the sort operation cannot tell the difference, and nor could a user who isn't aware of upper() / lower() could reveal differences.\n> \n> This sounds unworkable even just in light of the original motivation for this whole thread. If I use your proposed regexp_positions(string text, pattern text, flags text) function to parse a large number of \"positions\" from a document, store all those positions in a table, and do a join of those positions against something else, it's not going to work. Positions will randomly vanish from the results of that join, which is going to be really surprising. I'm sure there are other examples of Tom's general point about compares-equal-but-not-equal datatypes.\n\nI didn't phrase that clearly enough. I'm thinking about whether you include the bounds information in the hash function. The current implementation of hash_range(PG_FUNCTION_ARGS) is going to hash the lower and upper bounds, since you didn't change it to do otherwise, so \"equal\" values won't always hash the same. I haven't tested this out, but it seems you could get a different set of rows depending on whether the planner selects a hash join.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 11:57:53 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
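The invariant Mark is pointing at is the standard equality/hash contract: values that compare equal must hash the same, or hash-based operations (hash joins, hash aggregates) will treat them as distinct. A hypothetical Python model of the situation the patch creates — range_eq() treating all empty ranges as equal while an unpatched hash_range() still mixes in the stored bounds — shows the breakage; the class and its names are illustrative, not PostgreSQL's actual code:

```python
class BadRange:
    """Hypothetical model of an 'empty' range that remembers its bounds,
    with equality ignoring them but the hash still including them."""

    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper

    def is_empty(self):
        return self.lower >= self.upper

    def __eq__(self, other):
        # Mirrors range comparison: every empty range compares equal.
        if self.is_empty() and other.is_empty():
            return True
        return (self.lower, self.upper) == (other.lower, other.upper)

    def __hash__(self):
        # Mirrors a hash function that still hashes the stored bounds.
        return hash((self.lower, self.upper))

a, b = BadRange(3, 3), BadRange(5, 5)
assert a == b              # "equal" under the type's comparison semantics...
assert hash(a) != hash(b)  # ...yet hashed differently: contract broken
assert b not in {a}        # so a hash-based lookup misses an "equal" value
```

This is the concrete mechanism behind "a different set of rows depending on whether the planner selects a hash join": a sort-based plan would group a and b together, a hash-based one would not.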
{
"msg_contents": "On Tue, Mar 2, 2021, at 20:20, Joel Jacobson wrote:\n> On Tue, Mar 2, 2021, at 19:16, Chapman Flack wrote:\n>> Secretly, in the sense that upper(), lower(), and the default sort\n>> operator would keep their established behavior, but new functions like\n>> upper_or_pos(), lower_or_pos() would return the non-NULL value even for\n>> an empty range, and another sort operator could be provided for use\n>> when one wants the ordering to reflect it?\n> \n> I will work on a new patch to try out this idea.\n\nHere is a patch implementing this idea.\n\nlower() and upper() are now restored to their original behavior.\n\nThe new functions range_start() and range_end()\nwork just like lower() and upper(),\nexcept they also return bounds information for empty ranges,\nif available; otherwise they return null.\n\nThis means there is no risk of affecting any current users of ranges.\n\nI think this is a good pragmatic solution to many real-world problems that can be solved efficiently with ranges.\n\nI've also added test coverage of lower() and upper() for null range values.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 21:00:42 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
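The v2 semantics described above can be modeled in a few lines. This is an illustrative Python sketch of the proposed behavior (the real patch is C inside PostgreSQL's range machinery; only the function names range_start()/range_end() come from the message, the rest is assumption): equality, lower(), and upper() are untouched, while the new accessors expose the remembered bounds even for an empty range.

```python
class EmptyAwareRange:
    """Sketch of the v2 proposal: an empty range secretly keeps its bounds."""

    def __init__(self, lo, hi):
        self._lo, self._hi = lo, hi

    def isempty(self):
        return self._lo >= self._hi

    # lower()/upper() keep their established behavior: NULL (None) when empty.
    def lower(self):
        return None if self.isempty() else self._lo

    def upper(self):
        return None if self.isempty() else self._hi

    # The new accessors return the stored bounds even for empty ranges.
    def range_start(self):
        return self._lo

    def range_end(self):
        return self._hi

r = EmptyAwareRange(5, 5)   # like numrange(5,5), which displays as 'empty'
assert r.isempty()
assert r.lower() is None and r.upper() is None      # unchanged semantics
assert r.range_start() == 5 and r.range_end() == 5  # new: bounds survive
```

Note that even in this model the hidden state still leaks through hashing and equality at the storage level, which is the objection Mark raises in the following messages.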
{
"msg_contents": "On Tue, Mar 2, 2021, at 20:57, Mark Dilger wrote:\n> I didn't phrase that clearly enough. I'm thinking about whether you include the bounds information in the hash function. The current implementation of hash_range(PG_FUNCTION_ARGS) is going to hash the lower and upper bounds, since you didn't change it to do otherwise, so \"equal\" values won't always hash the same. I haven't tested this out, but it seems you could get a different set of rows depending on whether the planner selects a hash join.\n\nI think this issue is solved by the empty-ranges-with-bounds-information-v2.patch I just sent,\nsince with it, there are no semantic changes at all: lower() and upper() work like before.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 21:04:04 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On 03/02/21 14:20, Joel Jacobson wrote:\n> This would effectively mean there would be absolutely no semantic changes at all.\n> \n> I will work on a new patch to try out this idea.\n\nI may have been assuming a degree of orthogonality in SQL that isn't\nreally there ... only in a few situations (like creating an index) do\nyou easily get to specify a non-default operator class to use when\ncomparing or hashing a value.\n\nSo perhaps a solution could be built on the same range machinery, but\nrequiring some new syntax in CREATE TYPE ... AS RANGE, something like\nWITH POSITIONS. A new concrete range type created that way would not\nbe a whole different kind of a thing, and would share most machinery\nwith other range types, but would have the position-remembering\nbehavior. Given a range type created over int4 in that way, maybe\nnamed int4prange, the regexp_positions function could return one of those.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 2 Mar 2021 15:07:33 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
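Chapman's separate-type idea (a range kind created WITH POSITIONS, e.g. a hypothetical int4prange) sidesteps the equality/hash problem entirely, because such a type can make the position part of its value semantics. A hedged Python sketch of that design choice — all names here are invented for illustration:

```python
class PinnedEmptyRange:
    """Sketch of a distinct 'position-remembering' range kind: empty values
    stay pinned to a location, and equality includes the stored bounds,
    so the equality/hash contract holds by construction."""

    def __init__(self, lower, upper):
        if lower > upper:
            raise ValueError("lower bound must not exceed upper bound")
        self.lower, self.upper = lower, upper

    def isempty(self):
        return self.lower == self.upper

    def __eq__(self, other):
        # Bounds are part of the value, even when the range is empty.
        return (self.lower, self.upper) == (other.lower, other.upper)

    def __hash__(self):
        # Hash exactly what equality compares: contract preserved.
        return hash((self.lower, self.upper))

a, b = PinnedEmptyRange(3, 3), PinnedEmptyRange(5, 5)
assert a.isempty() and b.isempty()
assert a != b                        # empty, but pinned to different positions
assert PinnedEmptyRange(3, 3) in {a} # hash-based lookup agrees with equality
```

The trade-off is the one discussed throughout the thread: such a type deliberately abandons the "all empty ranges are the same set of points" semantics, which is why it would have to be a new range kind rather than a change to the existing ones.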
{
"msg_contents": "On Tue, Mar 2, 2021, at 19:28, Tom Lane wrote:\n> More generally, values that are visibly different yet compare equal\n> are user-unfriendly in the extreme. We do have cases like that\n> (IEEE-float minus zero, area-based compare of some geometric types\n> come to mind) but they are all warts, not things to be emulated.\n\nI almost agree with you, but if forced to choose,\nI would rather have two wart feet I can use, rather than limping around on only one wart free foot.\n\n/Joel",
"msg_date": "Tue, 02 Mar 2021 21:26:50 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 12:04 PM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> On Tue, Mar 2, 2021, at 20:57, Mark Dilger wrote:\n>> I didn't phrase that clearly enough. I'm thinking about whether you include the bounds information in the hash function. The current implementation of hash_range(PG_FUNCTION_ARGS) is going to hash the lower and upper bounds, since you didn't change it to do otherwise, so \"equal\" values won't always hash the same. I haven't tested this out, but it seems you could get a different set of rows depending on whether the planner selects a hash join.\n> \n> I think this issue is solved by the empty-ranges-with-bounds-information-v2.patch I just sent,\n> since with it, there are no semantic changes at all, lower() and upper() works like before.\n\nThere are semantic differences, because hash_range() doesn't call lower() and upper(), it uses RANGE_HAS_LBOUND and RANGE_HAS_UBOUND, the behavior of which you have changed. I created a regression test and expected results and checked after applying your patch, and your patch breaks the hash function behavior. Notice that before your patch, all three ranges hashed to the same value, but not so after:\n\n\n@@ -1,18 +1,18 @@\n select hash_range('[a,a)'::textrange);\n hash_range\n ------------\n- 484847245\n+ -590102690\n (1 row)\n\n select hash_range('[b,b)'::textrange);\n hash_range\n ------------\n- 484847245\n+ 281562732\n (1 row)\n\n select hash_range('[c,c)'::textrange);\n- hash_range \n-------------\n- 484847245\n+ hash_range \n+-------------\n+ -1887445565\n (1 row)\n\n\nYou might change how hash_range() works to get all \"equal\" values to hash the same value, but that just gets back to the problem that non-equal things appear to be equal. I guess that's your two-warty-feet preference, but not everyone is going to be in agreement on that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 12:40:17 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, Mar 2, 2021, at 21:40, Mark Dilger wrote:\n> \n> \n> > On Mar 2, 2021, at 12:04 PM, Joel Jacobson <joel@compiler.org> wrote:\n> > \n> > On Tue, Mar 2, 2021, at 20:57, Mark Dilger wrote:\n> >> I didn't phrase that clearly enough. I'm thinking about whether you include the bounds information in the hash function. The current implementation of hash_range(PG_FUNCTION_ARGS) is going to hash the lower and upper bounds, since you didn't change it to do otherwise, so \"equal\" values won't always hash the same. I haven't tested this out, but it seems you could get a different set of rows depending on whether the planner selects a hash join.\n> > \n> > I think this issue is solved by the empty-ranges-with-bounds-information-v2.patch I just sent,\n> > since with it, there are no semantic changes at all, lower() and upper() works like before.\n> \n> There are semantic differences, because hash_range() doesn't call lower() and upper(), it uses RANGE_HAS_LBOUND and RANGE_HAS_UBOUND, the behavior of which you have changed. I created a regression test and expected results and checked after applying your patch, and your patch breaks the hash function behavior. Notice that before your patch, all three ranges hashed to the same value, but not so after:\n> \n> \n> @@ -1,18 +1,18 @@\n> select hash_range('[a,a)'::textrange);\n> hash_range\n> ------------\n> - 484847245\n> + -590102690\n> (1 row)\n> \n> select hash_range('[b,b)'::textrange);\n> hash_range\n> ------------\n> - 484847245\n> + 281562732\n> (1 row)\n> \n> select hash_range('[c,c)'::textrange);\n> - hash_range \n> -------------\n> - 484847245\n> + hash_range \n> +-------------\n> + -1887445565\n> (1 row)\n> \n> \n> You might change how hash_range() works to get all \"equal\" values to hash the same value, but that just gets back to the problem that non-equal things appear to be equal. I guess that's your two-warty-feet preference, but not everyone is going to be in agreement on that.\n\nYikes. 
Here be dragons. I think I want my wart free foot back please.\n\nMany thanks for explaining. I think I’ll abandon this patch. I guess implementing an entirely new range type could be an acceptable solution, but that’s too big of a project for me to manage on my own. If any more experienced hackers are interested in such a project, I would love to help if I can.\n\n> \n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n\nKind regards,\n\nJoel",
"msg_date": "Tue, 02 Mar 2021 21:51:29 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "\n\n> On Mar 2, 2021, at 12:51 PM, Joel Jacobson <joel@compiler.org> wrote:\n> \n> Yikes. Here be dragons. I think I want my wart free foot back please.\n> \n> Many thanks for explaining. I think I’ll abandon this patch. I guess implementing an entirely new range type could be an acceptable solution, but that’s too big of a project for me to manage on my own. If any more experienced hackers are interested in such a project, I would love to help if I can.\n\nPart of what was strange about arguing against your patch is that I kind of wanted the feature to work that way back when it originally got written. (Not to say that it *should* have worked that way, just that part of me wanted it.)\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 12:58:14 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
    "msg_contents": "On Tue, Mar 2, 2021, at 21:58, Mark Dilger wrote:\n> Part of what was strange about arguing against your patch is that I kind of wanted the feature to work that way back when it originally got written. (Not to say that it *should* have worked that way, just that part of me wanted it.)\n\nThat's encouraging to hear. I've marked the patch as Rejected, since it was a dead end.\n\nI would accept things as they are, if there was nothing that could be done,\nbut since Multiranges have not been released yet,\nI think it's worth thinking intensively about possible problems before it's too late.\n\nFor discrete types, Multiranges <=> Sets should be true,\ni.e. they should be equivalent, since there cannot be any values\nin between two discrete adjacent values.\n\nDue to the internals of ranges, it's not possible to create a range\nthat covers all possible valid values for a discrete type,\nsince the very last value cannot be included.\n\nExample for int4:\n\nSELECT int4range(-2147483647,2147483647,'[]');\nERROR: integer out of range\n\nSELECT int4range(0,2147483647,'[]');\nERROR: integer out of range\n\nSELECT int4range(-2147483647,0,'[]');\n int4range\n-----------------\n[-2147483647,1)\n(1 row)\n\nHowever, 2147483647 is a valid int4 value.\n\nThis is due to the unfortunate decision to use [) as the canonical form,\nsince it must then always be able to calculate the next adjacent value.\n\nIf instead [] had been used as the canonical form,\nwe would not have this problem.\n\nNot a biggie for int4 maybe, but imagine a very small discrete type,\nwhere it's actually necessary to create a range including its very last value.\n\nSuggestion #1: Use [] as the canonical form for discrete types.\nThis would allow creating ranges for all values for discrete types.\n\n/Joel",
"msg_date": "Thu, 04 Mar 2021 07:24:45 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Tue, 2021-03-02 at 18:52 +0100, Joel Jacobson wrote:\n> and the theory is only cute until you face the ugly reality.\n\nThere are a lot of range functions and operators, as well as different\nkinds of ranges (continuous and discrete), and multiranges too. That\nmeans a lot of ways to combine operations in novel ways, and (if we\naren't careful) a lot of ways to produce very unexpected results. \n\nThe benefit of falling back on theory is that there's an answer ready\nwhen we need to define or explain the semantics. It might not match\neveryone's intuition, but usually avoids the worst kinds of surprises.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 03 Mar 2021 23:05:13 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
    "msg_contents": "On Thu, 4 Mar 2021 at 01:25, Joel Jacobson <joel@compiler.org> wrote:\n\nSuggestion #1: Use [] as the canonical form for discrete types.\n> This would allow creating ranges for all values for discrete types.\n>\n\nI won't reiterate here, but there are fundamental reasons why [) is\ndefinitely the right default and canonical form.\n\nIn any case, you can create a range containing the last value:\n\nodyssey=> select 2147483647 <@ int4range (0, null);\n ?column?\n----------\n t\n(1 row)\n\nodyssey=>\n\nIt does seem reasonable to me to change it so that specifying the last\nvalue as the right end with ] would use a null endpoint instead of\nproducing an error when it tries to increment the bound.",
"msg_date": "Thu, 4 Mar 2021 10:21:46 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
},
{
"msg_contents": "On Thu, Mar 4, 2021, at 16:21, Isaac Morland wrote:\n> On Thu, 4 Mar 2021 at 01:25, Joel Jacobson <joel@compiler.org> wrote:\n> \n>> __\n>> Suggestion #1: Use [] as the canonical form for discrete types.\n>> This would allow creating ranges for all values for discrete types.\n> \n> I won't reiterate here, but there are fundamental reasons why [) is definitely the right default and canonical form.\n\nIt would be interesting to hear the reasons.\n\nFor discrete types, there are only exact values, there is nothing in between two adjacent discrete values.\nSo if we mean a range covering only the integer 5, why can't we just say [5,5] which simply means \"5\"?\nWhy is it necessary to express it as [5,6) which I interpret as the much more complex \"all integers from 5 up until just before the integer 6\".\n\nWe know for sure nothing can exist after 5 before 6, it's void, so why is it necessary to be explicit about including this space which we know can't contain any values?\n\nFor discrete types, we wouldn't even need the inclusive/exclusive features at all.\n\n> \n> In any case, you can create a range containing the last value:\n> \n> odyssey=> select 2147483647 <@ int4range (0, null);\n> ?column? 
\n> ----------\n> t\n> (1 row)\n> \n> odyssey=> \n> \n> It does seem reasonable to me to change it so that specifying the last value as the right end with ] would use a null endpoint instead of producing an error when it tries to increment the bound.\n\nNeat hack, thanks.\n\n/Joel",
"msg_date": "Thu, 04 Mar 2021 19:05:09 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support empty ranges with bounds information"
}
]
[
{
"msg_contents": "In talking to Teodor this week, I have written the attached C comment\npatch which improves our description of GiST's NSN and GistBuildLSN\nvalues.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee",
"msg_date": "Tue, 2 Mar 2021 11:40:21 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "GiST comment improvement"
},
{
    "msg_contents": "\n\n> On Mar 2, 2021, at 21:40, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> In talking to Teodor this week, I have written the attached C comment\n> patch which improves our description of GiST's NSN and GistBuildLSN\n> values.\n\nI'd suggest also adding an NSN acronym description to the Concurrency section of src/backend/access/gist/README. And maybe even to doc/src/sgml/acronyms.sgml. Just for connectivity.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 2 Mar 2021 22:25:23 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: GiST comment improvement"
},
{
    "msg_contents": "On Tue, Mar 2, 2021 at 10:25:23PM +0500, Andrey Borodin wrote:\n> \n> \n> > On Mar 2, 2021, at 21:40, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > In talking to Teodor this week, I have written the attached C comment\n> > patch which improves our description of GiST's NSN and GistBuildLSN\n> > values.\n> \n> I'd suggest also adding an NSN acronym description to the Concurrency section of src/backend/access/gist/README. And maybe even to doc/src/sgml/acronyms.sgml. Just for connectivity.\n\nUh, NSN is only something that exists in the source code. Do we document\nthose cases?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Tue, 2 Mar 2021 13:12:51 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: GiST comment improvement"
},
{
"msg_contents": "On Tue, Mar 2, 2021 at 11:40:21AM -0500, Bruce Momjian wrote:\n> In talking to Teodor this week, I have written the attached C comment\n> patch which improves our description of GiST's NSN and GistBuildLSN\n> values.\n\nPatch applied.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n",
"msg_date": "Wed, 10 Mar 2021 17:03:26 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: GiST comment improvement"
}
]
[
{
"msg_contents": "PFA a simple patch that implements support for the PROXY protocol.\n\nThis is a protocol common and very light weight in proxies and load\nbalancers (haproxy is one common example, but also for example the AWS\ncloud load balancers). Basically this protocol prefixes the normal\nconnection with a header and a specification of what the original host\nwas, allowing the server to unwrap that and get the correct client\naddress instead of just the proxy ip address. It is a one-way protocol\nin that there is no response from the server, it's just purely a\nprefix of the IP information.\n\nUsing this when PostgreSQL is behind a proxy allows us to keep using\npg_hba.conf rules based on the original ip address, as well as track\nthe original address in log messages and pg_stat_activity etc.\n\nThe implementation adds a parameter named proxy_servers which lists\nthe ips or ip+cidr mask to be trusted. Since a proxy can decide what\nthe origin is, and this is used for security decisions, it's very\nimportant to not just trust any server, only those that are\nintentionally used. By default, no servers are listed, and thus the\nprotocol is disabled.\n\nWhen specified, and the connection on the normal port has the proxy\nprefix on it, and the connection comes in from one of the addresses\nlisted as valid proxy servers, we will replace the actual IP address\nof the client with the one specified in the proxy packet.\n\nCurrently there is no information about the proxy server in the\npg_stat_activity view, it's only available as a log message. But maybe\nit should go in pg_stat_activity as well? Or in a separate\npg_stat_proxy view?\n\n(In passing, I note that pq_discardbytes were in pqcomm.h, yet listed\nas static in pqcomm.c -- but now made non-static)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 2 Mar 2021 18:43:07 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "PROXY protocol support"
},
{
"msg_contents": "Hi,\n\nOn Tue, 2 Mar 2021 at 14:43, Magnus Hagander <magnus@hagander.net> wrote:\n> PFA a simple patch that implements support for the PROXY protocol.\n\nNice. I didn't know I needed this. But in hindsight, I would've used\nit quite a few times in the past if I could have.\n\n> The implementation adds a parameter named proxy_servers which lists\n> the ips or ip+cidr mask to be trusted. Since a proxy can decide what\n> the origin is, and this is used for security decisions, it's very\n> important to not just trust any server, only those that are\n> intentionally used. By default, no servers are listed, and thus the\n> protocol is disabled.\n\nMight make sense to add special cases for 'samehost' and 'samenet', as\nin hba rules, as proxy servers are commonly on the same machine or\nshare one of the same internal networks.\n\nDespite the security issues, I'm sure people will soon try and set\nproxy_servers='*' or 'all' if they think this setting works as\nlisten_addresses or as pg_hba. But I don't think I'd make these use\ncases easier.\n\nTureba - Arthur Nascimento\n\n\n",
"msg_date": "Tue, 2 Mar 2021 15:42:27 -0300",
"msg_from": "Arthur Nascimento <tureba@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Tue, 2021-03-02 at 18:43 +0100, Magnus Hagander wrote:\r\n> PFA a simple patch that implements support for the PROXY protocol.\r\n\r\nI'm not all the way through the patch yet, but this part jumped out at\r\nme:\r\n\r\n> +\tif (memcmp(proxyheader.sig, \"\\x0d\\x0a\\x0d\\x0a\\x00\\x0d\\x0a\\x51\\x55\\x49\\x54\\x0a\", sizeof(proxyheader.sig)) != 0)\r\n> +\t{\r\n> +\t\t/*\r\n> +\t\t * Data is there but it wasn't a proxy header. Also fall through to\r\n> +\t\t * normal processing\r\n> +\t\t */\r\n> +\t\tpq_endmsgread();\r\n> +\t\treturn STATUS_OK;\r\n\r\nFrom my reading, the spec explicitly disallows this sort of fallback\r\nbehavior:\r\n\r\n> The receiver MUST be configured to only receive the protocol described in this\r\n> specification and MUST not try to guess whether the protocol header is present\r\n> or not. This means that the protocol explicitly prevents port sharing between\r\n> public and private access.\r\n\r\nYou might say, \"if we already trust the proxy server, why should we\r\ncare?\" but I think the point is that you want to catch\r\nmisconfigurations where the middlebox is forwarding bare TCP without\r\nadding a PROXY header of its own, which will \"work\" for innocent\r\nclients but in reality is a ticking timebomb. If you've decided to\r\ntrust an intermediary to use PROXY connections, then you must _only_\r\naccept PROXY connections from that intermediary. Does that seem like a\r\nreasonable interpretation?\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 3 Mar 2021 00:50:15 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 1:50 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Tue, 2021-03-02 at 18:43 +0100, Magnus Hagander wrote:\n> > PFA a simple patch that implements support for the PROXY protocol.\n>\n> I'm not all the way through the patch yet, but this part jumped out at\n> me:\n>\n> > + if (memcmp(proxyheader.sig, \"\\x0d\\x0a\\x0d\\x0a\\x00\\x0d\\x0a\\x51\\x55\\x49\\x54\\x0a\", sizeof(proxyheader.sig)) != 0)\n> > + {\n> > + /*\n> > + * Data is there but it wasn't a proxy header. Also fall through to\n> > + * normal processing\n> > + */\n> > + pq_endmsgread();\n> > + return STATUS_OK;\n>\n> From my reading, the spec explicitly disallows this sort of fallback\n> behavior:\n>\n> > The receiver MUST be configured to only receive the protocol described in this\n> > specification and MUST not try to guess whether the protocol header is present\n> > or not. This means that the protocol explicitly prevents port sharing between\n> > public and private access.\n>\n> You might say, \"if we already trust the proxy server, why should we\n> care?\" but I think the point is that you want to catch\n> misconfigurations where the middlebox is forwarding bare TCP without\n> adding a PROXY header of its own, which will \"work\" for innocent\n> clients but in reality is a ticking timebomb. If you've decided to\n> trust an intermediary to use PROXY connections, then you must _only_\n> accept PROXY connections from that intermediary. Does that seem like a\n> reasonable interpretation?\n\nI definitely missed that part of the spec. Ugh.\n\nThat said, I'm not sure it's *actually* an issue in the case of\nPostgreSQL. Given that doing what you're suggesting, accidentally\npassing connections without PROXY, will get caught in pg_hba.conf.\n\nThat said, I agree with your interpretation, and it's pretty easy to\nchange it to that. Basically we just have to do the IP check *before*\ndoing the PROXY protocol check. 
It makes testing a bit more difficult\nthough, but maybe worth it?\n\nI've attached a POC that does that. Note that I have *not* updated the docs!\n\nAnother option would of course be to listen on a separate port for it,\nwhich seems to be the \"haproxy way\". That would be slightly more code\n(we'd still want to keep the code for validating the list of trusted\nproxies I'd say), but maybe worth doing?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 3 Mar 2021 10:00:25 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 10:00 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Wed, Mar 3, 2021 at 1:50 AM Jacob Champion <pchampion@vmware.com> wrote:\n> >\n> > On Tue, 2021-03-02 at 18:43 +0100, Magnus Hagander wrote:\n> > > PFA a simple patch that implements support for the PROXY protocol.\n> >\n> > I'm not all the way through the patch yet, but this part jumped out at\n> > me:\n> >\n> > > + if (memcmp(proxyheader.sig, \"\\x0d\\x0a\\x0d\\x0a\\x00\\x0d\\x0a\\x51\\x55\\x49\\x54\\x0a\", sizeof(proxyheader.sig)) != 0)\n> > > + {\n> > > + /*\n> > > + * Data is there but it wasn't a proxy header. Also fall through to\n> > > + * normal processing\n> > > + */\n> > > + pq_endmsgread();\n> > > + return STATUS_OK;\n> >\n> > From my reading, the spec explicitly disallows this sort of fallback\n> > behavior:\n> >\n> > > The receiver MUST be configured to only receive the protocol described in this\n> > > specification and MUST not try to guess whether the protocol header is present\n> > > or not. This means that the protocol explicitly prevents port sharing between\n> > > public and private access.\n> >\n> > You might say, \"if we already trust the proxy server, why should we\n> > care?\" but I think the point is that you want to catch\n> > misconfigurations where the middlebox is forwarding bare TCP without\n> > adding a PROXY header of its own, which will \"work\" for innocent\n> > clients but in reality is a ticking timebomb. If you've decided to\n> > trust an intermediary to use PROXY connections, then you must _only_\n> > accept PROXY connections from that intermediary. Does that seem like a\n> > reasonable interpretation?\n>\n> I definitely missed that part of the spec. Ugh.\n>\n> That said, I'm not sure it's *actually* an issue in the case of\n> PostgreSQL. 
Given that doing what you're suggesting, accidentally\n> passing connections without PROXY, will get caught in pg_hba.conf.\n>\n> That said, I agree with your interpretation, and it's pretty easy to\n> change it to that. Basically we just have to do the IP check *before*\n> doing the PROXY protocol check. It makes testing a bit more difficult\n> though, but maybe worth it?\n>\n> I've attached a POC that does that. Note that I have *not* updated the docs!\n>\n> Another option would of course be to listen on a separate port for it,\n> which seems to be the \"haproxy way\". That would be slightly more code\n> (we'd still want to keep the code for validating the list of trusted\n> proxies I'd say), but maybe worth doing?\n\nIn order to figure that out, I hacked up a poc on that. Once again\nwithout updates to the docs, but shows approximately how much code\ncomplexity it adds (not much).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 3 Mar 2021 10:39:55 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
    "msg_contents": "+10 on this one!\n\nHosting a farm of read replicas and r/w endpoint behind an HAproxy defeats\nthe powerful pg_hba purpose by hiding the real source address... which is\nbad for some environments with strict conformance and audit requirements\n\n\nOn Tue, Mar 2, 2021 at 12:43, Magnus Hagander <magnus@hagander.net> wrote:\n\n> PFA a simple patch that implements support for the PROXY protocol.\n>\n> This is a protocol common and very light weight in proxies and load\n> balancers (haproxy is one common example, but also for example the AWS\n> cloud load balancers). Basically this protocol prefixes the normal\n> connection with a header and a specification of what the original host\n> was, allowing the server to unwrap that and get the correct client\n> address instead of just the proxy ip address. It is a one-way protocol\n> in that there is no response from the server, it's just purely a\n> prefix of the IP information.\n>\n> Using this when PostgreSQL is behind a proxy allows us to keep using\n> pg_hba.conf rules based on the original ip address, as well as track\n> the original address in log messages and pg_stat_activity etc.\n>\n> The implementation adds a parameter named proxy_servers which lists\n> the ips or ip+cidr mask to be trusted. Since a proxy can decide what\n> the origin is, and this is used for security decisions, it's very\n> important to not just trust any server, only those that are\n> intentionally used. By default, no servers are listed, and thus the\n> protocol is disabled.\n>\n> When specified, and the connection on the normal port has the proxy\n> prefix on it, and the connection comes in from one of the addresses\n> listed as valid proxy servers, we will replace the actual IP address\n> of the client with the one specified in the proxy packet.\n>\n> Currently there is no information about the proxy server in the\n> pg_stat_activity view, it's only available as a log message. 
But maybe\n> it should go in pg_stat_activity as well? Or in a separate\n> pg_stat_proxy view?\n>\n> (In passing, I note that pq_discardbytes were in pqcomm.h, yet listed\n> as static in pqcomm.c -- but now made non-static)\n>\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/\n> Work: https://www.redpill-linpro.com/\n>",
"msg_date": "Wed, 3 Mar 2021 09:13:43 -0500",
"msg_from": "Bruno Lavoie <bl@brunol.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "> PFA a simple patch that implements support for the PROXY protocol.\n> \n> This is a protocol common and very light weight in proxies and load\n> balancers (haproxy is one common example, but also for example the AWS\n> cloud load balancers). Basically this protocol prefixes the normal\n> connection with a header and a specification of what the original host\n> was, allowing the server to unwrap that and get the correct client\n> address instead of just the proxy ip address. It is a one-way protocol\n> in that there is no response from the server, it's just purely a\n> prefix of the IP information.\n\nIs there any formal specification for the \"a protocol common and very\nlight weight in proxies\"? I am asking because I was expecting that is\nexplained in your patch (hopefully in \"Frontend/Backend Protocol\"\nchapter) but I couldn't find it in your patch.\n\nAlso we need a regression test for this feature.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 04 Mar 2021 10:42:36 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, 2021-03-04 at 10:42 +0900, Tatsuo Ishii wrote:\r\n> Is there any formal specification for the \"a protocol common and very\r\n> light weight in proxies\"?\r\n\r\nSee\r\n\r\n https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\r\n\r\nwhich is maintained by HAProxy Technologies.\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 4 Mar 2021 19:45:37 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, 2021-03-03 at 10:39 +0100, Magnus Hagander wrote:\r\n> On Wed, Mar 3, 2021 at 10:00 AM Magnus Hagander <magnus@hagander.net> wrote:\r\n> > Another option would of course be to listen on a separate port for it,\r\n> > which seems to be the \"haproxy way\". That would be slightly more code\r\n> > (we'd still want to keep the code for validating the list of trusted\r\n> > proxies I'd say), but maybe worth doing?\r\n> \r\n> In order to figure that out, I hacked up a poc on that. Once again\r\n> without updates to the docs, but shows approximately how much code\r\n> complexity it adds (not much).\r\n\r\nFrom a configuration perspective, I like that the separate-port\r\napproach can shift the burden of verifying trust to an external\r\nfirewall, and that it seems to match the behavior of other major server\r\nsoftware. But I don't have any insight into the relative security of\r\nthe two options in practice; hopefully someone else can chime in.\r\n\r\n> memset((char *) &hints, 0, sizeof(hints));\r\n> hints.ai_flags = AI_NUMERICHOST;\r\n> hints.ai_family = AF_UNSPEC;\r\n> \r\n> ret = pg_getaddrinfo_all(tok, NULL, &hints, &gai_result);\r\n\r\nIdle thought I had while setting up a local test rig: Are there any\r\ncompelling cases for allowing PROXY packets to arrive over Unix\r\nsockets? (By which I mean, the proxy is running on the same machine as\r\nPostgres, and connects to it using the .s.PGSQL socket file instead of\r\nTCP.) Are there cases where you want some other software to interact\r\nwith the TCP stack instead of Postgres, but it'd still be nice to have\r\nthe original connection information available?\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 4 Mar 2021 20:07:45 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
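The `pg_getaddrinfo_all` call quoted above is PostgreSQL's wrapper around the standard `getaddrinfo`; a self-contained sketch of the same pattern (hypothetical helper name, using plain `getaddrinfo`) for parsing a trusted-proxy list entry without any DNS lookup:

```c
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

/*
 * Sketch (not the patch's code): parse one token of a trusted-proxy
 * list as a numeric host, mirroring the AI_NUMERICHOST usage quoted
 * above. Returns 0 on success and fills *family with AF_INET/AF_INET6.
 */
static int
parse_numeric_host(const char *tok, int *family)
{
    struct addrinfo hints;
    struct addrinfo *gai_result = NULL;
    int         ret;

    memset(&hints, 0, sizeof(hints));
    hints.ai_flags = AI_NUMERICHOST;    /* reject anything needing DNS */
    hints.ai_family = AF_UNSPEC;        /* accept both IPv4 and IPv6 */

    ret = getaddrinfo(tok, NULL, &hints, &gai_result);
    if (ret == 0)
    {
        *family = gai_result->ai_family;
        freeaddrinfo(gai_result);
    }
    return ret;
}
```

With `AI_NUMERICHOST`, a hostname such as `example.com` fails immediately rather than triggering a resolver round-trip, which is the desired behavior when validating a configured proxy allowlist.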
{
"msg_contents": "On 3/4/21 2:45 PM, Jacob Champion wrote:\n> On Thu, 2021-03-04 at 10:42 +0900, Tatsuo Ishii wrote:\n>> Is there any formal specification for the \"a protocol common and very\n>> light weight in proxies\"?\n> \n> See\n> \n> https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n> \n> which is maintained by HAProxy Technologies.\n> \n> --Jacob\n> \n\nThis looks like it would only need a few extra protocol messages to be \nunderstood by the backend. It might be possible to implement that with \nthe loadable wire protocol extensions proposed here:\n\nhttps://commitfest.postgresql.org/32/3018/\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Thu, 4 Mar 2021 15:29:28 -0500",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 9:29 PM Jan Wieck <jan@wi3ck.info> wrote:\n>\n> On 3/4/21 2:45 PM, Jacob Champion wrote:\n> > On Thu, 2021-03-04 at 10:42 +0900, Tatsuo Ishii wrote:\n> >> Is there any formal specification for the \"a protocol common and very\n> >> light weight in proxies\"?\n> >\n> > See\n> >\n> > https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n> >\n> > which is maintained by HAProxy Technologies.\n> >\n> > --Jacob\n> >\n>\n> This looks like it would only need a few extra protocol messages to be\n> understood by the backend. It might be possible to implement that with\n> the loadable wire protocol extensions proposed here:\n>\n> https://commitfest.postgresql.org/32/3018/\n\nActually the whole point of it is that it *doesn't* need any new\nprotocol messages. And that it *wraps* whatever is there, definitely\ndoesn't replace it. It should equally be wrapping whatever an\nextension uses.\n\nSo while the base topic is not unrelated, I don't think there is any\noverlap between these.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 4 Mar 2021 21:40:37 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Wed, 2021-03-03 at 10:39 +0100, Magnus Hagander wrote:\n> > On Wed, Mar 3, 2021 at 10:00 AM Magnus Hagander <magnus@hagander.net> wrote:\n> > > Another option would of course be to listen on a separate port for it,\n> > > which seems to be the \"haproxy way\". That would be slightly more code\n> > > (we'd still want to keep the code for validating the list of trusted\n> > > proxies I'd say), but maybe worth doing?\n> >\n> > In order to figure that out, I hacked up a poc on that. Once again\n> > without updates to the docs, but shows approximately how much code\n> > complexity it adds (not much).\n>\n> From a configuration perspective, I like that the separate-port\n> approach can shift the burden of verifying trust to an external\n> firewall, and that it seems to match the behavior of other major server\n> software. But I don't have any insight into the relative security of\n> the two options in practice; hopefully someone else can chime in.\n\nYeah I think that and the argument that the spec explicitly says it\nshould be on its own port is the advantage. The disadvantage is,\nwell, more ports and more configuration. But it does definitely make a\nmore clean separation of concerns.\n\n\n> > memset((char *) &hints, 0, sizeof(hints));\n> > hints.ai_flags = AI_NUMERICHOST;\n> > hints.ai_family = AF_UNSPEC;\n> >\n> > ret = pg_getaddrinfo_all(tok, NULL, &hints, &gai_result);\n>\n> Idle thought I had while setting up a local test rig: Are there any\n> compelling cases for allowing PROXY packets to arrive over Unix\n> sockets? (By which I mean, the proxy is running on the same machine as\n> Postgres, and connects to it using the .s.PGSQL socket file instead of\n> TCP.) 
Are there cases where you want some other software to interact\n> with the TCP stack instead of Postgres, but it'd still be nice to have\n> the original connection information available?\n\nI'm uncertain what that usecase would be for something like haproxy,\ntbh. It can't do connection pooling, so adding it on the same machine\nas postgres itself wouldn't really add anything, I think?\n\nI did think about the other end, if you had a proxy on a different\nmachine accepting unix connections and passing them on over\nPROXY-over-tcp. But I doubt it's useful to know it was unix in that\ncase (since it still couldn't do peer or such for the auth) --\ninstead, that seems like an argument where it'd be better to proxy\nwithout using PROXY and just letting the IP address be.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 4 Mar 2021 21:45:33 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 8:45 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Thu, 2021-03-04 at 10:42 +0900, Tatsuo Ishii wrote:\n> > Is there any formal specification for the \"a protocol common and very\n> > light weight in proxies\"?\n>\n> See\n>\n> https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n\nYeah, it's currently in one of the comments, but should probably be\nadded to the docs side as well.\n\nAnd yes tests :) Probably not a regression test, but some level of tap\ntesting should definitely be added. We'll just have to find a way to\ndo that without making haproxy a dependency to run the tests :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 4 Mar 2021 21:47:10 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On 3/4/21 3:40 PM, Magnus Hagander wrote:\n> On Thu, Mar 4, 2021 at 9:29 PM Jan Wieck <jan@wi3ck.info> wrote:\n>> This looks like it would only need a few extra protocol messages to be\n>> understood by the backend. It might be possible to implement that with\n>> the loadable wire protocol extensions proposed here:\n>>\n>> https://commitfest.postgresql.org/32/3018/\n> \n> Actually the whole point of it is that it *doesn't* need any new\n> protocol messages. And that it *wraps* whatever is there, definitely\n> doesn't replace it. It should equally be wrapping whatever an\n> extension uses.\n> \n> So while the base topic is not unrelated, I don't think there is any\n> overlap between these.\n\nI might be missing something here, but isn't sending some extra, \ninformational *header*, which is understood by the backend, in essence a \nprotocol extension?\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Thu, 4 Mar 2021 16:01:15 -0500",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 10:01 PM Jan Wieck <jan@wi3ck.info> wrote:\n>\n> On 3/4/21 3:40 PM, Magnus Hagander wrote:\n> > On Thu, Mar 4, 2021 at 9:29 PM Jan Wieck <jan@wi3ck.info> wrote:\n> >> This looks like it would only need a few extra protocol messages to be\n> >> understood by the backend. It might be possible to implement that with\n> >> the loadable wire protocol extensions proposed here:\n> >>\n> >> https://commitfest.postgresql.org/32/3018/\n> >\n> > Actually the whole point of it is that it *doesn't* need any new\n> > protocol messages. And that it *wraps* whatever is there, definitely\n> > doesn't replace it. It should equally be wrapping whatever an\n> > extension uses.\n> >\n> > So while the base topic is not unrelated, I don't think there is any\n> > overlap between these.\n>\n> I might be missing something here, but isn't sending some extra,\n> informational *header*, which is understood by the backend, in essence a\n> protocol extension?\n\nBad choice of words, I guess.\n\nThe points being, there is a single packet sent ahead of the normal\nstream. There are no new messages in \"the postgresql protocol\" or \"the\nfebe protocol\" or whatever we call it. And it doesn't change the\nproperties of any part of that protocol. And, importantly for the\nsimplicity, there is no negotiation and there are no packets going the\nother way.\n\nBut sure, you can call it a protocol extension if you want. And yes,\nit could probably be built on top of part of the ideas in that other\npatch, but most of it would be useless (the abstraction of the listen\nfunctionality into listen_have_free_slot/listen_add_socket would be\nthe big thing that could be used)\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 4 Mar 2021 23:38:03 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": ">> On Thu, 2021-03-04 at 10:42 +0900, Tatsuo Ishii wrote:\n>> > Is there any formal specification for the \"a protocol common and very\n>> > light weight in proxies\"?\n>>\n>> See\n>>\n>> https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n>\n> Yeah, it's currently in one of the comments, but should probably be\n> added to the docs side as well.\n\nIt seems the protocol is HAproxy product specific and I think it would\nbe better to be mentioned in the docs.\n\n> And yes tests :) Probably not a regression test, but some level of tap\n> testing should definitely be added. We'll just have to find a way to\n> do that without making haproxy a dependency to run the tests :)\n\nAgreed.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Fri, 05 Mar 2021 08:08:37 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, 2021-03-04 at 21:45 +0100, Magnus Hagander wrote:\r\n> On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\r\n> > Idle thought I had while setting up a local test rig: Are there any\r\n> > compelling cases for allowing PROXY packets to arrive over Unix\r\n> > sockets? (By which I mean, the proxy is running on the same machine as\r\n> > Postgres, and connects to it using the .s.PGSQL socket file instead of\r\n> > TCP.) Are there cases where you want some other software to interact\r\n> > with the TCP stack instead of Postgres, but it'd still be nice to have\r\n> > the original connection information available?\r\n> \r\n> I'm uncertain what that usecase would be for something like haproxy,\r\n> tbh. It can't do connection pooling, so adding it on the same machine\r\n> as postgres itself wouldn't really add anything, I think?\r\n\r\nYeah, I wasn't thinking HAproxy so much as some unspecified software\r\nappliance that's performing Some Task before allowing a TCP client to\r\nspeak to Postgres. But it'd be better to hear from someone that has an\r\nactual use case, instead of me spitballing.\r\n\r\n> Iid think about the other end, if you had a proxy on a different\r\n> machine accepting unix connections and passing them on over\r\n> PROXY-over-tcp. But I doubt it's useful to know it was unix in that\r\n> case (since it still couldn't do peer or such for the auth) --\r\n> instead, that seems like an argument where it'd be better to proxy\r\n> without using PROXY and just letting the IP address be.\r\n\r\nYou could potentially design a system that lets you proxy a \"local all\r\nall trust\" setup from a different (trusted) machine, without having to\r\nactually let people onto the machine that's running Postgres. That\r\nwould require some additional authentication on the PROXY connection\r\n(i.e. 
something stronger than host-based auth) to actually be useful.\r\n\r\n-- other notes --\r\n\r\nA small nitpick on the current separate-port PoC is that I'm forced to\r\nset up a \"regular\" TCP port, even if I only want the PROXY behavior.\r\n\r\nThe original-host logging isn't working for me:\r\n\r\n WARNING: pg_getnameinfo_all() failed: ai_family not supported\r\n LOG: proxy connection from: host=??? port=???\r\n\r\nand I think the culprit is this:\r\n\r\n> /* Store a copy of the original address, for logging */ \r\n> memcpy(&raddr_save, &port->raddr, port->raddr.salen);\r\n\r\nport->raddr.salen is the length of port->raddr.addr; we want the length\r\nof the copy to be sizeof(port->raddr) here, no?\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 4 Mar 2021 23:21:14 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
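The bug Jacob diagnoses above can be shown in isolation: copying only `port->raddr.salen` bytes of the struct leaves the destination's own `salen` field (which sits after the address storage) uninitialized, so a later `pg_getnameinfo_all` sees a garbage length. A minimal sketch with a simplified stand-in for Postgres's `SockAddr` (the real definition lives in `src/include/libpq/pqcomm.h`; layout here is illustrative only):

```c
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Simplified stand-in for Postgres's SockAddr. */
typedef struct
{
    struct sockaddr_storage addr;   /* the address bytes */
    socklen_t   salen;              /* how many of them are valid */
} SockAddr;

/*
 * Correct copy for logging: copy the whole struct so that salen itself
 * is preserved. The PoC's memcpy(&save, &orig, orig->salen) only copies
 * salen bytes of addr and never initializes save.salen at all.
 */
static void
save_addr(SockAddr *save, const SockAddr *orig)
{
    memcpy(save, orig, sizeof(SockAddr));   /* not orig->salen */
}
```

This matches the fix discussed downthread of using `sizeof(SockAddr)` (equivalently `sizeof(port->raddr)`) as the copy length.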
{
"msg_contents": "The current proposal seems to miss the case of transaction pooling\n(and statement pooling) where the same established connection\nmultiplexes transactions / statements from multiple remote clients.\n\nWhat we would need for that case would be a function\n\npg_set_remote_client_address( be_key, remote_ip, remote_hostname)\n\nwhere only be_key and remote_ip are required, but any string (up to a\ncertain length) would be accepted as hostname.\n\nIt would be really nice if we could send this request at protocol level but\nif that is hard to do then having a function would get us half way there.\n\nthe be_key in the function is the key from PGcancel, which is stored\n by libpq when making the connection, and it is there, to make sure\nthat only the directly connecting proxy can successfully call the function.\n\nCheers\nHannu\n\nOn Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Thu, 2021-03-04 at 21:45 +0100, Magnus Hagander wrote:\n> > On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\n> > > Idle thought I had while setting up a local test rig: Are there any\n> > > compelling cases for allowing PROXY packets to arrive over Unix\n> > > sockets? (By which I mean, the proxy is running on the same machine as\n> > > Postgres, and connects to it using the .s.PGSQL socket file instead of\n> > > TCP.) Are there cases where you want some other software to interact\n> > > with the TCP stack instead of Postgres, but it'd still be nice to have\n> > > the original connection information available?\n> >\n> > I'm uncertain what that usecase would be for something like haproxy,\n> > tbh. It can't do connection pooling, so adding it on the same machine\n> > as postgres itself wouldn't really add anything, I think?\n>\n> Yeah, I wasn't thinking HAproxy so much as some unspecified software\n> appliance that's performing Some Task before allowing a TCP client to\n> speak to Postgres. 
But it'd be better to hear from someone that has an\n> actual use case, instead of me spitballing.\n>\n> > Iid think about the other end, if you had a proxy on a different\n> > machine accepting unix connections and passing them on over\n> > PROXY-over-tcp. But I doubt it's useful to know it was unix in that\n> > case (since it still couldn't do peer or such for the auth) --\n> > instead, that seems like an argument where it'd be better to proxy\n> > without using PROXY and just letting the IP address be.\n>\n> You could potentially design a system that lets you proxy a \"local all\n> all trust\" setup from a different (trusted) machine, without having to\n> actually let people onto the machine that's running Postgres. That\n> would require some additional authentication on the PROXY connection\n> (i.e. something stronger than host-based auth) to actually be useful.\n>\n> -- other notes --\n>\n> A small nitpick on the current separate-port PoC is that I'm forced to\n> set up a \"regular\" TCP port, even if I only want the PROXY behavior.\n>\n> The original-host logging isn't working for me:\n>\n> WARNING: pg_getnameinfo_all() failed: ai_family not supported\n> LOG: proxy connection from: host=??? port=???\n>\n> and I think the culprit is this:\n>\n> > /* Store a copy of the original address, for logging */\n> > memcpy(&raddr_save, &port->raddr, port->raddr.salen);\n>\n> port->raddr.salen is the length of port->raddr.addr; we want the length\n> of the copy to be sizeof(port->raddr) here, no?\n>\n> --Jacob\n\n\n",
"msg_date": "Fri, 5 Mar 2021 00:57:27 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "\n\nOn 5/3/21 0:21, Jacob Champion wrote:\n> On Thu, 2021-03-04 at 21:45 +0100, Magnus Hagander wrote:\n>> On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\n>>> Idle thought I had while setting up a local test rig: Are there any\n>>> compelling cases for allowing PROXY packets to arrive over Unix\n>>> sockets? (By which I mean, the proxy is running on the same machine as\n>>> Postgres, and connects to it using the .s.PGSQL socket file instead of\n>>> TCP.) Are there cases where you want some other software to interact\n>>> with the TCP stack instead of Postgres, but it'd still be nice to have\n>>> the original connection information available?\n>> I'm uncertain what that usecase would be for something like haproxy,\n>> tbh. It can't do connection pooling, so adding it on the same machine\n>> as postgres itself wouldn't really add anything, I think?\n> Yeah, I wasn't thinking HAproxy so much as some unspecified software\n> appliance that's performing Some Task before allowing a TCP client to\n> speak to Postgres. But it'd be better to hear from someone that has an\n> actual use case, instead of me spitballing.\n\n Here's a use case: Envoy's Postgres filter (see [1], [2]). Right now it\nis able to capture protocol-level metrics and send them to a metrics\ncollector (eg. Prometheus) while proxying the traffic. More capabilities\nare being added as of today, and will eventually manage HBA too. It\nwould greatly benefit from this proposal, since it proxies the traffic\nwith, obviously, its IP, not the client's. It may be used (we do)\nlocally fronting Postgres, via UDS (so it can be easily trusted).\n\n\n Álvaro\n\n\n[1]\nhttps://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/postgres_proxy_filter\n[2]\nhttps://www.cncf.io/blog/2020/08/13/envoy-1-15-introduces-a-new-postgres-extension-with-monitoring-support/\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 01:33:21 +0100",
"msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 12:57 AM Hannu Krosing <hannuk@google.com> wrote:\n>\n> The current proposal seems to miss the case of transaction pooling\n> (and statement pooling) where the same established connection\n> multiplexes transactions / statements from multiple remote clients.\n\nNot at all.\n\nThe current proposal is there to implement the PROXY protocol. It\ndoesn't try to do anything with connection pooling at all.\n\nSolving a similar problem for connection poolers would also definitely\nbe a useful thing, but it is entirely out of scope of this patch, and\nis a completely separate implementation.\n\nI'd definitely like to see that one solved as well, but let's look at\nit on a different thread so we don't derail this one.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 5 Mar 2021 09:59:46 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 1:33 AM Álvaro Hernández <aht@ongres.com> wrote:\n>\n>\n>\n> On 5/3/21 0:21, Jacob Champion wrote:\n> > On Thu, 2021-03-04 at 21:45 +0100, Magnus Hagander wrote:\n> >> On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\n> >>> Idle thought I had while setting up a local test rig: Are there any\n> >>> compelling cases for allowing PROXY packets to arrive over Unix\n> >>> sockets? (By which I mean, the proxy is running on the same machine as\n> >>> Postgres, and connects to it using the .s.PGSQL socket file instead of\n> >>> TCP.) Are there cases where you want some other software to interact\n> >>> with the TCP stack instead of Postgres, but it'd still be nice to have\n> >>> the original connection information available?\n> >> I'm uncertain what that usecase would be for something like haproxy,\n> >> tbh. It can't do connection pooling, so adding it on the same machine\n> >> as postgres itself wouldn't really add anything, I think?\n> > Yeah, I wasn't thinking HAproxy so much as some unspecified software\n> > appliance that's performing Some Task before allowing a TCP client to\n> > speak to Postgres. But it'd be better to hear from someone that has an\n> > actual use case, instead of me spitballing.\n>\n> Here's a use case: Envoy's Postgres filter (see [1], [2]). Right now\n> is able to capture protocol-level metrics and send them to a metrics\n> collector (eg. Prometheus) while proxying the traffic. More capabilities\n> are being added as of today, and will eventually manage HBA too. It\n> would greatly benefit from this proposal, since it proxies the traffic\n> with, obviously, its IP, not the client's. 
It may be used (we do)\n> locally fronting Postgres, via UDS (so it can be easily trusted).\n\nYeah, Envoy is definitely a great example of a usecase for the proxy\nprotocol in general.\n\nSpecifically about the Unix socket though -- doesn't envoy normally\nrun on a different instance (or in a different container at least),\nthus normally uses tcp between envoy and postgres? Or would it be a\nreasonable usecase that you ran it locally on the postgres server,\nhaving it speak IP to the clients but unix sockets to the postgres\nbackend? I guess maybe it is outside of the containerized world?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 5 Mar 2021 10:03:46 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 12:08 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> >> On Thu, 2021-03-04 at 10:42 +0900, Tatsuo Ishii wrote:\n> >> > Is there any formal specification for the \"a protocol common and very\n> >> > light weight in proxies\"?\n> >>\n> >> See\n> >>\n> >> https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt\n> >\n> > Yeah, it's currently in one of the comments, but should probably be\n> > added to the docs side as well.\n>\n> It seems the protocol is HAproxy product specific and I think it would\n> be better to be mentioned in the docs.\n\nIt's definitely not HAProxy specific, it's more or less an industry\nstandard. It's just maintained by them. That said, yes, it should be\nreferenced in the docs.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 5 Mar 2021 10:14:49 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Thu, 2021-03-04 at 21:45 +0100, Magnus Hagander wrote:\n> > On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\n> > > Idle thought I had while setting up a local test rig: Are there any\n> > > compelling cases for allowing PROXY packets to arrive over Unix\n> > > sockets? (By which I mean, the proxy is running on the same machine as\n> > > Postgres, and connects to it using the .s.PGSQL socket file instead of\n> > > TCP.) Are there cases where you want some other software to interact\n> > > with the TCP stack instead of Postgres, but it'd still be nice to have\n> > > the original connection information available?\n> >\n> > I'm uncertain what that usecase would be for something like haproxy,\n> > tbh. It can't do connection pooling, so adding it on the same machine\n> > as postgres itself wouldn't really add anything, I think?\n>\n> Yeah, I wasn't thinking HAproxy so much as some unspecified software\n> appliance that's performing Some Task before allowing a TCP client to\n> speak to Postgres. But it'd be better to hear from someone that has an\n> actual use case, instead of me spitballing.\n>\n> > Iid think about the other end, if you had a proxy on a different\n> > machine accepting unix connections and passing them on over\n> > PROXY-over-tcp. But I doubt it's useful to know it was unix in that\n> > case (since it still couldn't do peer or such for the auth) --\n> > instead, that seems like an argument where it'd be better to proxy\n> > without using PROXY and just letting the IP address be.\n>\n> You could potentially design a system that lets you proxy a \"local all\n> all trust\" setup from a different (trusted) machine, without having to\n> actually let people onto the machine that's running Postgres. That\n> would require some additional authentication on the PROXY connection\n> (i.e. 
something stronger than host-based auth) to actually be useful.\n>\n> -- other notes --\n>\n> A small nitpick on the current separate-port PoC is that I'm forced to\n> set up a \"regular\" TCP port, even if I only want the PROXY behavior.\n\nYeah. I'm not sure there's a good way to avoid that without making\nconfigurations a lot more complex.\n\n\n> The original-host logging isn't working for me:\n>\n> WARNING: pg_getnameinfo_all() failed: ai_family not supported\n> LOG: proxy connection from: host=??? port=???\n>\n> and I think the culprit is this:\n>\n> > /* Store a copy of the original address, for logging */\n> > memcpy(&raddr_save, &port->raddr, port->raddr.salen);\n>\n> port->raddr.salen is the length of port->raddr.addr; we want the length\n> of the copy to be sizeof(port->raddr) here, no?\n\nThat's interesting -- it works perfectly fine here. What platform are\nyou testing on?\n\nBut yes, you are correct, it should do that. I guess it's a case of\nthe salen actually ending up being uninitialized in the copy, and thus\nfailing at a later stage. (I sent for sizeof(SockAddr) to make it\neasier to read without having to look things up, but the net result is\nthe same)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 5 Mar 2021 10:22:35 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "\n\nOn 5/3/21 10:03, Magnus Hagander wrote:\n> On Fri, Mar 5, 2021 at 1:33 AM Álvaro Hernández <aht@ongres.com> wrote:\n>>\n>>\n>> On 5/3/21 0:21, Jacob Champion wrote:\n>>> On Thu, 2021-03-04 at 21:45 +0100, Magnus Hagander wrote:\n>>>> On Thu, Mar 4, 2021 at 9:07 PM Jacob Champion <pchampion@vmware.com> wrote:\n>>>>> Idle thought I had while setting up a local test rig: Are there any\n>>>>> compelling cases for allowing PROXY packets to arrive over Unix\n>>>>> sockets? (By which I mean, the proxy is running on the same machine as\n>>>>> Postgres, and connects to it using the .s.PGSQL socket file instead of\n>>>>> TCP.) Are there cases where you want some other software to interact\n>>>>> with the TCP stack instead of Postgres, but it'd still be nice to have\n>>>>> the original connection information available?\n>>>> I'm uncertain what that usecase would be for something like haproxy,\n>>>> tbh. It can't do connection pooling, so adding it on the same machine\n>>>> as postgres itself wouldn't really add anything, I think?\n>>> Yeah, I wasn't thinking HAproxy so much as some unspecified software\n>>> appliance that's performing Some Task before allowing a TCP client to\n>>> speak to Postgres. But it'd be better to hear from someone that has an\n>>> actual use case, instead of me spitballing.\n>> Here's a use case: Envoy's Postgres filter (see [1], [2]). Right now\n>> is able to capture protocol-level metrics and send them to a metrics\n>> collector (eg. Prometheus) while proxying the traffic. More capabilities\n>> are being added as of today, and will eventually manage HBA too. It\n>> would greatly benefit from this proposal, since it proxies the traffic\n>> with, obviously, its IP, not the client's. 
It may be used (we do)\n>> locally fronting Postgres, via UDS (so it can be easily trusted).\n> Yeah, Envoy is definitely a great example of a usecase for the proxy\n> protocol in general.\n\n Actually Envoy already implements the Proxy protocol:\nhttps://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/listener_filters/proxy_protocol.html\nBut I believe it would need some further cooperation with the Postgres\nfilter, unless they can be chained directly. Still, Postgres needs to\nunderstand it, which is what your patch would add (thanks!).\n\n>\n> Specifically about the Unix socket though -- doesn't envoy normally\n> run on a different instance (or in a different container at least),\n> thus normally uses tcp between envoy and postgres? Or would it be a\n> reasonable usecase that you ran it locally on the postgres server,\n> having it speak IP to the clients but unix sockets to the postgres\n> backend? I guess maybe it is outside of the containerized world?\n>\n\n This is exactly the architecture we use at StackGres [1][2]. We use\nEnvoy as a sidecar (so it runs on the same pod, server as Postgres) and\nconnects via UDS. But then exposes the connection to the outside clients\nvia TCP/IP. So in my opinion it is quite applicable to the container\nworld :)\n\n\n Álvaro\n\n\n[1] https://stackgres.io\n[2]\nhttps://stackgres.io/doc/latest/intro/architecture/#stackgres-pod-architecture-diagram\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres\n\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 14:49:36 +0100",
"msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, 2021-03-05 at 10:22 +0100, Magnus Hagander wrote:\r\n> On Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\r\n> > A small nitpick on the current separate-port PoC is that I'm forced to\r\n> > set up a \"regular\" TCP port, even if I only want the PROXY behavior.\r\n> \r\n> Yeah. I'm not sure there's a good way to avoid that without making\r\n> configuations a lot more complex.\r\n\r\nA generic solution would also solve the \"I want to listen on more than\r\none port\" problem, but that's probably not something to tackle at the\r\nsame time.\r\n\r\n> > The original-host logging isn't working for me:\r\n> > \r\n> > [...]\r\n> \r\n> That's interesting -- it works perfectly fine here. What platform are\r\n> you testing on?\r\n\r\nUbuntu 20.04.\r\n\r\n> But yes, you are correct, it should do that. I guess it's a case of\r\n> the salen actually ending up being uninitialized in the copy, and thus\r\n> failing at a later stage.\r\n\r\nThat seems right; EAI_FAMILY can be returned for a mismatched addrlen.\r\n\r\n> (I sent for sizeof(SockAddr) to make it\r\n> easier to read without having to look things up, but the net result is\r\n> the same)\r\n\r\nCool. Did you mean to attach a patch?\r\n\r\n== More Notes ==\r\n\r\n(Stop me if I'm digging too far into a proof of concept patch.)\r\n\r\n> +\tproxyaddrlen = pg_ntoh16(proxyheader.len);\r\n> +\r\n> +\tif (proxyaddrlen > sizeof(proxyaddr))\r\n> +\t{\r\n> +\t\tereport(COMMERROR,\r\n> +\t\t\t\t(errcode(ERRCODE_PROTOCOL_VIOLATION),\r\n> +\t\t\t\t errmsg(\"oversized proxy packet\")));\r\n> +\t\treturn STATUS_ERROR;\r\n> +\t}\r\n\r\nI think this is not quite right -- if there's additional data beyond\r\nthe IPv6 header size, that just means there are TLVs tacked onto the\r\nheader that we should ignore. (Or, eventually, use.)\r\n\r\nAdditionally, we need to check for underflow as well. 
A misbehaving\r\nproxy might not send enough data to fill up the address block for the\r\naddress family in use.\r\n\r\n> +\t/* If there is any more header data present, skip past it */\r\n> +\tif (proxyaddrlen > sizeof(proxyaddr))\r\n> +\t\tpq_discardbytes(proxyaddrlen - sizeof(proxyaddr));\r\n\r\nThis looks like dead code, given that we'll error out for the same\r\ncheck above -- but once it's no longer dead code, the return value of\r\npq_discardbytes should be checked for EOF.\r\n\r\n> +\telse if (proxyheader.fam == 0x11)\r\n> +\t{\r\n> +\t\t/* TCPv4 */\r\n> +\t\tport->raddr.addr.ss_family = AF_INET;\r\n> +\t\tport->raddr.salen = sizeof(struct sockaddr_in);\r\n> +\t\t((struct sockaddr_in *) &port->raddr.addr)->sin_addr.s_addr = proxyaddr.ip4.src_addr;\r\n> +\t\t((struct sockaddr_in *) &port->raddr.addr)->sin_port = proxyaddr.ip4.src_port;\r\n> +\t}\r\n\r\nI'm trying to reason through the fallout of setting raddr and not\r\nladdr. I understand why we're not setting laddr -- several places in\r\nthe code rely on the laddr to actually refer to a machine-local address\r\n-- but the fact that there is no actual connection from raddr to laddr\r\ncould cause shenanigans. For example, the ident auth protocol will just\r\nbreak (and it might be nice to explicitly disable it for PROXY\r\nconnections). Are there any other situations where a \"faked\" raddr\r\ncould throw off Postgres internals?\r\n\r\n--Jacob\r\n",
"msg_date": "Fri, 5 Mar 2021 19:11:12 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 8:11 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Fri, 2021-03-05 at 10:22 +0100, Magnus Hagander wrote:\n> > On Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n> > > The original-host logging isn't working for me:\n> > >\n> > > [...]\n> >\n> > That's interesting -- it works perfectly fine here. What platform are\n> > you testing on?\n>\n> Ubuntu 20.04.\n\nCurious. It doesn't show up on my debian.\n\nBut either way -- it was clearly wrong :)\n\n\n> > (I sent for sizeof(SockAddr) to make it\n> > easier to read without having to look things up, but the net result is\n> > the same)\n>\n> Cool. Did you mean to attach a patch?\n\nI didn't, I had some other hacks that were broken :) I've attached one\nnow which includes those changes.\n\n\n> == More Notes ==\n>\n> (Stop me if I'm digging too far into a proof of concept patch.)\n\nDefinitely not -- much appreciated, and just what was needed to take\nit from poc to a proper one!\n\n\n> > + proxyaddrlen = pg_ntoh16(proxyheader.len);\n> > +\n> > + if (proxyaddrlen > sizeof(proxyaddr))\n> > + {\n> > + ereport(COMMERROR,\n> > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > + errmsg(\"oversized proxy packet\")));\n> > + return STATUS_ERROR;\n> > + }\n>\n> I think this is not quite right -- if there's additional data beyond\n> the IPv6 header size, that just means there are TLVs tacked onto the\n> header that we should ignore. (Or, eventually, use.)\n\nYeah, you're right. Fallout of too much moving around. I think in the\nend that code should just be removed, in favor of the discard path as\nyou mentioned below.\n\n\n> Additionally, we need to check for underflow as well. A misbehaving\n> proxy might not send enough data to fill up the address block for the\n> address family in use.\n\nI used to have that check. I seem to have lost it in restructuring. 
Added back!\n\n\n> > + /* If there is any more header data present, skip past it */\n> > + if (proxyaddrlen > sizeof(proxyaddr))\n> > + pq_discardbytes(proxyaddrlen - sizeof(proxyaddr));\n>\n> This looks like dead code, given that we'll error out for the same\n> check above -- but once it's no longer dead code, the return value of\n> pq_discardbytes should be checked for EOF.\n\nYup.\n\n\n> > + else if (proxyheader.fam == 0x11)\n> > + {\n> > + /* TCPv4 */\n> > + port->raddr.addr.ss_family = AF_INET;\n> > + port->raddr.salen = sizeof(struct sockaddr_in);\n> > + ((struct sockaddr_in *) &port->raddr.addr)->sin_addr.s_addr = proxyaddr.ip4.src_addr;\n> > + ((struct sockaddr_in *) &port->raddr.addr)->sin_port = proxyaddr.ip4.src_port;\n> > + }\n>\n> I'm trying to reason through the fallout of setting raddr and not\n> laddr. I understand why we're not setting laddr -- several places in\n> the code rely on the laddr to actually refer to a machine-local address\n> -- but the fact that there is no actual connection from raddr to laddr\n> could cause shenanigans. For example, the ident auth protocol will just\n> break (and it might be nice to explicitly disable it for PROXY\n> connections). Are there any other situations where a \"faked\" raddr\n> could throw off Postgres internals?\n\nThat's a good point to discuss. I thought about it initially and\nfigured it'd be even worse to actually copy over laddr since that\nwould then suddenly have the IP address belonging to a different\nmachine.. And then I forgot to enumerate the other cases.\n\nFor ident, disabling the method seems reasonable.\n\nAnother thing that shows up with added support for running the proxy\nprotocol over Unix sockets, is that PostgreSQL refuses to do SSL over\nUnix sockets. So that check has to be updated to allow it over proxy\nconnections. Same for GSSAPI.\n\nAn interesting thing is what to do about\ninet_server_addr/inet_server_port. 
That sort of loops back up to the\noriginal question of where/how to expose the information about the\nproxy in general (since right now it just logs). Right now you can\nactually use inet_server_port() to see if the connection was proxied\n(as long as it was over tcp).\n\nAttached is an updated patch, which covers your comments, as well as adds\nunix socket support (per your question and Alvaro's confirmed use case).\nIt allows proxy connections over unix sockets, but I saw no need to\nget into unix sockets over the proxy protocol (dealing with paths\nbetween machines etc).\n\nI changed the additional ListenSocket array to instead declare\nListenSocket as an array of structs holding two fields. Seems cleaner,\nand especially should there be further extensions needed in the\nfuture.\n\nI've also added some trivial tests (man that took an ungodly amount of\nfighting perl -- it's clearly been a long time since I used perl\nproperly). They probably need some more love but it's a start.\n\nAnd of course rebased.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 6 Mar 2021 16:17:18 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Sat, Mar 6, 2021 at 4:17 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Fri, Mar 5, 2021 at 8:11 PM Jacob Champion <pchampion@vmware.com> wrote:\n> >\n> > On Fri, 2021-03-05 at 10:22 +0100, Magnus Hagander wrote:\n> > > On Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n> > > > The original-host logging isn't working for me:\n> > > >\n> > > > [...]\n> > >\n> > > That's interesting -- it works perfectly fine here. What platform are\n> > > you testing on?\n> >\n> > Ubuntu 20.04.\n>\n> Curious. It doesn't show up on my debian.\n>\n> But either way -- it was clearly wrong :)\n>\n>\n> > > (I sent for sizeof(SockAddr) to make it\n> > > easier to read without having to look things up, but the net result is\n> > > the same)\n> >\n> > Cool. Did you mean to attach a patch?\n>\n> I didn't, I had some other hacks that were broken :) I've attached one\n> now which includes those changes.\n>\n>\n> > == More Notes ==\n> >\n> > (Stop me if I'm digging too far into a proof of concept patch.)\n>\n> Definitely not -- much appreciated, and just what was needed to take\n> it from poc to a proper one!\n>\n>\n> > > + proxyaddrlen = pg_ntoh16(proxyheader.len);\n> > > +\n> > > + if (proxyaddrlen > sizeof(proxyaddr))\n> > > + {\n> > > + ereport(COMMERROR,\n> > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > > + errmsg(\"oversized proxy packet\")));\n> > > + return STATUS_ERROR;\n> > > + }\n> >\n> > I think this is not quite right -- if there's additional data beyond\n> > the IPv6 header size, that just means there are TLVs tacked onto the\n> > header that we should ignore. (Or, eventually, use.)\n>\n> Yeah, you're right. Fallout of too much moving around. I think inthe\n> end that code should just be removed, in favor of the discard path as\n> you mentinoed below.\n>\n>\n> > Additionally, we need to check for underflow as well. 
A misbehaving\n> > proxy might not send enough data to fill up the address block for the\n> > address family in use.\n>\n> I used to have that check. I seem to have lost it in restructuring. Added back!\n>\n>\n> > > + /* If there is any more header data present, skip past it */\n> > > + if (proxyaddrlen > sizeof(proxyaddr))\n> > > + pq_discardbytes(proxyaddrlen - sizeof(proxyaddr));\n> >\n> > This looks like dead code, given that we'll error out for the same\n> > check above -- but once it's no longer dead code, the return value of\n> > pq_discardbytes should be checked for EOF.\n>\n> Yup.\n>\n>\n> > > + else if (proxyheader.fam == 0x11)\n> > > + {\n> > > + /* TCPv4 */\n> > > + port->raddr.addr.ss_family = AF_INET;\n> > > + port->raddr.salen = sizeof(struct sockaddr_in);\n> > > + ((struct sockaddr_in *) &port->raddr.addr)->sin_addr.s_addr = proxyaddr.ip4.src_addr;\n> > > + ((struct sockaddr_in *) &port->raddr.addr)->sin_port = proxyaddr.ip4.src_port;\n> > > + }\n> >\n> > I'm trying to reason through the fallout of setting raddr and not\n> > laddr. I understand why we're not setting laddr -- several places in\n> > the code rely on the laddr to actually refer to a machine-local address\n> > -- but the fact that there is no actual connection from raddr to laddr\n> > could cause shenanigans. For example, the ident auth protocol will just\n> > break (and it might be nice to explicitly disable it for PROXY\n> > connections). Are there any other situations where a \"faked\" raddr\n> > could throw off Postgres internals?\n>\n> That's a good point to discuss. I thought about it initially and\n> figured it'd be even worse to actually copy over laddr since that\n> woudl then suddenly have the IP address belonging to a different\n> machine.. 
And then I forgot to enumerate the other cases.\n>\n> For ident, disabling the method seems reasonable.\n>\n> Another thing that shows up with added support for running the proxy\n> protocol over Unix sockets, is that PostgreSQL refuses to do SSL over\n> Unix sockets. So that check has to be updated to allow it over proxy\n> connections. Same for GSSAPI.\n>\n> An interesting thing is what to do about\n> inet_server_addr/inet_server_port. That sort of loops back up to the\n> original question of where/how to expose the information about the\n> proxy in general (since right now it just logs). Right now you can\n> actually use inet_server_port() to see if the connection was proxied\n> (as long as it was over tcp).\n>\n> Attached is an updated, which covers your comments, as well as adds\n> unix socket support (per your question and Alvaros confirmed usecase).\n> It allows proxy connections over unix sockets, but I saw no need to\n> get into unix sockets over the proxy protocol (dealing with paths\n> between machines etc).\n>\n> I changed the additional ListenSocket array to instead declare\n> ListenSocket as an array of structs holding two fields. Seems cleaner,\n> and especially should there be further extensions needed in the\n> future.\n>\n> I've also added some trivial tests (man that took an ungodly amount of\n> fighting perl -- it's clearly been a long time since I used perl\n> properly). They probably need some more love but it's a start.\n>\n> And of course rebased.\n\nPfft, I was hoping for cfbot to pick it up and test it on a different\nplatform. Of course, for it to do that, I need to include the test\ndirectory in the Makefile. Here's a new one which adds that, no other\nchanges.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Sat, 6 Mar 2021 17:30:17 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Sat, Mar 6, 2021 at 5:30 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sat, Mar 6, 2021 at 4:17 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Fri, Mar 5, 2021 at 8:11 PM Jacob Champion <pchampion@vmware.com> wrote:\n> > >\n> > > On Fri, 2021-03-05 at 10:22 +0100, Magnus Hagander wrote:\n> > > > On Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n> > > > > The original-host logging isn't working for me:\n> > > > >\n> > > > > [...]\n> > > >\n> > > > That's interesting -- it works perfectly fine here. What platform are\n> > > > you testing on?\n> > >\n> > > Ubuntu 20.04.\n> >\n> > Curious. It doesn't show up on my debian.\n> >\n> > But either way -- it was clearly wrong :)\n> >\n> >\n> > > > (I sent for sizeof(SockAddr) to make it\n> > > > easier to read without having to look things up, but the net result is\n> > > > the same)\n> > >\n> > > Cool. Did you mean to attach a patch?\n> >\n> > I didn't, I had some other hacks that were broken :) I've attached one\n> > now which includes those changes.\n> >\n> >\n> > > == More Notes ==\n> > >\n> > > (Stop me if I'm digging too far into a proof of concept patch.)\n> >\n> > Definitely not -- much appreciated, and just what was needed to take\n> > it from poc to a proper one!\n> >\n> >\n> > > > + proxyaddrlen = pg_ntoh16(proxyheader.len);\n> > > > +\n> > > > + if (proxyaddrlen > sizeof(proxyaddr))\n> > > > + {\n> > > > + ereport(COMMERROR,\n> > > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > > > + errmsg(\"oversized proxy packet\")));\n> > > > + return STATUS_ERROR;\n> > > > + }\n> > >\n> > > I think this is not quite right -- if there's additional data beyond\n> > > the IPv6 header size, that just means there are TLVs tacked onto the\n> > > header that we should ignore. (Or, eventually, use.)\n> >\n> > Yeah, you're right. Fallout of too much moving around. 
I think inthe\n> > end that code should just be removed, in favor of the discard path as\n> > you mentinoed below.\n> >\n> >\n> > > Additionally, we need to check for underflow as well. A misbehaving\n> > > proxy might not send enough data to fill up the address block for the\n> > > address family in use.\n> >\n> > I used to have that check. I seem to have lost it in restructuring. Added back!\n> >\n> >\n> > > > + /* If there is any more header data present, skip past it */\n> > > > + if (proxyaddrlen > sizeof(proxyaddr))\n> > > > + pq_discardbytes(proxyaddrlen - sizeof(proxyaddr));\n> > >\n> > > This looks like dead code, given that we'll error out for the same\n> > > check above -- but once it's no longer dead code, the return value of\n> > > pq_discardbytes should be checked for EOF.\n> >\n> > Yup.\n> >\n> >\n> > > > + else if (proxyheader.fam == 0x11)\n> > > > + {\n> > > > + /* TCPv4 */\n> > > > + port->raddr.addr.ss_family = AF_INET;\n> > > > + port->raddr.salen = sizeof(struct sockaddr_in);\n> > > > + ((struct sockaddr_in *) &port->raddr.addr)->sin_addr.s_addr = proxyaddr.ip4.src_addr;\n> > > > + ((struct sockaddr_in *) &port->raddr.addr)->sin_port = proxyaddr.ip4.src_port;\n> > > > + }\n> > >\n> > > I'm trying to reason through the fallout of setting raddr and not\n> > > laddr. I understand why we're not setting laddr -- several places in\n> > > the code rely on the laddr to actually refer to a machine-local address\n> > > -- but the fact that there is no actual connection from raddr to laddr\n> > > could cause shenanigans. For example, the ident auth protocol will just\n> > > break (and it might be nice to explicitly disable it for PROXY\n> > > connections). Are there any other situations where a \"faked\" raddr\n> > > could throw off Postgres internals?\n> >\n> > That's a good point to discuss. 
I thought about it initially and\n> > figured it'd be even worse to actually copy over laddr since that\n> > woudl then suddenly have the IP address belonging to a different\n> > machine.. And then I forgot to enumerate the other cases.\n> >\n> > For ident, disabling the method seems reasonable.\n> >\n> > Another thing that shows up with added support for running the proxy\n> > protocol over Unix sockets, is that PostgreSQL refuses to do SSL over\n> > Unix sockets. So that check has to be updated to allow it over proxy\n> > connections. Same for GSSAPI.\n> >\n> > An interesting thing is what to do about\n> > inet_server_addr/inet_server_port. That sort of loops back up to the\n> > original question of where/how to expose the information about the\n> > proxy in general (since right now it just logs). Right now you can\n> > actually use inet_server_port() to see if the connection was proxied\n> > (as long as it was over tcp).\n> >\n> > Attached is an updated, which covers your comments, as well as adds\n> > unix socket support (per your question and Alvaros confirmed usecase).\n> > It allows proxy connections over unix sockets, but I saw no need to\n> > get into unix sockets over the proxy protocol (dealing with paths\n> > between machines etc).\n> >\n> > I changed the additional ListenSocket array to instead declare\n> > ListenSocket as an array of structs holding two fields. Seems cleaner,\n> > and especially should there be further extensions needed in the\n> > future.\n> >\n> > I've also added some trivial tests (man that took an ungodly amount of\n> > fighting perl -- it's clearly been a long time since I used perl\n> > properly). They probably need some more love but it's a start.\n> >\n> > And of course rebased.\n>\n> Pfft, I was hoping for cfbot to pick it up and test it on a different\n> platform. Of course, for it to do that, I need to include the test\n> directory in the Makefile. 
Here's a new one which adds that, no other\n> changes.\n\nSo cfbot didn't like that one one bit. Turns out that it's not a great\nidea to hardcode the username \"mha\" in the tests :)\n\nAnd also changed to only use unix sockets for the tests on linux, and\ntcp only on windows. Because that's how our tests are supposed to be.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 9 Mar 2021 11:25:28 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Tue, 2021-03-09 at 11:25 +0100, Magnus Hagander wrote:\r\n> I've also added some trivial tests (man that took an ungodly amount of\r\n> fighting perl -- it's clearly been a long time since I used perl\r\n> properly).\r\n\r\nYeah. The tests I'm writing for this and NSS have been the same way;\r\nit's a real problem. I'm basically writing supplemental tests in Python\r\nas the \"daily driver\", then trying to port whatever is easiest (not\r\nmuch) into Perl, when I get time.\r\n\r\n== More Notes ==\r\n\r\nSome additional spec-compliance stuff:\r\n\r\n> \t/* Lower 4 bits hold type of connection */\r\n> \tif (proxyheader.fam == 0)\r\n> \t{\r\n> \t\t/* LOCAL connection, so we ignore the address included */\r\n> \t}\r\n\r\n(fam == 0) is the UNSPEC case, which isn't necessarily LOCAL. We have\r\nto do something different for the LOCAL case:\r\n\r\n> - \\x0 : LOCAL : [...] The receiver must accept this connection as\r\n> valid and must use the real connection endpoints and discard the\r\n> protocol block including the family which is ignored.\r\n\r\nSo we should ignore the entire \"protocol block\" (by which I believe\r\nthey mean the protocol-and-address-family byte) in the case of LOCAL,\r\nand just accept it with the original address info intact. That seems to\r\nmatch the sample code in the back of the spec. 
The current behavior in\r\nthe patch will apply the PROXY behavior incorrectly if the sender sends\r\na LOCAL header with something other than UNSPEC -- which is strange\r\nbehavior but not explicitly prohibited as far as I can see.\r\n\r\nWe also need to reject all connections that aren't either LOCAL or\r\nPROXY commands:\r\n\r\n> - other values are unassigned and must not be emitted by senders.\r\n> Receivers must drop connections presenting unexpected values here.\r\n\r\n...and naturally it'd be Nice (tm) if the tests covered those corner\r\ncases.\r\n\r\nOver on the struct side:\r\n\r\n> +\t\tstruct\r\n> +\t\t{\t\t\t\t\t\t/* for TCP/UDP over IPv4, len = 12 */\r\n> +\t\t\tuint32\t\tsrc_addr;\r\n> +\t\t\tuint32\t\tdst_addr;\r\n> +\t\t\tuint16\t\tsrc_port;\r\n> +\t\t\tuint16\t\tdst_port;\r\n> +\t\t}\t\t\tip4;\r\n> ... snip ...\r\n> +\t\t/* TCPv4 */\r\n> +\t\tif (proxyaddrlen < 12)\r\n> +\t\t{\r\n\r\nGiven the importance of these hardcoded lengths matching reality, is it\r\npossible to add some static assertions to make sure that sizeof(<ipv4\r\nblock>) == 12 and so on? That would also save any poor souls who are\r\nusing compilers with nonstandard struct-packing behavior.\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 10 Mar 2021 23:05:00 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 11:25 AM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Sat, Mar 6, 2021 at 5:30 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Sat, Mar 6, 2021 at 4:17 PM Magnus Hagander <magnus@hagander.net> wrote:\n> > >\n> > > On Fri, Mar 5, 2021 at 8:11 PM Jacob Champion <pchampion@vmware.com> wrote:\n> > > >\n> > > > On Fri, 2021-03-05 at 10:22 +0100, Magnus Hagander wrote:\n> > > > > On Fri, Mar 5, 2021 at 12:21 AM Jacob Champion <pchampion@vmware.com> wrote:\n> > > > > > The original-host logging isn't working for me:\n> > > > > >\n> > > > > > [...]\n> > > > >\n> > > > > That's interesting -- it works perfectly fine here. What platform are\n> > > > > you testing on?\n> > > >\n> > > > Ubuntu 20.04.\n> > >\n> > > Curious. It doesn't show up on my debian.\n> > >\n> > > But either way -- it was clearly wrong :)\n> > >\n> > >\n> > > > > (I sent for sizeof(SockAddr) to make it\n> > > > > easier to read without having to look things up, but the net result is\n> > > > > the same)\n> > > >\n> > > > Cool. Did you mean to attach a patch?\n> > >\n> > > I didn't, I had some other hacks that were broken :) I've attached one\n> > > now which includes those changes.\n> > >\n> > >\n> > > > == More Notes ==\n> > > >\n> > > > (Stop me if I'm digging too far into a proof of concept patch.)\n> > >\n> > > Definitely not -- much appreciated, and just what was needed to take\n> > > it from poc to a proper one!\n> > >\n> > >\n> > > > > + proxyaddrlen = pg_ntoh16(proxyheader.len);\n> > > > > +\n> > > > > + if (proxyaddrlen > sizeof(proxyaddr))\n> > > > > + {\n> > > > > + ereport(COMMERROR,\n> > > > > + (errcode(ERRCODE_PROTOCOL_VIOLATION),\n> > > > > + errmsg(\"oversized proxy packet\")));\n> > > > > + return STATUS_ERROR;\n> > > > > + }\n> > > >\n> > > > I think this is not quite right -- if there's additional data beyond\n> > > > the IPv6 header size, that just means there are TLVs tacked onto the\n> > > > header that we should ignore. 
(Or, eventually, use.)\n> > >\n> > > Yeah, you're right. Fallout of too much moving around. I think inthe\n> > > end that code should just be removed, in favor of the discard path as\n> > > you mentinoed below.\n> > >\n> > >\n> > > > Additionally, we need to check for underflow as well. A misbehaving\n> > > > proxy might not send enough data to fill up the address block for the\n> > > > address family in use.\n> > >\n> > > I used to have that check. I seem to have lost it in restructuring. Added back!\n> > >\n> > >\n> > > > > + /* If there is any more header data present, skip past it */\n> > > > > + if (proxyaddrlen > sizeof(proxyaddr))\n> > > > > + pq_discardbytes(proxyaddrlen - sizeof(proxyaddr));\n> > > >\n> > > > This looks like dead code, given that we'll error out for the same\n> > > > check above -- but once it's no longer dead code, the return value of\n> > > > pq_discardbytes should be checked for EOF.\n> > >\n> > > Yup.\n> > >\n> > >\n> > > > > + else if (proxyheader.fam == 0x11)\n> > > > > + {\n> > > > > + /* TCPv4 */\n> > > > > + port->raddr.addr.ss_family = AF_INET;\n> > > > > + port->raddr.salen = sizeof(struct sockaddr_in);\n> > > > > + ((struct sockaddr_in *) &port->raddr.addr)->sin_addr.s_addr = proxyaddr.ip4.src_addr;\n> > > > > + ((struct sockaddr_in *) &port->raddr.addr)->sin_port = proxyaddr.ip4.src_port;\n> > > > > + }\n> > > >\n> > > > I'm trying to reason through the fallout of setting raddr and not\n> > > > laddr. I understand why we're not setting laddr -- several places in\n> > > > the code rely on the laddr to actually refer to a machine-local address\n> > > > -- but the fact that there is no actual connection from raddr to laddr\n> > > > could cause shenanigans. For example, the ident auth protocol will just\n> > > > break (and it might be nice to explicitly disable it for PROXY\n> > > > connections). 
Are there any other situations where a \"faked\" raddr\n> > > > could throw off Postgres internals?\n> > >\n> > > That's a good point to discuss. I thought about it initially and\n> > > figured it'd be even worse to actually copy over laddr since that\n> > > woudl then suddenly have the IP address belonging to a different\n> > > machine.. And then I forgot to enumerate the other cases.\n> > >\n> > > For ident, disabling the method seems reasonable.\n> > >\n> > > Another thing that shows up with added support for running the proxy\n> > > protocol over Unix sockets, is that PostgreSQL refuses to do SSL over\n> > > Unix sockets. So that check has to be updated to allow it over proxy\n> > > connections. Same for GSSAPI.\n> > >\n> > > An interesting thing is what to do about\n> > > inet_server_addr/inet_server_port. That sort of loops back up to the\n> > > original question of where/how to expose the information about the\n> > > proxy in general (since right now it just logs). Right now you can\n> > > actually use inet_server_port() to see if the connection was proxied\n> > > (as long as it was over tcp).\n> > >\n> > > Attached is an updated, which covers your comments, as well as adds\n> > > unix socket support (per your question and Alvaros confirmed usecase).\n> > > It allows proxy connections over unix sockets, but I saw no need to\n> > > get into unix sockets over the proxy protocol (dealing with paths\n> > > between machines etc).\n> > >\n> > > I changed the additional ListenSocket array to instead declare\n> > > ListenSocket as an array of structs holding two fields. Seems cleaner,\n> > > and especially should there be further extensions needed in the\n> > > future.\n> > >\n> > > I've also added some trivial tests (man that took an ungodly amount of\n> > > fighting perl -- it's clearly been a long time since I used perl\n> > > properly). 
They probably need some more love but it's a start.\n> > >\n> > > And of course rebased.\n> >\n> > Pfft, I was hoping for cfbot to pick it up and test it on a different\n> > platform. Of course, for it to do that, I need to include the test\n> > directory in the Makefile. Here's a new one which adds that, no other\n> > changes.\n>\n> So cfbot didn't like thato ne one bit. Turns out that it's not a great\n> idea to hardcode the username \"mha\" in the tests :)\n>\n> And also changed to only use unix sockets for the tests on linux, and\n> tcp only on windows. Because that's how our tests are supposed to be.\n\nPFA a rebase to make cfbot happy.\n\nThere's another set or review notes from Jacob on March 11, that I\nwill also address, but it's not included in this version.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 29 Jun 2021 10:08:54 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 12:05 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Tue, 2021-03-09 at 11:25 +0100, Magnus Hagander wrote:\n> > I've also added some trivial tests (man that took an ungodly amount of\n> > fighting perl -- it's clearly been a long time since I used perl\n> > properly).\n>\n> Yeah. The tests I'm writing for this and NSS have been the same way;\n> it's a real problem. I'm basically writing supplemental tests in Python\n> as the \"daily driver\", then trying to port whatever is easiest (not\n> much) into Perl, when I get time.\n>\n> == More Notes ==\n>\n> Some additional spec-compliance stuff:\n>\n> > /* Lower 4 bits hold type of connection */\n> > if (proxyheader.fam == 0)\n> > {\n> > /* LOCAL connection, so we ignore the address included */\n> > }\n>\n> (fam == 0) is the UNSPEC case, which isn't necessarily LOCAL. We have\n> to do something different for the LOCAL case:\n\nOh ugh. Yeah, and the comment is wrong too -- it got the \"command\"\nconfused with \"connection family\". Too much copy/paste, I think.\n\n\n> > - \\x0 : LOCAL : [...] The receiver must accept this connection as\n> > valid and must use the real connection endpoints and discard the\n> > protocol block including the family which is ignored.\n>\n> So we should ignore the entire \"protocol block\" (by which I believe\n> they mean the protocol-and-address-family byte) in the case of LOCAL,\n> and just accept it with the original address info intact. That seems to\n> match the sample code in the back of the spec. 
The current behavior in\n> the patch will apply the PROXY behavior incorrectly if the sender sends\n> a LOCAL header with something other than UNSPEC -- which is strange\n> behavior but not explicitly prohibited as far as I can see.\n\nYeah, I think we do the right thing in the \"right usecase\".\n\n\n> We also need to reject all connections that aren't either LOCAL or\n> PROXY commands:\n\nIndeed.\n\n\n> > - other values are unassigned and must not be emitted by senders.\n> > Receivers must drop connections presenting unexpected values here.\n>\n> ...and naturally it'd be Nice (tm) if the tests covered those corner\n> cases.\n\nI think that's covered in the attached update.\n\n\n> Over on the struct side:\n>\n> > + struct\n> > + { /* for TCP/UDP over IPv4, len = 12 */\n> > + uint32 src_addr;\n> > + uint32 dst_addr;\n> > + uint16 src_port;\n> > + uint16 dst_port;\n> > + } ip4;\n> > ... snip ...\n> > + /* TCPv4 */\n> > + if (proxyaddrlen < 12)\n> > + {\n>\n> Given the importance of these hardcoded lengths matching reality, is it\n> possible to add some static assertions to make sure that sizeof(<ipv4\n> block>) == 12 and so on? That would also save any poor souls who are\n> using compilers with nonstandard struct-packing behavior.\n\nYeah, probably makes sense. Added.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 29 Jun 2021 11:48:45 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "Hi Magnus,\r\n\r\nI'm only just starting to page this back into my head, so this is by no\r\nmeans a full review of the v7 changes -- just stuff I've noticed over\r\nthe last day or so of poking around.\r\n\r\nOn Tue, 2021-06-29 at 11:48 +0200, Magnus Hagander wrote:\r\n> On Thu, Mar 11, 2021 at 12:05 AM Jacob Champion <pchampion@vmware.com> wrote:\r\n> > On Tue, 2021-03-09 at 11:25 +0100, Magnus Hagander wrote:\r\n> > > - \\x0 : LOCAL : [...] The receiver must accept this connection as\r\n> > > valid and must use the real connection endpoints and discard the\r\n> > > protocol block including the family which is ignored.\r\n> > \r\n> > So we should ignore the entire \"protocol block\" (by which I believe\r\n> > they mean the protocol-and-address-family byte) in the case of LOCAL,\r\n> > and just accept it with the original address info intact. That seems to\r\n> > match the sample code in the back of the spec. The current behavior in\r\n> > the patch will apply the PROXY behavior incorrectly if the sender sends\r\n> > a LOCAL header with something other than UNSPEC -- which is strange\r\n> > behavior but not explicitly prohibited as far as I can see.\r\n> \r\n> Yeah, I think we do the right thing in the \"right usecase\".\r\n\r\nThe current implementation is, I think, stricter than the spec asks\r\nfor. We're supposed to ignore the family for LOCAL cases, and it's not\r\nclear to me whether we're supposed to also ignore the entire \"fam\"\r\nfamily-and-protocol byte (the phrase \"protocol block\" is not actually\r\ndefined in the spec).\r\n\r\nIt's probably not a big deal in practice, but it could mess with\r\ninteroperability for lazier proxy implementations. 
I think I'll ask the\r\nHAProxy folks for some clarification tomorrow.\r\n\r\n> + <term><varname>proxy_servers</varname> (<type>string</type>)\r\n> + <indexterm>\r\n> + <primary><varname>proxy_servers</varname> configuration parameter</primary>\r\n> + </indexterm>\r\n> + </term>\r\n> + <listitem>\r\n> + <para>\r\n> + A comma separated list of one or more host names, cidr specifications or the\r\n> + literal <literal>unix</literal>, indicating which proxy servers to trust when\r\n> + connecting on the port specified in <xref linkend=\"guc-proxy-port\" />.\r\n\r\nThe documentation mentions that host names are valid in proxy_servers,\r\nbut check_proxy_servers() uses the AI_NUMERICHOST hint with\r\ngetaddrinfo(), so host names get rejected.\r\n\r\n> + GUC_check_errdetail(\"Invalid IP addrress %s\", tok);\r\n\r\ns/addrress/address/\r\n\r\nI've been thinking more about your earlier comment:\r\n\r\n> An interesting thing is what to do about\r\n> inet_server_addr/inet_server_port. That sort of loops back up to the\r\n> original question of where/how to expose the information about the\r\n> proxy in general (since right now it just logs). Right now you can\r\n> actually use inet_server_port() to see if the connection was proxied\r\n> (as long as it was over tcp).\r\n\r\nIMO these should return the \"virtual\" dst_addr/port, instead of\r\nexposing the physical connection information to the client. That way,\r\nif you intend for your proxy to be transparent, you're not advertising\r\nyour network internals to connected clients. It also means that clients\r\ncan reasonably expect to be able to reconnect to the addr:port that we\r\ngive them, and prevents confusion if the proxy is using an address\r\nfamily that the client doesn't even support (e.g. the client is IPv4-\r\nonly but the proxy connects via IPv6).\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 8 Jul 2021 23:42:18 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 1:42 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> Hi Magnus,\n>\n> I'm only just starting to page this back into my head, so this is by no\n> means a full review of the v7 changes -- just stuff I've noticed over\n> the last day or so of poking around.\n>\n> On Tue, 2021-06-29 at 11:48 +0200, Magnus Hagander wrote:\n> > On Thu, Mar 11, 2021 at 12:05 AM Jacob Champion <pchampion@vmware.com> wrote:\n> > > On Tue, 2021-03-09 at 11:25 +0100, Magnus Hagander wrote:\n> > > > - \\x0 : LOCAL : [...] The receiver must accept this connection as\n> > > > valid and must use the real connection endpoints and discard the\n> > > > protocol block including the family which is ignored.\n> > >\n> > > So we should ignore the entire \"protocol block\" (by which I believe\n> > > they mean the protocol-and-address-family byte) in the case of LOCAL,\n> > > and just accept it with the original address info intact. That seems to\n> > > match the sample code in the back of the spec. The current behavior in\n> > > the patch will apply the PROXY behavior incorrectly if the sender sends\n> > > a LOCAL header with something other than UNSPEC -- which is strange\n> > > behavior but not explicitly prohibited as far as I can see.\n> >\n> > Yeah, I think we do the right thing in the \"right usecase\".\n>\n> The current implementation is, I think, stricter than the spec asks\n> for. We're supposed to ignore the family for LOCAL cases, and it's not\n> clear to me whether we're supposed to also ignore the entire \"fam\"\n> family-and-protocol byte (the phrase \"protocol block\" is not actually\n> defined in the spec).\n>\n> It's probably not a big deal in practice, but it could mess with\n> interoperability for lazier proxy implementations. I think I'll ask the\n> HAProxy folks for some clarification tomorrow.\n\nThanks!\n\nYeah, I have no problem being stricter than necessary, unless that\nactually causes any interop problems. 
It's a lot worse to not be\nstrict enough..\n\n\n> > + <term><varname>proxy_servers</varname> (<type>string</type>)\n> > + <indexterm>\n> > + <primary><varname>proxy_servers</varname> configuration parameter</primary>\n> > + </indexterm>\n> > + </term>\n> > + <listitem>\n> > + <para>\n> > + A comma separated list of one or more host names, cidr specifications or the\n> > + literal <literal>unix</literal>, indicating which proxy servers to trust when\n> > + connecting on the port specified in <xref linkend=\"guc-proxy-port\" />.\n>\n> The documentation mentions that host names are valid in proxy_servers,\n> but check_proxy_servers() uses the AI_NUMERICHOST hint with\n> getaddrinfo(), so host names get rejected.\n\nAh, good point. Should say \"ip addresses\".\n\n>\n> > + GUC_check_errdetail(\"Invalid IP addrress %s\", tok);\n>\n> s/addrress/address/\n\nOops.\n\n\n> I've been thinking more about your earlier comment:\n>\n> > An interesting thing is what to do about\n> > inet_server_addr/inet_server_port. That sort of loops back up to the\n> > original question of where/how to expose the information about the\n> > proxy in general (since right now it just logs). Right now you can\n> > actually use inet_server_port() to see if the connection was proxied\n> > (as long as it was over tcp).\n>\n> IMO these should return the \"virtual\" dst_addr/port, instead of\n> exposing the physical connection information to the client. That way,\n> if you intend for your proxy to be transparent, you're not advertising\n> your network internals to connected clients. It also means that clients\n> can reasonably expect to be able to reconnect to the addr:port that we\n> give them, and prevents confusion if the proxy is using an address\n> family that the client doesn't even support (e.g. 
the client is IPv4-\n> only but the proxy connects via IPv6).\n\nThat reasoning I think makes a lot of sense, especially with the\ncomment about being able to connect back to it.\n\nThe question at that point extends to, would we also add extra\nfunctions to get the data on the proxy connection itself? Maybe add a\ninet_proxy_addr()/inet_proxy_port()? Better names?\n\nPFA a patch that fixes the above errors, and changes\ninet_server_addr()/inet_server_port(). Does not yet add anything to\nreceive the actual local port in this case.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Mon, 12 Jul 2021 18:28:40 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Mon, 2021-07-12 at 18:28 +0200, Magnus Hagander wrote:\r\n> Yeah, I have no problem being stricter than necessary, unless that\r\n> actually causes any interop problems. It's a lot worse to not be\r\n> strict enough..\r\n\r\nAgreed. Haven't heard back from the HAProxy mailing list yet, so\r\nstaying strict seems reasonable in the meantime. That could always be\r\nrolled back later.\r\n\r\n> > I've been thinking more about your earlier comment:\r\n> > \r\n> > > An interesting thing is what to do about\r\n> > > inet_server_addr/inet_server_port. That sort of loops back up to the\r\n> > > original question of where/how to expose the information about the\r\n> > > proxy in general (since right now it just logs). Right now you can\r\n> > > actually use inet_server_port() to see if the connection was proxied\r\n> > > (as long as it was over tcp).\r\n> > \r\n> > IMO these should return the \"virtual\" dst_addr/port, instead of\r\n> > exposing the physical connection information to the client. That way,\r\n> > if you intend for your proxy to be transparent, you're not advertising\r\n> > your network internals to connected clients. It also means that clients\r\n> > can reasonably expect to be able to reconnect to the addr:port that we\r\n> > give them, and prevents confusion if the proxy is using an address\r\n> > family that the client doesn't even support (e.g. the client is IPv4-\r\n> > only but the proxy connects via IPv6).\r\n> \r\n> That reasoning I think makes a lot of sense, especially with the\r\n> comment about being able to connect back to it.\r\n> \r\n> The question at that point extends to, would we also add extra\r\n> functions to get the data on the proxy connection itself? Maybe add a\r\n> inet_proxy_addr()/inet_proxy_port()? Better names?\r\n\r\nWhat's the intended use case? I have trouble viewing those as anything\r\nbut information disclosure vectors, but I'm jaded. 
:)\r\n\r\nIf the goal is to give a last-ditch debugging tool to someone whose\r\nproxy isn't behaving properly -- though I'd hope the proxy in question\r\nhas its own ways to debug its behavior -- maybe they could be locked\r\nbehind one of the pg_monitor roles, so that they're only available to\r\nsomeone who could get that information anyway?\r\n\r\n> PFA a patch that fixes the above errors, and changes\r\n> inet_server_addr()/inet_server_port(). Does not yet add anything to\r\n> receive the actual local port in this case.\r\n\r\nLooking good in local testing. I'm going to reread the spec with fresh\r\neyes and do a full review pass, but this is shaping up nicely IMO.\r\n\r\nSomething that I haven't thought about very hard yet is proxy\r\nauthentication, but I think the simple IP authentication will be enough\r\nfor a first version. For the Unix socket case, it looks like anyone\r\ncurrently relying on peer auth will need to switch to a\r\nunix_socket_group/_permissions model. For now, that sounds like a\r\nreasonable v1 restriction, though I think not being able to set the\r\nproxy socket's permissions separately from the \"normal\" one might lead\r\nto some complications in more advanced setups.\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 14 Jul 2021 18:23:59 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 8:24 PM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Mon, 2021-07-12 at 18:28 +0200, Magnus Hagander wrote:\n> > Yeah, I have no problem being stricter than necessary, unless that\n> > actually causes any interop problems. It's a lot worse to not be\n> > strict enough..\n>\n> Agreed. Haven't heard back from the HAProxy mailing list yet, so\n> staying strict seems reasonable in the meantime. That could always be\n> rolled back later.\n\nAny further feedback from them now, two months later? :)\n\n(Sorry, I was out on vacation for the end of the last CF, so didn't\nget around to this one, but it seemed there'd be plenty of time in\nthis CF)\n\n\n> > > I've been thinking more about your earlier comment:\n> > >\n> > > > An interesting thing is what to do about\n> > > > inet_server_addr/inet_server_port. That sort of loops back up to the\n> > > > original question of where/how to expose the information about the\n> > > > proxy in general (since right now it just logs). Right now you can\n> > > > actually use inet_server_port() to see if the connection was proxied\n> > > > (as long as it was over tcp).\n> > >\n> > > IMO these should return the \"virtual\" dst_addr/port, instead of\n> > > exposing the physical connection information to the client. That way,\n> > > if you intend for your proxy to be transparent, you're not advertising\n> > > your network internals to connected clients. It also means that clients\n> > > can reasonably expect to be able to reconnect to the addr:port that we\n> > > give them, and prevents confusion if the proxy is using an address\n> > > family that the client doesn't even support (e.g. 
the client is IPv4-\n> > > only but the proxy connects via IPv6).\n> >\n> > That reasoning I think makes a lot of sense, especially with the\n> > comment about being able to connect back to it.\n> >\n> > The question at that point extends to, would we also add extra\n> > functions to get the data on the proxy connection itself? Maybe add a\n> > inet_proxy_addr()/inet_proxy_port()? Better names?\n>\n> What's the intended use case? I have trouble viewing those as anything\n> but information disclosure vectors, but I'm jaded. :)\n\n\"Covering all the bases\"?\n\nI'm not entirely sure what the point is of the *existing* functions\nfor that though, so I'm definitely not wedded to including it.\n\n\n> If the goal is to give a last-ditch debugging tool to someone whose\n> proxy isn't behaving properly -- though I'd hope the proxy in question\n> has its own ways to debug its behavior -- maybe they could be locked\n> behind one of the pg_monitor roles, so that they're only available to\n> someone who could get that information anyway?\n\nYeah, agreed.\n\n\n> > PFA a patch that fixes the above errors, and changes\n> > inet_server_addr()/inet_server_port(). Does not yet add anything to\n> > receive the actual local port in this case.\n>\n> Looking good in local testing. I'm going to reread the spec with fresh\n> eyes and do a full review pass, but this is shaping up nicely IMO.\n\nThanks!\n\n\n> Something that I haven't thought about very hard yet is proxy\n> authentication, but I think the simple IP authentication will be enough\n> for a first version. For the Unix socket case, it looks like anyone\n> currently relying on peer auth will need to switch to a\n> unix_socket_group/_permissions model. 
For now, that sounds like a\n> reasonable v1 restriction, though I think not being able to set the\n> proxy socket's permissions separately from the \"normal\" one might lead\n> to some complications in more advanced setups.\n\nAgreed in principle, but I think those are some quite uncommon\nusecases, so definitely something we don't need to cover in a first\nfeature.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 7 Sep 2021 12:24:07 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Tue, 2021-09-07 at 12:24 +0200, Magnus Hagander wrote:\r\n> On Wed, Jul 14, 2021 at 8:24 PM Jacob Champion <pchampion@vmware.com> wrote:\r\n> > On Mon, 2021-07-12 at 18:28 +0200, Magnus Hagander wrote:\r\n> > > Yeah, I have no problem being stricter than necessary, unless that\r\n> > > actually causes any interop problems. It's a lot worse to not be\r\n> > > strict enough..\r\n> > \r\n> > Agreed. Haven't heard back from the HAProxy mailing list yet, so\r\n> > staying strict seems reasonable in the meantime. That could always be\r\n> > rolled back later.\r\n> \r\n> Any further feedback from them now, two months later? :)\r\n\r\nNot yet :( I've bumped the thread; in the meantime I still think the\r\nstricter operation is fine, since in the worst case you just make it\r\nless strict in the future.\r\n\r\n> (Sorry, I was out on vacation for the end of the last CF, so didn't\r\n> get around to this one, but it seemed there'd be plenty of time in\r\n> this CF)\r\n\r\nNo worries!\r\n\r\n> > > The question at that point extends to, would we also add extra\r\n> > > functions to get the data on the proxy connection itself? Maybe add a\r\n> > > inet_proxy_addr()/inet_proxy_port()? Better names?\r\n> > \r\n> > What's the intended use case? I have trouble viewing those as anything\r\n> > but information disclosure vectors, but I'm jaded. :)\r\n> \r\n> \"Covering all the bases\"?\r\n> \r\n> I'm not entirely sure what the point is of the *existing* functions\r\n> for that though, so I'm definitely not wedded to including it.\r\n\r\nI guess I'm in the same boat. I'm probably not the right person to\r\nweigh in.\r\n\r\n> > Looking good in local testing. I'm going to reread the spec with fresh\r\n> > eyes and do a full review pass, but this is shaping up nicely IMO.\r\n> \r\n> Thanks!\r\n\r\nI still owe you that overall review. 
Hoping to get to it this week.\r\n\r\n> > Something that I haven't thought about very hard yet is proxy\r\n> > authentication, but I think the simple IP authentication will be enough\r\n> > for a first version. For the Unix socket case, it looks like anyone\r\n> > currently relying on peer auth will need to switch to a\r\n> > unix_socket_group/_permissions model. For now, that sounds like a\r\n> > reasonable v1 restriction, though I think not being able to set the\r\n> > proxy socket's permissions separately from the \"normal\" one might lead\r\n> > to some complications in more advanced setups.\r\n> \r\n> Agreed in principle, but I think those are some quite uncommon\r\n> usecases, so definitely something we don't need to cover in a first\r\n> feature.\r\n\r\nHm. I guess I'm overly optimistic that \"properly securing your\r\ndatabase\" is not such an uncommon case, but... :)\r\n\r\n--Jacob\r\n",
"msg_date": "Wed, 8 Sep 2021 18:51:27 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, 2021-09-08 at 18:51 +0000, Jacob Champion wrote:\r\n> I still owe you that overall review. Hoping to get to it this week.\r\n\r\nAnd here it is. I focused on things other than UnwrapProxyConnection()\r\nfor this round, since I think that piece is looking solid.\r\n\r\n> + if (port->isProxy)\r\n> + {\r\n> + ereport(LOG,\r\n> + (errcode_for_socket_access(),\r\n> + errmsg(\"Ident authentication cannot be used over PROXY connections\")));\r\n\r\nWhat are the rules on COMMERROR vs LOG when dealing with authentication\r\ncode? I always thought COMMERROR was required, but I see now that LOG\r\n(among others) is suppressed to the client during authentication.\r\n\r\n> #ifdef USE_SSL\r\n> /* No SSL when disabled or on Unix sockets */\r\n> - if (!LoadedSSL || IS_AF_UNIX(port->laddr.addr.ss_family))\r\n> + if (!LoadedSSL || (IS_AF_UNIX(port->laddr.addr.ss_family) && !port->isProxy))\r\n> SSLok = 'N';\r\n> else\r\n> SSLok = 'S'; /* Support for SSL */\r\n> @@ -2087,7 +2414,7 @@ retry1:\r\n> \r\n> #ifdef ENABLE_GSS\r\n> /* No GSSAPI encryption when on Unix socket */\r\n> - if (!IS_AF_UNIX(port->laddr.addr.ss_family))\r\n> + if (!IS_AF_UNIX(port->laddr.addr.ss_family) || port->isProxy)\r\n> GSSok = 'G';\r\n\r\nNow that we have port->daddr, could these checks be simplified to just\r\nIS_AF_UNIX(port->daddr...)? Or is there a corner case I'm missing for\r\nthe port->isProxy case?\r\n\r\n> + * Note: AuthenticationTimeout is applied here while waiting for the\r\n> + * startup packet, and then again in InitPostgres for the duration of any\r\n> + * authentication operations. 
So a hostile client could tie up the\r\n> + * process for nearly twice AuthenticationTimeout before we kick him off.\r\n\r\nThis comment needs to be adjusted after the move; waiting for the\r\nstartup packet comes later, and it looks like we can now tie up 3x the\r\ntimeout for the proxy case.\r\n\r\n> + /* Check if this is a proxy connection and if so unwrap the proxying */\r\n> + if (port->isProxy)\r\n> + {\r\n> + enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);\r\n> + if (UnwrapProxyConnection(port) != STATUS_OK)\r\n> + proc_exit(0);\r\n\r\nI think the timeout here could comfortably be substantially less than\r\nthe overall authentication timeout, since the proxy should send its\r\nheader immediately even if the client takes its time with the startup\r\npacket. The spec suggests allowing 3 seconds minimum to cover a\r\nretransmission. Maybe something to tune in the future?\r\n\r\n> + /* Also listen to the PROXY port on this address, if configured */\r\n> + if (ProxyPortNumber)\r\n> + {\r\n> + if (strcmp(curhost, \"*\") == 0)\r\n> + socket = StreamServerPort(AF_UNSPEC, NULL,\r\n> + (unsigned short) ProxyPortNumber,\r\n> + NULL,\r\n> + ListenSocket, MAXLISTEN);\r\n\r\nSorry if you already talked about this upthread somewhere, but it looks\r\nlike another downside of treating \"proxy mode\" as a server-wide on/off\r\nswitch is that it cuts the effective MAXLISTEN in half, from 64 to 32,\r\nsince we're opening two sockets for every address. If I've understood\r\nthat correctly, it might be worth mentioning in the docs.\r\n\r\n> - if (!success && elemlist != NIL)\r\n> + if (socket == NULL && elemlist != NIL)\r\n> ereport(FATAL,\r\n> (errmsg(\"could not create any TCP/IP sockets\")));\r\n\r\nWith this change in PostmasterMain, it looks like `success` is no\r\nlonger a useful variable. 
But I'm not convinced that this is the\r\ncorrect logic -- this is just checking to see if the last socket\r\ncreation succeeded, as opposed to seeing if any of them succeeded. Is\r\nthat what you intended?\r\n\r\n> +plan tests => 25;\r\n> +\r\n> +my $node = get_new_node('node');\r\n\r\nThe TAP test will need to be rebased over the changes in 201a76183e.\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 9 Sep 2021 23:44:09 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 1:44 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Wed, 2021-09-08 at 18:51 +0000, Jacob Champion wrote:\n> > I still owe you that overall review. Hoping to get to it this week.\n>\n> And here it is. I focused on things other than UnwrapProxyConnection()\n> for this round, since I think that piece is looking solid.\n\nThanks!\n\n\n> > + if (port->isProxy)\n> > + {\n> > + ereport(LOG,\n> > + (errcode_for_socket_access(),\n> > + errmsg(\"Ident authentication cannot be used over PROXY connections\")));\n>\n> What are the rules on COMMERROR vs LOG when dealing with authentication\n> code? I always thought COMMERROR was required, but I see now that LOG\n> (among others) is suppressed to the client during authentication.\n\nI honestly don't know :) In this case, LOG is what's used for all the\nother message in errors in ident_inet(), so I picked it for\nconsistency.\n\n\n> > #ifdef USE_SSL\n> > /* No SSL when disabled or on Unix sockets */\n> > - if (!LoadedSSL || IS_AF_UNIX(port->laddr.addr.ss_family))\n> > + if (!LoadedSSL || (IS_AF_UNIX(port->laddr.addr.ss_family) && !port->isProxy))\n> > SSLok = 'N';\n> > else\n> > SSLok = 'S'; /* Support for SSL */\n> > @@ -2087,7 +2414,7 @@ retry1:\n> >\n> > #ifdef ENABLE_GSS\n> > /* No GSSAPI encryption when on Unix socket */\n> > - if (!IS_AF_UNIX(port->laddr.addr.ss_family))\n> > + if (!IS_AF_UNIX(port->laddr.addr.ss_family) || port->isProxy)\n> > GSSok = 'G';\n>\n> Now that we have port->daddr, could these checks be simplified to just\n> IS_AF_UNIX(port->daddr...)? Or is there a corner case I'm missing for\n> the port->isProxy case?\n\nYeah, I think they could.\n\n\n> > + * Note: AuthenticationTimeout is applied here while waiting for the\n> > + * startup packet, and then again in InitPostgres for the duration of any\n> > + * authentication operations. 
So a hostile client could tie up the\n> * process for nearly twice AuthenticationTimeout before we kick him off.\n>\n> This comment needs to be adjusted after the move; waiting for the\n> startup packet comes later, and it looks like we can now tie up 3x the\n> timeout for the proxy case.\n\nGood point.\n\n\n> > + /* Check if this is a proxy connection and if so unwrap the proxying */\n> > + if (port->isProxy)\n> > + {\n> > + enable_timeout_after(STARTUP_PACKET_TIMEOUT, AuthenticationTimeout * 1000);\n> > + if (UnwrapProxyConnection(port) != STATUS_OK)\n> > + proc_exit(0);\n>\n> I think the timeout here could comfortably be substantially less than\n> the overall authentication timeout, since the proxy should send its\n> header immediately even if the client takes its time with the startup\n> packet. The spec suggests allowing 3 seconds minimum to cover a\n> retransmission. Maybe something to tune in the future?\n\nMaybe. I'll leave it with a new comment for now about us doing it, and\nthat we may want to consider it in the future.\n\n>\n> > + /* Also listen to the PROXY port on this address, if configured */\n> > + if (ProxyPortNumber)\n> > + {\n> > + if (strcmp(curhost, \"*\") == 0)\n> > + socket = StreamServerPort(AF_UNSPEC, NULL,\n> > + (unsigned short) ProxyPortNumber,\n> > + NULL,\n> > + ListenSocket, MAXLISTEN);\n>\n> Sorry if you already talked about this upthread somewhere, but it looks\n> like another downside of treating \"proxy mode\" as a server-wide on/off\n> switch is that it cuts the effective MAXLISTEN in half, from 64 to 32,\n> since we're opening two sockets for every address. If I've understood\n> that correctly, it might be worth mentioning in the docs.\n\nCorrect. 
I don't see a way to avoid that without complicating things\n(as long as we want the ports to be separate), but I also don't see it\nas something that's likely to be an issue in reality.\n\nI would agree with documenting it, but I can't actually find us\ndocumenting the MAXLISTEN value anywhere. Do we?\n\n\n> > - if (!success && elemlist != NIL)\n> > + if (socket == NULL && elemlist != NIL)\n> > ereport(FATAL,\n> > (errmsg(\"could not create any TCP/IP sockets\")));\n>\n> With this change in PostmasterMain, it looks like `success` is no\n> longer a useful variable. But I'm not convinced that this is the\n> correct logic -- this is just checking to see if the last socket\n> creation succeeded, as opposed to seeing if any of them succeeded. Is\n> that what you intended?\n\nEh, no, that's clearly a code-moving-bug.\n\nI think the reasonable thing is to succeed if we create either a\nregular socket *or* a proxy one, but FATAL out if you configured\neither of them but we failed to create any.\n\n> > +plan tests => 25;\n> >\n> > +my $node = get_new_node('node');\n>\n> The TAP test will need to be rebased over the changes in 201a76183e.\n\nDone, and adjustments above according to your comments, along with a\nsmall docs fix \"a proxy connection is done\" -> \"a proxy connection is\nmade\".\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 28 Sep 2021 15:23:07 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "> On 28 Sep 2021, at 15:23, Magnus Hagander <magnus@hagander.net> wrote:\n> On Fri, Sep 10, 2021 at 1:44 AM Jacob Champion <pchampion@vmware.com> wrote:\n\n>> The TAP test will need to be rebased over the changes in 201a76183e.\n> \n> Done\n\nAnd now the TAP test will need to be rebased over the changes in\nb3b4d8e68ae83f432f43f035c7eb481ef93e1583.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 3 Nov 2021 14:36:15 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, Nov 3, 2021 at 2:36 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 28 Sep 2021, at 15:23, Magnus Hagander <magnus@hagander.net> wrote:\n> > On Fri, Sep 10, 2021 at 1:44 AM Jacob Champion <pchampion@vmware.com>\n> wrote:\n>\n> >> The TAP test will need to be rebased over the changes in 201a76183e.\n> >\n> > Done\n>\n> And now the TAP test will need to be rebased over the changes in\n> b3b4d8e68ae83f432f43f035c7eb481ef93e1583.\n>\n\nThanks for the pointer, PFA a rebase.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Thu, 4 Nov 2021 12:03:53 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Thu, 2021-11-04 at 12:03 +0100, Magnus Hagander wrote:\r\n> Thanks for the pointer, PFA a rebase.\r\n\r\nI think the Unix socket handling needs the same \"success\" fix that you\r\napplied to the TCP socket handling above it:\r\n\r\n> @@ -1328,9 +1364,23 @@ PostmasterMain(int argc, char *argv[])\r\n> ereport(WARNING,\r\n> (errmsg(\"could not create Unix-domain socket in directory \\\"%s\\\"\",\r\n> socketdir)));\r\n> +\r\n> + if (ProxyPortNumber)\r\n> + {\r\n> + socket = StreamServerPort(AF_UNIX, NULL,\r\n> + (unsigned short) ProxyPortNumber,\r\n> + socketdir,\r\n> + ListenSocket, MAXLISTEN);\r\n> + if (socket)\r\n> + socket->isProxy = true;\r\n> + else\r\n> + ereport(WARNING,\r\n> + (errmsg(\"could not create Unix-domain PROXY socket for \\\"%s\\\"\",\r\n> + socketdir)));\r\n> + }\r\n> }\r\n> \r\n> - if (!success && elemlist != NIL)\r\n> + if (socket == NULL && elemlist != NIL)\r\n> ereport(FATAL,\r\n> (errmsg(\"could not create any Unix-domain sockets\")));\r\n\r\nOther than that, I can find nothing else to improve, and I think this\r\nis ready for more eyes than mine. :)\r\n\r\n--\r\n\r\nTo tie off some loose ends from upthread:\r\n\r\nI didn't find any MAXLISTEN documentation either, so I guess it's only\r\na documentation issue if someone runs into it, heh.\r\n\r\nI was not able to find any other cases (besides ident) where using\r\ndaddr instead of laddr would break things. I am going a bit snow-blind\r\non the patch, though, and there's a lot of auth code.\r\n\r\nI never did hear back from the PROXY spec maintainer on how strict to\r\nbe with LOCAL; another contributor did chime in but only to add that\r\nthey didn't know the answer. 
That conversation is at [1], in case\r\nsomeone picks it up in the future.\r\n\r\nA summary of possible improvements talked about upthread, for a future\r\nv2:\r\n\r\n- SQL functions to get the laddr info (scoped to superusers, somehow),\r\nif there's a use case for them\r\n\r\n- Setting up PROXY Unix socket permissions separately from the \"main\"\r\nsocket\r\n\r\n- Allowing PROXY-only communication (disable the \"main\" port)\r\n\r\nThanks,\r\n--Jacob\r\n\r\n[1] https://www.mail-archive.com/haproxy@formilux.org/msg40899.html\r\n",
"msg_date": "Mon, 15 Nov 2021 23:03:18 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 12:03 AM Jacob Champion <pchampion@vmware.com> wrote:\n>\n> On Thu, 2021-11-04 at 12:03 +0100, Magnus Hagander wrote:\n> > Thanks for the pointer, PFA a rebase.\n>\n> I think the Unix socket handling needs the same \"success\" fix that you\n> applied to the TCP socket handling above it:\n>\n> > @@ -1328,9 +1364,23 @@ PostmasterMain(int argc, char *argv[])\n> > ereport(WARNING,\n> > (errmsg(\"could not create Unix-domain socket in directory \\\"%s\\\"\",\n> > socketdir)));\n> > +\n> > + if (ProxyPortNumber)\n> > + {\n> > + socket = StreamServerPort(AF_UNIX, NULL,\n> > + (unsigned short) ProxyPortNumber,\n> > + socketdir,\n> > + ListenSocket, MAXLISTEN);\n> > + if (socket)\n> > + socket->isProxy = true;\n> > + else\n> > + ereport(WARNING,\n> > + (errmsg(\"could not create Unix-domain PROXY socket for \\\"%s\\\"\",\n> > + socketdir)));\n> > + }\n> > }\n> >\n> > - if (!success && elemlist != NIL)\n> > + if (socket == NULL && elemlist != NIL)\n> > ereport(FATAL,\n> > (errmsg(\"could not create any Unix-domain sockets\")));\n>\n> Other than that, I can find nothing else to improve, and I think this\n> is ready for more eyes than mine. :)\n\nHere's another rebase on top of the AF_UNIX patch.\n\n\n\n> To tie off some loose ends from upthread:\n>\n> I didn't find any MAXLISTEN documentation either, so I guess it's only\n> a documentation issue if someone runs into it, heh.\n>\n> I was not able to find any other cases (besides ident) where using\n> daddr instead of laddr would break things. 
I am going a bit snow-blind\n> on the patch, though, and there's a lot of auth code.\n\nYeah, that's definitely a good reason for more eyes on it.\n\n\n> A summary of possible improvements talked about upthread, for a future\n> v2:\n>\n> - SQL functions to get the laddr info (scoped to superusers, somehow),\n> if there's a use case for them\n>\n> - Setting up PROXY Unix socket permissions separately from the \"main\"\n> socket\n>\n> - Allowing PROXY-only communication (disable the \"main\" port)\n\nThese all seem useful, but I'm liking the idea of putting them in a\nv2, to avoid expanding the scope too much.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Fri, 25 Feb 2022 11:41:05 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "A general question on this feature: AFAICT, you can only send the proxy \nheader once at the beginning of the connection. So this wouldn't be of \nuse for PostgreSQL-protocol connection poolers (pgbouncer, pgpool), \nwhere the same server connection can be used for clients from different \nsource addresses. Do I understand that correctly?\n\n\n",
"msg_date": "Wed, 9 Mar 2022 17:23:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Wed, Mar 9, 2022 at 5:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> A general question on this feature: AFAICT, you can only send the proxy\n> header once at the beginning of the connection. So this wouldn't be of\n> use for PostgreSQL-protocol connection poolers (pgbouncer, pgpool),\n> where the same server connection can be used for clients from different\n> source addresses. Do I understand that correctly?\n\nCorrect. It's only sent at connection startup, so if you're re-using\nthe connection it would also re-use the IP address of the first one.\n\nFor reusing the connection for multiple clients, you'd want something\ndifferent, like a \"privileged mode in the tunnel\" that the pooler can\nhandle.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 9 Mar 2022 17:29:34 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "Hi,\r\n\r\nI've been able to test the patch. Here is a recap of the experimentation.\r\n\r\n# Setup\r\n\r\nAll tests have been done with 3 VMs (PostgreSQL, HAproxy, psql client) on\r\nDebian 11 communicating over a private network.\r\n* PostgreSQL has been built with proxy_protocol_11.patch applied on master branch (465ab24296).\r\n* psql client is from postgresql-client-13 from Debian 11 repository.\r\n* HAproxy version used is 2.5.5-1~bpo11+1 installed from https://haproxy.debian.net\r\n\r\n# Configuration\r\n\r\nPostgreSQL has been configured to listen only on its private IP. To enable\r\nproxy protocol support `proxy_port` has been configured to `5431` and\r\n`proxy_servers` to `10.0.0.0/24`. `log_connections` has been turned on to make\r\nsure the correct IP address is logged. `log_min_duration_statement` has been\r\nconfigured to 0 to log all queries. Finally `log_destination` has been\r\nconfigured to `csvlog`.\r\n\r\npg_hba.conf is like this:\r\n\r\n local all all trust\r\n host all all 127.0.0.1/32 trust\r\n host all all ::1/128 trust\r\n local replication all trust\r\n host replication all 127.0.0.1/32 trust\r\n host replication all ::1/128 trust\r\n host all all 10.0.0.208/32 md5\r\n\r\nWhere 10.0.0.208 is the IP of the psql client's VM.\r\n\r\nHAproxy has two frontends, one for proxy protocol (port 5431) and one for\r\nregular TCP traffic. 
The configuration looks like this:\r\n\r\n listen postgresql\r\n bind 10.0.0.222:5432\r\n server pg 10.0.0.253:5432 check\r\n\r\n listen postgresql_proxy\r\n bind 10.0.0.222:5431\r\n server pg 10.0.0.253:5431 send-proxy-v2\r\n\r\nWhere 10.0.0.222 is the IP of HAproxy's VM and 10.0.0.253 is the IP of\r\nPostgreSQL's VM.\r\n\r\n# Tests\r\n\r\n* from psql's vm to haproxy on port 5432 (no proxy protocol)\r\n --> connection denied by pg_hba.conf, as expected\r\n\r\n* from psql's vm to postgresql's VM on port 5432 (no proxy protocol)\r\n --> connection success with psql's vm ip in logfile and pg_stat_activity\r\n\r\n* from psql's vm to postgresql's VM on port 5431 (proxy protocol)\r\n --> unable to open a connection, as expected\r\n\r\n* from psql's vm to haproxy on port 5431 (proxy protocol)\r\n --> connection success with psql's vm ip in logfile and pg_stat_activity\r\n\r\nI've also tested without proxy protocol enabled (and pg_hba.conf updated\r\naccordingly), PostgreSQL behaves as expected.\r\n\r\n# Conclusion\r\n\r\nFrom my point of view the documentation is clear enough and the feature works\r\nas expected.",
"msg_date": "Fri, 01 Apr 2022 22:16:59 +0000",
"msg_from": "wilfried roset <wilfried.roset@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 12:17 AM wilfried roset <wilfried.roset@gmail.com>\nwrote:\n\n> Hi,\n>\n> I've been able to test the patch. Here is a recap of the experimentation.\n>\n> # Setup\n>\n> All tests have been done witch 3 VMs (PostgreSQL, HAproxy, psql client) on\n> Debian 11 communicating over private network.\n> * PostgreSQL have been built with proxy_protocol_11.patch applied on\n> master branch (465ab24296).\n> * psql client is from postgresql-client-13 from Debian 11 repository.\n> * HAproxy version used is 2.5.5-1~bpo11+1 installed from\n> https://haproxy.debian.net\n>\n> # Configuration\n>\n> PostgresSQL has been configured to listen only on its private IP. To enable\n> proxy protocol support `proxy_port` has been configured to `5431` and\n> `proxy_servers` to `10.0.0.0/24` <http://10.0.0.0/24>. `log_connections`\n> has been turned on to make\n> sure the correct IP address is logged. `log_min_duration_statement` has\n> been\n> configured to 0 to log all queries. Finally `log_destination` has been\n> configured to `csvlog`.\n>\n> pg_hba.conf is like this:\n>\n> local all all trust\n> host all all 127.0.0.1/32 trust\n> host all all ::1/128 trust\n> local replication all trust\n> host replication all 127.0.0.1/32 trust\n> host replication all ::1/128 trust\n> host all all 10.0.0.208/32 md5\n>\n> Where 10.0.0.208 is the IP host the psql client's VM.\n>\n> HAproxy has two frontends, one for proxy protocol (port 5431) and one for\n> regular TCP traffic. 
The configuration looks like this:\n>\n> listen postgresql\n> bind 10.0.0.222:5432\n> server pg 10.0.0.253:5432 check\n>\n> listen postgresql_proxy\n> bind 10.0.0.222:5431\n> server pg 10.0.0.253:5431 send-proxy-v2\n>\n> Where 10.0.0.222 is the IP of HAproxy's VM and 10.0.0.253 is the IP of\n> PostgreSQL's VM.\n>\n> # Tests\n>\n> * from psql's vm to haproxy on port 5432 (no proxy protocol)\n> --> connection denied by pg_hba.conf, as expected\n>\n> * from psql's vm to postgresql's VM on port 5432 (no proxy protocol)\n> --> connection success with psql's vm ip in logfile and pg_stat_activity\n>\n> * from psql's vm to postgresql's VM on port 5431 (proxy protocol)\n> --> unable to open a connection, as expected\n>\n> * from psql's vm to haproxy on port 5431 (proxy protocol)\n> --> connection success with psql's vm ip in logfile and pg_stat_activity\n>\n> I've also tested without proxy protocol enable (and pg_hba.conf updated\n> accordingly), PostgreSQL behave as expected.\n>\n> # Conclusion\n>\n> From my point of view the documentation is clear enough and the feature\n> works\n> as expected.\n\n\nHi!\n\nThanks for this review and testing!\n\nI think it could do with at least one more look-over at the source code\nlevel as well at this point though since it's been sitting around for a\nwhile, so it won't make it in for this deadline. But hopefully I can get it\nin early in the next cycle!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Fri, 8 Apr 2022 13:58:21 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "This needs a rebase, but after that I expect it to be RfC.\r\n\r\n--Jacob\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 28 Jul 2022 20:05:37 +0000",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
},
{
"msg_contents": "On 7/28/22 22:05, Jacob Champion wrote:\n> This needs a rebase, but after that I expect it to be RfC.\n> \n> --Jacob\n> \n> The new status of this patch is: Waiting on Author\n\n\nHello folks,\n\nThank you all for this awesome work!\n\nI've been looking for this feature for years now. Last year, I tried to\nrebase the patch without success. Unfortunately, this is out of my league.\n\nMagnus, please let me know if I can help.\n\nHave a nice day,\nJulien\n\n\n",
"msg_date": "Sat, 3 Feb 2024 12:37:57 +0100",
"msg_from": "Julien Riou <julien@riou.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PROXY protocol support"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nWe came across an issue where the checkpointer writes to the older\ntimeline while a promotion is ongoing after reaching the recovery point\nin a PITR, when there are prepared transactions before the recovery\npoint. We came across this issue first in REL_12_STABLE and saw that it\nalso exists in devel.\n\nSetup:\nPFA a minimal reproduction script repro.txt.\n\nAfter running the script, we notice that the checkpointer has written\nthe end-of-recovery shutdown checkpoint in the previous timeline (tli =\n1), i.e. it wrote into the WAL segment 000000010000000000000003 instead\nof writing to the WAL segment 000000020000000000000003, causing it to\noverwrite WAL records past the recovery point (please see attached diff\noutput file waldiff.diff) in 000000010000000000000003.\n\nAlso, performing a subsequent shutdown on the recovered server may cause\nthe next shutdown checkpoint record to be written, again, to the\nprevious timeline, i.e. to 000000010000000000000003. A subsequent server\nstart will fail as the startup process will be unable to find the\ncheckpoint in the latest timeline (000000020000000000000003) and we will\nget:\n\n...\nLOG: invalid record length at 0/3016FB8: wanted 24, got 0\nLOG: invalid primary checkpoint record\nPANIC: could not locate a valid checkpoint record\n...\n\nRCA:\n\nWhen there are prepared transactions in an older timeline, in the\ncheckpointer, a call to CheckPointTwoPhase() and subsequently to\nXlogReadTwoPhaseData() and subsequently to read_local_xlog_page() leads\nto the following line:\n\nread_upto = GetXLogReplayRecPtr(&ThisTimeLineID);\n\nGetXLogReplayRecPtr() will change ThisTimeLineID to 1, in order to read\nthe two phase WAL records in the older timeline. This variable will\nremain unchanged and the checkpointer ends up writing the checkpoint\nrecord into the older WAL segment (when XLogBeginInsert() is called\nwithin CreateCheckPoint(), the value is still 1). 
The value is not\nsynchronized as even if RecoveryInProgress() is called,\nxlogctl->SharedRecoveryState is not RECOVERY_STATE_DONE\n(SharedRecoveryInProgress = true in older versions) as the startup\nprocess waits for the checkpointer inside RequestCheckpoint() (since\nrecovery_target_action='promote' involves a non-fast promotion). Thus,\nInitXLOGAccess() is not called and the value of ThisTimeLineID is not\nupdated before the checkpoint record write.\n\nSince 1148e22a82e, GetXLogReplayRecPtr() is called with ThisTimeLineID\ninstead of a local variable, within read_local_xlog_page().\n\nPFA a small patch that fixes the problem by explicitly calling\nInitXLOGAccess() in CheckPointTwoPhase(), after the two phase state data\nis read, in order to update ThisTimeLineID to the latest timeline. It is\nokay to call InitXLOGAccess() as it is lightweight and would mostly be\na no-op.\n\nRegards,\nSoumyadeep, Kevin and Jimmy\nVMWare",
"msg_date": "Tue, 2 Mar 2021 17:56:03 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "At Tue, 2 Mar 2021 17:56:03 -0800, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote in \n> Hello hackers,\n> \n> We came across an issue where the checkpointer writes to the older\n> timeline while a promotion is ongoing after reaching the recovery point\n> in a PITR, when there are prepared transactions before the recovery\n> point. We came across this issue first in REL_12_STABLE and saw that it\n> also exists in devel.\n\nGood Catch! I can reproduce that.\n\n> When there are prepared transactions in an older timeline, in the\n> checkpointer, a call to CheckPointTwoPhase() and subsequently to\n> XlogReadTwoPhaseData() and subsequently to read_local_xlog_page() leads\n> to the following line:\n> \n> read_upto = GetXLogReplayRecPtr(&ThisTimeLineID);\n> \n> GetXLogReplayRecPtr() will change ThisTimeLineID to 1, in order to read\n> the two phase WAL records in the older timeline. This variable will\n> remain unchanged and the checkpointer ends up writing the checkpoint\n> record into the older WAL segment (when XLogBeginInsert() is called\n> within CreateCheckPoint(), the value is still 1). The value is not\n> synchronized as even if RecoveryInProgress() is called,\n> xlogctl->SharedRecoveryState is not RECOVERY_STATE_DONE\n> (SharedRecoveryInProgress = true in older versions) as the startup\n> process waits for the checkpointer inside RequestCheckpoint() (since\n> recovery_target_action='promote' involves a non-fast promotion). Thus,\n> InitXLOGAccess() is not called and the value of ThisTimeLineID is not\n> updated before the checkpoint record write.\n> \n> Since 1148e22a82e, GetXLogReplayRecPtr() is called with ThisTimeLineID\n> instead of a local variable, within read_local_xlog_page().\n> \n> PFA a small patch that fixes the problem by explicitly calling\n> InitXLOGAccess() in CheckPointTwoPhase(), after the two phase state data\n> is read, in order to update ThisTimeLineID to the latest timeline. 
It is\n> okay to call InitXLOGAccess() as it is lightweight and would mostly be\n> a no-op.\n\nIt is correct that read_local_xlog_page() changes ThisTimeLineID, but\nInitXLOGAccess() is correctly called in CreateCheckPoint:\n\n|\t/*\n|\t * An end-of-recovery checkpoint is created before anyone is allowed to\n|\t * write WAL. To allow us to write the checkpoint record, temporarily\n|\t * enable XLogInsertAllowed. (This also ensures ThisTimeLineID is\n|\t * initialized, which we need here and in AdvanceXLInsertBuffer.)\n|\t */\n|\tif (flags & CHECKPOINT_END_OF_RECOVERY)\n|\t\tLocalSetXLogInsertAllowed();\n\nIt seems to be sufficient to recover ThisTimeLineID from the checkpoint\nrecord to be written, as attached?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 03 Mar 2021 15:47:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On 03/03/2021 08:47, Kyotaro Horiguchi wrote:\n> At Tue, 2 Mar 2021 17:56:03 -0800, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote in\n>> When there are prepared transactions in an older timeline, in the\n>> checkpointer, a call to CheckPointTwoPhase() and subsequently to\n>> XlogReadTwoPhaseData() and subsequently to read_local_xlog_page() leads\n>> to the following line:\n>>\n>> read_upto = GetXLogReplayRecPtr(&ThisTimeLineID);\n>>\n>> GetXLogReplayRecPtr() will change ThisTimeLineID to 1, in order to read\n>> the two phase WAL records in the older timeline. This variable will\n>> remain unchanged and the checkpointer ends up writing the checkpoint\n>> record into the older WAL segment (when XLogBeginInsert() is called\n>> within CreateCheckPoint(), the value is still 1). The value is not\n>> synchronized as even if RecoveryInProgress() is called,\n>> xlogctl->SharedRecoveryState is not RECOVERY_STATE_DONE\n>> (SharedRecoveryInProgress = true in older versions) as the startup\n>> process waits for the checkpointer inside RequestCheckpoint() (since\n>> recovery_target_action='promote' involves a non-fast promotion). Thus,\n>> InitXLOGAccess() is not called and the value of ThisTimeLineID is not\n>> updated before the checkpoint record write.\n>>\n>> Since 1148e22a82e, GetXLogReplayRecPtr() is called with ThisTimeLineID\n>> instead of a local variable, within read_local_xlog_page().\n\nConfusing...\n\n>> PFA a small patch that fixes the problem by explicitly calling\n>> InitXLOGAccess() in CheckPointTwoPhase(), after the two phase state data\n>> is read, in order to update ThisTimeLineID to the latest timeline. 
It is\n>> okay to call InitXLOGAccess() as it is lightweight and would mostly be\n>> a no-op.\n> \n> It is correct that read_local_xlog_page() changes ThisTimeLineID, but\n> InitXLOGAccess() is correctly called in CreateCheckPoint:\n> \n> |\t/*\n> |\t * An end-of-recovery checkpoint is created before anyone is allowed to\n> |\t * write WAL. To allow us to write the checkpoint record, temporarily\n> |\t * enable XLogInsertAllowed. (This also ensures ThisTimeLineID is\n> |\t * initialized, which we need here and in AdvanceXLInsertBuffer.)\n> |\t */\n> |\tif (flags & CHECKPOINT_END_OF_RECOVERY)\n> |\t\tLocalSetXLogInsertAllowed();\n> \n> It seems to e suficcient to recover ThisTimeLineID from the checkpoint\n> record to be written, as attached?\n\nI think it should be reset even earlier, inside XlogReadTwoPhaseData() \nprobably. With your patch, doesn't the LogStandbySnapshot() call just \nabove where you're resetting ThisTimeLineID also write a WAL record \nwith incorrect timeline?\n\nEven better, can we avoid setting ThisTimeLineID in \nXlogReadTwoPhaseData() in the first place?\n\n- Heikki\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:46:42 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "\n\nOn 2021/03/03 17:46, Heikki Linnakangas wrote:\n> On 03/03/2021 08:47, Kyotaro Horiguchi wrote:\n>> At Tue, 2 Mar 2021 17:56:03 -0800, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote in\n>>> When there are prepared transactions in an older timeline, in the\n>>> checkpointer, a call to CheckPointTwoPhase() and subsequently to\n>>> XlogReadTwoPhaseData() and subsequently to read_local_xlog_page() leads\n>>> to the following line:\n>>>\n>>> read_upto = GetXLogReplayRecPtr(&ThisTimeLineID);\n>>>\n>>> GetXLogReplayRecPtr() will change ThisTimeLineID to 1, in order to read\n>>> the two phase WAL records in the older timeline. This variable will\n>>> remain unchanged and the checkpointer ends up writing the checkpoint\n>>> record into the older WAL segment (when XLogBeginInsert() is called\n>>> within CreateCheckPoint(), the value is still 1). The value is not\n>>> synchronized as even if RecoveryInProgress() is called,\n>>> xlogctl->SharedRecoveryState is not RECOVERY_STATE_DONE\n>>> (SharedRecoveryInProgress = true in older versions) as the startup\n>>> process waits for the checkpointer inside RequestCheckpoint() (since\n>>> recovery_target_action='promote' involves a non-fast promotion). Thus,\n>>> InitXLOGAccess() is not called and the value of ThisTimeLineID is not\n>>> updated before the checkpoint record write.\n>>>\n>>> Since 1148e22a82e, GetXLogReplayRecPtr() is called with ThisTimeLineID\n>>> instead of a local variable, within read_local_xlog_page().\n> \n> Confusing...\n> \n>>> PFA a small patch that fixes the problem by explicitly calling\n>>> InitXLOGAccess() in CheckPointTwoPhase(), after the two phase state data\n>>> is read, in order to update ThisTimeLineID to the latest timeline. 
It is\n>>> okay to call InitXLOGAccess() as it is lightweight and would mostly be\n>>> a no-op.\n>>\n>> It is correct that read_local_xlog_page() changes ThisTimeLineID, but\n>> InitXLOGAccess() is correctly called in CreateCheckPoint:\n>>\n>> |\t/*\n>> |\t * An end-of-recovery checkpoint is created before anyone is allowed to\n>> |\t * write WAL. To allow us to write the checkpoint record, temporarily\n>> |\t * enable XLogInsertAllowed. (This also ensures ThisTimeLineID is\n>> |\t * initialized, which we need here and in AdvanceXLInsertBuffer.)\n>> |\t */\n>> |\tif (flags & CHECKPOINT_END_OF_RECOVERY)\n>> |\t\tLocalSetXLogInsertAllowed();\n>>\n>> It seems to e suficcient to recover ThisTimeLineID from the checkpoint\n>> record to be written, as attached?\n> \n> I think it should be reset even earlier, inside XlogReadTwoPhaseData() probably. With your patch, doesn't the LogStandbySnapshot() call just above where you're ressetting ThisTimeLineID also write a WAL record with incorrect timeline?\n> \n> Even better, can we avoid setting ThisTimeLineID in XlogReadTwoPhaseData() in the first place?\n\nOr isn't it better to reset ThisTimeLineID in read_local_xlog_page(), i.e.,\nprevent read_local_xlog_page() from changing ThisTimeLineID? I'm not\nsure if that's possible, though.. In the future other functions that call\nread_local_xlog_page() during the promotion may appear. Fixing the issue\noutside read_local_xlog_page() may cause those functions to get\nthe same issue.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 3 Mar 2021 18:04:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On 2021/03/03 17:46, Heikki Linnakangas wrote:\n\n> I think it should be reset even earlier, inside XlogReadTwoPhaseData()\n> probably. With your patch, doesn't the LogStandbySnapshot() call just\n> above where you're ressetting ThisTimeLineID also write a WAL record\n> with incorrect timeline?\n\nAgreed.\n\nOn Wed, Mar 3, 2021 at 1:04 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> > Even better, can we avoid setting ThisTimeLineID in XlogReadTwoPhaseData() in the first place?\n>\n>\n>\n> Or isn't it better to reset ThisTimeLineID in read_local_xlog_page(), i.e.,\n> prevent read_local_xlog_page() from changing ThisTimeLineID? I'm not\n> sure if that's possible, though.. In the future other functions that calls\n> read_local_xlog_page() during the promotion may appear. Fixing the issue\n> outside read_local_xlog_page() may cause those functions to get\n> the same issue.\n\nI agree. We should fix the issue in read_local_xlog_page(). I have\nattached two different patches which do so:\nsaved_ThisTimeLineID.patch and pass_ThisTimeLineID.patch.\n\nThe former saves the value of the ThisTimeLineID before it gets changed\nin read_local_xlog_page() and resets it after ThisTimeLineID has been\nused later on in the code (by XLogReadDetermineTimeline()).\n\nThe latter removes occurrences of ThisTimeLineID from\nXLogReadDetermineTimeline() and introduces an argument currTLI to\nXLogReadDetermineTimeline() to be used in its stead.\n\nRegards,\nSoumyadeep",
"msg_date": "Wed, 3 Mar 2021 14:56:25 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "At Wed, 3 Mar 2021 14:56:25 -0800, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote in \n> On 2021/03/03 17:46, Heikki Linnakangas wrote:\n> \n> > I think it should be reset even earlier, inside XlogReadTwoPhaseData()\n> > probably. With your patch, doesn't the LogStandbySnapshot() call just\n> > above where you're ressetting ThisTimeLineID also write a WAL record\n> > with incorrect timeline?\n> \n> Agreed.\n\nRight.\n\n> On Wed, Mar 3, 2021 at 1:04 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> > > Even better, can we avoid setting ThisTimeLineID in XlogReadTwoPhaseData() in the first place?\n> >\n> >\n> >\n> > Or isn't it better to reset ThisTimeLineID in read_local_xlog_page(), i.e.,\n> > prevent read_local_xlog_page() from changing ThisTimeLineID? I'm not\n> > sure if that's possible, though.. In the future other functions that calls\n> > read_local_xlog_page() during the promotion may appear. Fixing the issue\n> > outside read_local_xlog_page() may cause those functions to get\n> > the same issue.\n> \n> I agree. We should fix the issue in read_local_xlog_page(). I have\n> attached two different patches which do so:\n> saved_ThisTimeLineID.patch and pass_ThisTimeLineID.patch.\n> \n> The former saves the value of the ThisTimeLineID before it gets changed\n> in read_local_xlog_page() and resets it after ThisTimeLineID has been\n> used later on in the code (by XLogReadDetermineTimeline()).\n> \n> The latter removes occurrences of ThisTimeLineID from\n> XLogReadDetermineTimeline() and introduces an argument currTLI to\n> XLogReadDetermineTimeline() to be used in its stead.\n\nread_local_xlog_page() works as a part of logical decoding and has\nresponsibility to update ThisTimeLineID properly. As the comment in\nthe function, it is the proper place to update ThisTimeLineID since we\nmiss a timeline change if we check it earlier and the function uses\nthe value just after. So we cannot change that behavior of the\nfunction. 
That is, neither of them seems to be the right fix.\n\nThe confusion here is that there are two ThisTimeLineID's here: the\nprevious TLI for read and the next TLI to write. Most of the\nfunction is needed to read a 2pc record, so the ways we can take here\nare:\n\n1. Somehow tell the function not to update ThisTimeLineID in specific\n cases. This can be done by xlogreader private data but this doesn't\n seem reasonable.\n\n2. Restore ThisTimeLineID after calling XLogReadRecord() in the\n *caller* side. This is what came up to me first, but I don't like\n this either, and I don't find a better way.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 04 Mar 2021 10:28:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 10:28:31AM +0900, Kyotaro Horiguchi wrote:\n> read_local_xlog_page() works as a part of logical decoding and has\n> responsibility to update ThisTimeLineID properly. As the comment in\n> the function, it is the proper place to update ThisTimeLineID since we\n> miss a timeline change if we check it earlier and the function uses\n> the value just after. So we cannot change that behavior of the\n> function. That is, neither of them doesn't seem to be the right fix.\n> \n> The confusion here is that there's two ThisTimeLineID's here. The\n> previous TLI for read and the next TLI to write. Most part of the\n> function is needed to read a 2pc recrod so the ways we can take here\n> is:\n> \n> 1. Somehow tell the function not to update ThisTimeLineID in specific\n> cases. This can be done by xlogreader private data but this doesn't\n> seem reasonable.\n> \n> 2. Restore ThisTimeLineID after calling XLogReadRecord() in the\n> *caller* side. This is what came up to me first but I don't like\n> this, too, but I don't find better fix. way.\n\nI have not looked in details at the solutions proposed here, but could\nit be possible to have a TAP test at least please? Seeing the script\nfrom the top of the thread, it should not be difficult to do so. I\nwould put that in a file different than 009_twophase.pl, within\nsrc/test/recovery/.\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 11:18:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "\n\nOn 2021/03/04 10:28, Kyotaro Horiguchi wrote:\n> At Wed, 3 Mar 2021 14:56:25 -0800, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote in\n>> On 2021/03/03 17:46, Heikki Linnakangas wrote:\n>>\n>>> I think it should be reset even earlier, inside XlogReadTwoPhaseData()\n>>> probably. With your patch, doesn't the LogStandbySnapshot() call just\n>>> above where you're ressetting ThisTimeLineID also write a WAL record\n>>> with incorrect timeline?\n>>\n>> Agreed.\n> \n> Right.\n> \n>> On Wed, Mar 3, 2021 at 1:04 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>>>> Even better, can we avoid setting ThisTimeLineID in XlogReadTwoPhaseData() in the first place?\n>>>\n>>>\n>>>\n>>> Or isn't it better to reset ThisTimeLineID in read_local_xlog_page(), i.e.,\n>>> prevent read_local_xlog_page() from changing ThisTimeLineID? I'm not\n>>> sure if that's possible, though.. In the future other functions that calls\n>>> read_local_xlog_page() during the promotion may appear. Fixing the issue\n>>> outside read_local_xlog_page() may cause those functions to get\n>>> the same issue.\n>>\n>> I agree. We should fix the issue in read_local_xlog_page(). I have\n>> attached two different patches which do so:\n>> saved_ThisTimeLineID.patch and pass_ThisTimeLineID.patch.\n>>\n>> The former saves the value of the ThisTimeLineID before it gets changed\n>> in read_local_xlog_page() and resets it after ThisTimeLineID has been\n>> used later on in the code (by XLogReadDetermineTimeline()).\n>>\n>> The latter removes occurrences of ThisTimeLineID from\n>> XLogReadDetermineTimeline() and introduces an argument currTLI to\n>> XLogReadDetermineTimeline() to be used in its stead.\n> \n> read_local_xlog_page() works as a part of logical decoding and has\n> responsibility to update ThisTimeLineID properly. 
As the comment in\n> the function, it is the proper place to update ThisTimeLineID since we\n> miss a timeline change if we check it earlier and the function uses\n> the value just after. So we cannot change that behavior of the\n> function. That is, neither of them seems to be the right fix.\n\nCould you tell me what actual issue happens if read_local_xlog_page() resets\nThisTimeLineID at the end? Some replication slot-related functions that use\nread_local_xlog_page() can be executed even during recovery. For example,\nyou mean that, when the timeline switches during recovery, those functions\nbehave incorrectly if ThisTimeLineID is reset?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 4 Mar 2021 14:57:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "At Thu, 4 Mar 2021 11:18:42 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 04, 2021 at 10:28:31AM +0900, Kyotaro Horiguchi wrote:\n> > read_local_xlog_page() works as a part of logical decoding and has\n> > responsibility to update ThisTimeLineID properly. As the comment in\n> > the function, it is the proper place to update ThisTimeLineID since we\n> > miss a timeline change if we check it earlier and the function uses\n> > the value just after. So we cannot change that behavior of the\n> > function. That is, neither of them seems to be the right fix.\n> > \n> > The confusion here is that there are two ThisTimeLineIDs here. The\n> > previous TLI for read and the next TLI to write. Most of the\n> > function is needed to read a 2pc record, so the ways we can take here\n> > are:\n> > \n> > 1. Somehow tell the function not to update ThisTimeLineID in specific\n> > cases. This can be done by xlogreader private data but this doesn't\n> > seem reasonable.\n> > \n> > 2. Restore ThisTimeLineID after calling XLogReadRecord() in the\n> > *caller* side. This is what came up to me first; I don't like\n> > this either, but I don't find a better way.\n> \n> I have not looked in details at the solutions proposed here, but could\n> it be possible to have a TAP test at least please? Seeing the script\n> from the top of the thread, it should not be difficult to do so. I\n> would put that in a file different than 009_twophase.pl, within\n> src/test/recovery/.\n\nYeah, agreed. It is needed as the final patch. That situation is\neasily caused. I'm not sure how to detect the corruption yet, though.\nI'll consider that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 04 Mar 2021 16:17:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "At Thu, 4 Mar 2021 14:57:13 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > read_local_xlog_page() works as a part of logical decoding and has\n> > responsibility to update ThisTimeLineID properly. As the comment in\n> > the function, it is the proper place to update ThisTimeLineID since we\n> > miss a timeline change if we check it earlier and the function uses\n> > the value just after. So we cannot change that behavior of the\n> > function. That is, neither of them seems to be the right fix.\n> \n> Could you tell me what actual issue happens if read_local_xlog_page()\n> resets\n> ThisTimeLineID at the end? Some replication slot-related functions\n> that use\n> read_local_xlog_page() can be executed even during recovery. For\n> example,\n> you mean that, when the timeline switches during recovery, those functions\n> behave incorrectly if ThisTimeLineID is reset?\n\nThe most significant point for me is that I'm not fully convinced that we\ncan safely (or validly) remove the code that maintains the variable\nfrom read_local_xlog_page.\n\n> * RecoveryInProgress() will update ThisTimeLineID when it first\n> * notices recovery finishes, so we only have to maintain it for the\n> * local process until recovery ends.\n\nread_local_xlog_page is *designed* to maintain ThisTimeLineID.\nCurrently it doesn't seem utilized but I think it's sufficiently\nreasonable that the function maintains ThisTimeLineID.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 04 Mar 2021 17:10:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "At Thu, 04 Mar 2021 16:17:34 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 4 Mar 2021 11:18:42 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > I have not looked in details at the solutions proposed here, but could\n> > it be possible to have a TAP test at least please? Seeing the script\n> > from the top of the thread, it should not be difficult to do so. I\n> > would put that in a file different than 009_twophase.pl, within\n> > src/test/recovery/.\n\nIsn't 004_timeline_switch.pl the place?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 04 Mar 2021 17:17:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "Hey all,\n\nI took a stab at a quick and dirty TAP test (my first ever). So it\ncan probably be improved a lot. Please take a look.\n\nOn Thu, Mar 04, 2021 at 10:28:31AM +0900, Kyotaro Horiguchi wrote:\n\n> 2. Restore ThisTimeLineID after calling XLogReadRecord() in the\n> *caller* side. This is what came up to me first; I don't like\n> this either, but I don't find a better way.\n\n+1 to this patch [1].\nThe above TAP test passes with this patch applied.\n\n[1] https://www.postgresql.org/message-id/attachment/119972/dont_change_thistimelineid.patch\n\nRegards,\nSoumyadeep",
"msg_date": "Thu, 4 Mar 2021 15:42:05 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "Hello,\n\nPFA version 2 of the TAP test. I removed the non-deterministic sleep\nand introduced retries until the WAL segment is archived and promotion\nis complete. Some additional tidying up too.\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Sat, 13 Mar 2021 12:29:06 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 05:10:36PM +0900, Kyotaro Horiguchi wrote:\n> read_local_xlog_page is *designed* to maintain ThisTimeLineID.\n> Currently it doesn't seem utilized but I think it's sufficiently\n> reasonable that the function maintains ThisTimeLineID.\n\nI don't quite follow this line of thought. ThisTimeLineID is\ndesigned to remain 0 while recovery is running in most processes\n(with the close exception of a WAL sender with a cascading setup,\nphysical or logical, of course), so why is there any business for\nread_local_xlog_page() to touch this field at all while in recovery to\nbegin with?\n\nI equally find it confusing that XLogReadDetermineTimeline() relies on a\nspecific value of ThisTimeLineID in its own logic, while it clearly\nstates that all its callers have to read the current active TLI\nbeforehand. So I think that the existing logic is pretty weak, and\nthat resetting the field is an incorrect approach? It seems to me\nthat we had better not change ThisTimeLineID actively while in\nrecovery in this code path and just let others do the job, like\nRecoveryInProgress() once recovery finishes, or\nGetStandbyFlushRecPtr() for a WAL sender. And finally, we should\nstore the current TLI used for replay in a separate variable that gets\npassed down to XLogReadDetermineTimeline() as an argument.\n\nWhile going through it, I have simplified a bit the proposed TAP tests\n(thanks for replacing the sleep() call, Soumyadeep. This would have\nmade the test slower for nothing on fast machines, and it would cause\nfailures on very slow machines).\n\nThe attached fixes the original issue for me, keeping all the records\nin their correct timeline. And I have not been able to break\ncascading setups. If it happens that such cases actually break, we\nhave holes in our existing test coverage that should be improved. I\ncannot see anything fancy missing on this side, though.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Sun, 14 Mar 2021 17:59:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "At Sun, 14 Mar 2021 17:59:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 04, 2021 at 05:10:36PM +0900, Kyotaro Horiguchi wrote:\n> > read_local_xlog_page is *designed* to maintain ThisTimeLineID.\n> > Currently it doesn't seem utilized but I think it's sufficiently\n> > reasonable that the function maintains ThisTimeLineID.\n> \n> I don't quite follow this line of thought. ThisTimeLineID is\n> designed to remain 0 while recovery is running in most processes\n> (with the close exception of a WAL sender with a cascading setup,\n\nThe reason for the \"0\" is that they just aren't interested in the value.\nCheckpointer temporarily uses it, then restores it to 0 soon after.\n\n> physical or logical, of course), so why is there any business for\n> read_local_xlog_page() to touch this field at all while in recovery to\n> begin with?\n\nLogical decoding stuff is (I think) designed to turn any backend into\na walsender, which may need to maintain ThisTimeLineID. It seems to\nme that logical decoding stuff intends to maintain ThisTimeLineID of\nsuch backends when reading a WAL record. logical_read_xlog_page also\nupdates ThisTimeLineID, and pg_logical_slot_get_changes_guts(),\npg_replication_slot_advance() (and maybe other functions) update\nThisTimeLineID. So it is natural that read_local_xlog_page() updates\nit since it is intended to be used in logical decoding plugins.\n\n> I equally find it confusing that XLogReadDetermineTimeline() relies on a\n> specific value of ThisTimeLineID in its own logic, while it clearly\n> states that all its callers have to read the current active TLI\n> beforehand. So I think that the existing logic is pretty weak, and\n> that resetting the field is an incorrect approach? It seems to me\n\nIt is initialized by IdentifySystem(). And the logical walsender\nintends to maintain ThisTimeLineID by subsequent calls to\nGetStandbyFlushRecPtr(), which happen in logical_read_xlog_page().\n\n> that we had better not change ThisTimeLineID actively while in\n> recovery in this code path and just let others do the job, like\n> RecoveryInProgress() once recovery finishes, or\n> GetStandbyFlushRecPtr() for a WAL sender. And finally, we should\n> store the current TLI used for replay in a separate variable that gets\n> passed down to XLogReadDetermineTimeline() as an argument.\n\nI agree that it's better that the replay TLI is stored in a separate\nvariable. That is what I was complaining about in the previous mails. (It\nmight not have been so obvious, though..)\n\n> While going through it, I have simplified a bit the proposed TAP tests\n> (thanks for replacing the sleep() call, Soumyadeep. This would have\n> made the test slower for nothing on fast machines, and it would cause\n> failures on very slow machines).\n> \n> The attached fixes the original issue for me, keeping all the records\n> in their correct timeline. And I have not been able to break\n> cascading setups. If it happens that such cases actually break, we\n> have holes in our existing test coverage that should be improved. I\n> cannot see anything fancy missing on this side, though.\n> \n> Any thoughts?\n\nI don't think there's any actual user of the function for the\npurpose, but.. Anyway, if we remove the update of ThisTimeLineID from\nread_local_xlog_page, I think we should remove or rewrite the\nfollowing comment for the function. It no longer works as written in\nthe catchphrase.\n\n> * Public because it would likely be very helpful for someone writing another\n> * output method outside walsender, e.g. in a bgworker.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 15 Mar 2021 15:01:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 03:01:09PM +0900, Kyotaro Horiguchi wrote:\n> Logical decoding stuff is (I think) designed to turn any backend into\n> a walsender, which may need to maintain ThisTimeLineID. It seems to\n> me that logical decoding stuff intends to maintain ThisTimeLineID of\n> such backends when reading a WAL record. logical_read_xlog_page also\n> updates ThisTimeLineID, and pg_logical_slot_get_changes_guts(),\n> pg_replication_slot_advance() (and maybe other functions) update\n> ThisTimeLineID. So it is natural that read_local_xlog_page() updates\n> it since it is intended to be used in logical decoding plugins.\n\nLogical decoding contexts cannot be created while in recovery as per\nCheckLogicalDecodingRequirements(), and as mentioned not everything is\nin place to allow that. FWIW, I think that it is just confusing for\npg_replication_slot_advance() and pg_logical_slot_get_changes_guts()\nto update it, and we just look for the latest value each time it is\nnecessary when reading a new WAL page.\n\n> It is initialized by IdentifySystem(). And the logical walsender\n> intends to maintain ThisTimeLineID by subsequent calls to\n> GetStandbyFlushRecPtr(), which happen in logical_read_xlog_page().\n\nI don't understand this part about logical_read_xlog_page(), though.\nDo you mean a different routine or a different code path?\n\n> I agree that it's better that the replay TLI is stored in a separate\n> variable. That is what I was complaining about in the previous mails. (It\n> might not have been so obvious, though..)\n\nOkay. I understood that this was what you implied.\n\n> I don't think there's any actual user of the function for the\n> purpose, but.. Anyway, if we remove the update of ThisTimeLineID from\n> read_local_xlog_page, I think we should remove or rewrite the\n> following comment for the function. It no longer works as written in\n> the catchphrase.\n\nWho knows. We cannot know all the users of this code, and the API is\npublic.\n\n> > * Public because it would likely be very helpful for someone writing another\n> > * output method outside walsender, e.g. in a bgworker.\n\nI don't see a reason to remove this comment as this routine can still\nbe useful for many purposes. What kind of rewording do you have in\nmind?\n--\nMichael",
"msg_date": "Mon, 15 Mar 2021 16:38:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 04:38:08PM +0900, Michael Paquier wrote:\n> On Mon, Mar 15, 2021 at 03:01:09PM +0900, Kyotaro Horiguchi wrote:\n>> Logical decoding stuff is (I think) designed to turn any backend into\n>> a walsender, which may need to maintain ThisTimeLineID. It seems to\n>> me that logical decoding stuff indents to maintain ThisTimeLineID of\n>> such backends at reading a WAL record. logical_read_xlog_page also\n>> updates ThisTimeLineID and pg_logical_slot_get_changes_guts(),\n>> pg_replication_slot_advance() (and maybe other functions) updates\n>> ThisTimeLineID. So it is natural that local_read_xlog_page() updates\n>> it since it is intended to be used used in logical decoding plugins.\n>\n> Logical decoding contexts cannot be created while in recovery as per\n> CheckLogicalDecodingRequirements(), and as mentioned not everything\n> is\n> in place to allow that. FWIW, I think that it is just confusing for\n> pg_replication_slot_advance() and pg_logical_slot_get_changes_guts()\n> to update it, and we just look for the latest value each time it is\n> necessary when reading a new WAL page.\n\nStudying some history today, having read_local_xlog_page() directly\nupdate ThisTimeLineID has been extensively discussed here back in 2017\nto attempt to introduce logical decoding on standbys (1148e22a):\nhttps://www.postgresql.org/message-id/CAMsr%2BYEVmBJ%3DdyLw%3D%2BkTihmUnGy5_EW4Mig5T0maieg_Zu%3DXCg%40mail.gmail.com\n\nCurrently with HEAD and back branches, nothing would be broken as\nlogical contexts cannot exist in recovery. Still it would be easy\nto miss the new behavior for anybody attempting to work more on this\nfeature in the future if we change read_local_xlog_page to not update\nThisTimeLineID anymore. 
Resetting the value of ThisTimeLineID in\nread_local_xlog_page() does not seem completely right either with this\nargument, as there could be some custom code relying on the existing\nbehavior of read_local_xlog_page() to maintain ThisTimeLineID.\n\nHmmm. I am wondering whether the best answer for the moment would not\nbe to save and reset ThisTimeLineID just in XlogReadTwoPhaseData(), as\nthat's the local change that uses read_local_xlog_page().\n\nThe state of the code is really confusing on HEAD, and I'd like to\nthink that the best thing we could do in the long-term is to make the\nlogical decoding path not rely on ThisTimeLineID at all and decouple\nall that, putting the code in a state sane enough so that we don't\nfinish with similar errors as what is discussed on this thread. That\nwould be work for a different patch though, not for stable\nbranches. And seeing some slot and advancing functions update\ndirectly ThisTimeLineID is no good either..\n\nAny thoughts?\n--\nMichael",
"msg_date": "Wed, 17 Mar 2021 17:09:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 05:09:50PM +0900, Michael Paquier wrote:\n> Currently with HEAD and back branches, nothing would be broken as\n> logical contexts cannot exist in recovery. Still it would be easy\n> to miss the new behavior for anybody attempting to work more on this\n> feature in the future if we change read_local_xlog_page to not update\n> ThisTimeLineID anymore. Resetting the value of ThisTimeLineID in\n> read_local_xlog_page() does not seem completely right either with this\n> argument, as they could be some custom code relying on the existing\n> behavior of read_local_xlog_page() to maintain ThisTimeLineID.\n\nI was looking at uses of ThisTimeLineID in the wild, and could not\nfind it getting checked or used actually in backend-side code that\ninvolved the WAL reader facility. Even if it brings confidence, it\ndoes not mean that it is not used somewhere :/\n\n> Hmmm. I am wondering whether the best answer for the moment would not\n> be to save and reset ThisTimeLineID just in XlogReadTwoPhaseData(), as\n> that's the local change that uses read_local_xlog_page().\n\nAnd attached is the patch able to achieve that. At least it is\nsimple, and does not break the actual assumptions this callback relies\non. This is rather weak though if there are errors as this is out of\ncritical sections, still the disease is worse. I have double-checked\nall the existing backend code that uses XLogReadRecord(), and did not\nnotice any code paths with issues similar to this one.\n\n> The state of the code is really confusing on HEAD, and I'd like to\n> think that the best thing we could do in the long-term is to make the\n> logical decoding path not rely on ThisTimeLineID at all and decouple\n> all that, putting the code in a state sane enough so as we don't\n> finish with similar errors as what is discussed on this thread. That\n> would be a work for a different patch though, not for stable\n> branches. 
And seeing some slot and advancing functions update\n> directly ThisTimeLineID is no good either..\n\nHowever, I'd like to think that we should completely untie the\ndependency on ThisTimeLineID in any page read callbacks in core in the\nlong term, and potentially clean up any assumptions behind timeline\njumps while in recovery for logical contexts, as that cannot happen.\nAt this stage of the 14 dev cycle, that would be material for 15~, but\nI also have to wonder if there is work going on to support logical\ndecoding on standbys, in particular if this would really rely on\nThisTimeLineID.\n\nThoughts are welcome.\n--\nMichael",
"msg_date": "Thu, 18 Mar 2021 12:56:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 12:56:12PM +0900, Michael Paquier wrote:\n> I was looking at uses of ThisTimeLineID in the wild, and could not\n> find it getting checked or used actually in backend-side code that\n> involved the WAL reader facility. Even if it brings confidence, it\n> does not mean that it is not used somewhere :/\n\nI have been working on that over the last couple of days, and applied\na fix down to 10. One thing that I did not like in the test was the\nuse of compare() to check if the contents of the WAL segment before\nand after the timeline jump remained the same, as this would have been\nunstable with any concurrent activity. Instead, I have added a phase\nat the end of the test with an extra checkpoint and recovery triggered\nonce, which is enough to reproduce the PANIC reported at the top of\nthe thread.\n\nI'll look into clarifying the use of ThisTimeLineID within those\nWAL reader callbacks, because this is really bug-prone in the long\nterm... This requires some coordination with the recent work aimed at\nadding some logical decoding support in standbys, though.\n--\nMichael",
"msg_date": "Mon, 22 Mar 2021 09:07:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have been working on that over the last couple of days, and applied\n> a fix down to 10. One thing that I did not like in the test was the\n> use of compare() to check if the contents of the WAL segment before\n> and after the timeline jump remained the same as this would have been\n> unstable with any concurrent activity. Instead, I have added a phase\n> at the end of the test with an extra checkpoint and recovery triggered\n> once, which is enough to reproduce the PANIC reported at the top of\n> the thread.\n\nBuildfarm member hornet just reported a failure in this test:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2021-06-27%2013%3A40%3A57\n\nthe critical bit being\n\n2021-06-27 17:35:46.504 UTC [11862234:1] [unknown] LOG: connection received: host=[local]\n2021-06-27 17:35:46.505 UTC [18350260:12] LOG: recovering prepared transaction 734 from shared memory\nTRAP: FailedAssertion(\"TransactionIdPrecedesOrEquals(TransactionXmin, RecentXmin)\", File: \"procarray.c\", Line: 2492, PID: 11862234)\n2021-06-27 17:35:46.511 UTC [14876838:4] LOG: database system is ready to accept connections\n\nIt's not clear whether this is a problem with the test case or an\nactual server bug, but I'm leaning to the latter theory. My gut\nfeel is it's some problem in the \"snapshot scalability\" work. It\ndoesn't look the same as the known open issue, but maybe related?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Jun 2021 14:35:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "I wrote:\n> Buildfarm member hornet just reported a failure in this test:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2021-06-27%2013%3A40%3A57\n> It's not clear whether this is a problem with the test case or an\n> actual server bug, but I'm leaning to the latter theory. My gut\n> feel is it's some problem in the \"snapshot scalability\" work. It\n> doesn't look the same as the known open issue, but maybe related?\n\nHmm, the plot thickens. I scraped the buildfarm logs for similar-looking\nassertion failures back to last August, when the snapshot scalability\npatches went in. The first such failure is not until 2021-03-24\n(see attachment), and they all look to be triggered by\n023_pitr_prepared_xact.pl. It sure looks like recovering a prepared\ntransaction creates a transient state in which a new backend will\ncompute a broken snapshot.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 27 Jun 2021 15:13:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
},
{
"msg_contents": "I wrote:\n> It sure looks like recovering a prepared\n> transaction creates a transient state in which a new backend will\n> compute a broken snapshot.\n\nOh, after further digging this is the same issue discussed here:\n\nhttps://www.postgresql.org/message-id/flat/20210422203603.fdnh3fu2mmfp2iov%40alap3.anarazel.de\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 27 Jun 2021 15:20:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PITR promote bug: Checkpointer writes to older timeline"
}
]
[
{
"msg_contents": "Hi,\n\nI noticed there is a buildfarm failure on crake; it fails with the\nfollowing error:\nMar 02 21:22:56 ./src/test/recovery/t/001_stream_rep.pl: Variable\ndeclared in conditional statement at line 88, column 2. Declare\nvariables outside of the condition.\n([Variables::ProhibitConditionalDeclarations] Severity: 5)\n\nI felt the variable declaration and the assignment need to be split,\nas the assignment involves a conditional statement. I have attached a\npatch which will help in fixing the problem.\nThoughts?\n\nRegards,\nVignesh",
"msg_date": "Wed, 3 Mar 2021 08:46:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Buildfarm failure in crake"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 08:46:41AM +0530, vignesh C wrote:\n> I noticed there is buildfarm failure in crake, it fails with the\n> following error:\n> Mar 02 21:22:56 ./src/test/recovery/t/001_stream_rep.pl: Variable\n> declared in conditional statement at line 88, column 2. Declare\n> variables outside of the condition.\n> ([Variables::ProhibitConditionalDeclarations] Severity: 5)\n> \n> I felt the variable declaration and the assignment need to be split as\n> it involves conditional statements. I have attached a patch which will\n> help in fixing the problem.\n> Thoughts?\n\nTom has fixed that with d422a2a.\n--\nMichael",
"msg_date": "Wed, 3 Mar 2021 15:03:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Buildfarm failure in crake"
}
]
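The perlcritic policy named in the thread above forbids a `my` declaration whose execution is made conditional by a statement modifier, because the variable's state after such a line is undefined behavior in Perl. A minimal sketch of the before/after shape of the fix (the names `$value`, `compute()`, and `$condition` are hypothetical placeholders, not the actual code from 001_stream_rep.pl):

```perl
# Rejected by Perl::Critic (Variables::ProhibitConditionalDeclarations):
# my $value = compute() if $condition;

# Accepted: declare the variable unconditionally, assign conditionally.
my $value;
$value = compute() if $condition;
```

This is exactly the split between declaration and assignment that the patch in the thread proposes.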
[
{
"msg_contents": "Hello hackers,\n\nI have a question regarding distributing the filter\nclause(baserestrictinfo) of one table into another table(Keys belong to the\nsame EquivalenceClass).\n\nIn the following query, why PG is not copying the filter (t1.pk=1 OR t1.pk=2)\ninto t2's baserestrictinfo? I believe PG copies those filters which are\nOpExpr and not BoolExpr, but still wanted to know what would be the risks\nif it gets copied.\n\n SELECT * FROM\n t1 INNER JOIN t2 ON (t1.pk = t2.pk)\n WHERE t1.pk = 1 OR t1.pk = 2;\n\nThe filters are effectively: (t1.pk = t2.pk) AND (t1.pk = 1 OR t1.pk = 2).\nCan we expand this into (t1.pk = t2.pk) AND (t1.pk = 1 OR t1.pk = 2) AND (\nt2.pk = 1 OR t2.pk = 2)?\n\nThe above query is resulting in a Query Plan like:\n [Scan(t1, with filter pk = 1 OR pk = 2)] Join [Scan(t2, with Parameter\nt1.pk = t2.pk)]\n\nIf PG copies t1's filter into t2, it could've been like this:\n [Scan(t1, with filter pk = 1 OR pk = 2)] Join [Scan(t2, with *filter pk =\n1 OR pk = 2*)]\n\nWith Postgres Table Partition, this results in more performance issues.\nUnneeded partitions need to be scanned, since the filters are not getting\ncopied.\n\n\nActually, in my case, both t1 and t2 are HASH partitioned with the key\n(pk), and with the same number of partitions and range.\nAnd running the same query results in reading only 2 partitions of t1, and\nall of the partitions of t2.\nIf we could copy the filter into t2 as well, then only 2 partitions of t2\nwould be required to be read.\n\nWhat could be the reasons for NOT copying the t1's filters into t2's\nbaserestrictinfo? If we copy that, could that result in wrong results?\n\nP.S. PlanTree for some sample queries is attached for reference.\n\nThanks,\nMohamed Insaf K",
"msg_date": "Wed, 3 Mar 2021 12:28:04 +0530",
"msg_from": "Mohamed Insaf <insafmpm@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why OR-clauses not getting copied into baserestrictinfo of another\n table whose columns are in the same EquivalenceClass?"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 23:26, Mohamed Insaf <insafmpm@gmail.com> wrote:\n> I have a question regarding distributing the filter clause(baserestrictinfo) of one table into another table(Keys belong to the same EquivalenceClass).\n>\n> In the following query, why PG is not copying the filter (t1.pk=1 OR t1.pk=2) into t2's baserestrictinfo? I believe PG copies those filters which are OpExpr and not BoolExpr, but still wanted to know what would be the risks if it gets copied.\n>\n> SELECT * FROM\n> t1 INNER JOIN t2 ON (t1.pk = t2.pk)\n> WHERE t1.pk = 1 OR t1.pk = 2;\n>\n> The filters are effectively: (t1.pk = t2.pk) AND (t1.pk = 1 OR t1.pk = 2). Can we expand this into (t1.pk = t2.pk) AND (t1.pk = 1 OR t1.pk = 2) AND (t2.pk = 1 OR t2.pk = 2)?\n\nThere's not really any reason we don't do this other than nobody has\nimplemented it yet. In 2015 I did propose [1] we do something a bit\nsmarter with range quals and push those into EquivalenceClasses too,\nbut there was some concern about duplication of other quals that might\nalready exist in the EquivalenceClass and additional evaluations of\nredundant quals. I don't think there are any problems there we\ncouldn't code around.\n\nIIRC there was also some concern about the effort required to find a\ngiven Expr in an EquivalenceClass. That might be a little more\nefficient to do now as we could pull_varnos from the Expr and only\nlook at each varno's RelOptInfo->eclass_indexes. However, we might\nnot have built the eclass_indexes by the time we need to do this.\n\nAlso, we'd still need to trawl through each EquivalenceMember which\nwould be slow for ECs with lots of members. It's not been touched in\na while, but in [2] there was some WIP with some infrastructure that\nwould help to speed up finding an Expr within an EquivalenceClass.\n\nMore recently (probably 2-3 years) Tom did mention about the\npossibility of putting IN(const1, const2) type Exprs in\nEquivalenceClass. That's pretty similar to your case. 
I can't find\nthe thread for that.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/30810.1449335261%40sss.pgh.pa.us#906319f5e212fc3a6a682f16da079f04\n[2] https://www.postgresql.org/message-id/flat/CA%2BTgmoZL6KaVGWCgwCziXiCMr3tNvf1hhrHDjjYAF5CRss2ksg%40mail.gmail.com#6423828089e65655005ae8af526e93ab\n\n\n",
"msg_date": "Thu, 4 Mar 2021 00:08:04 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why OR-clauses not getting copied into baserestrictinfo of\n another table whose columns are in the same EquivalenceClass?"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 3:56 PM Mohamed Insaf <insafmpm@gmail.com> wrote:\n>\n> Hello hackers,\n>\n> I have a question regarding distributing the filter clause(baserestrictinfo) of one table into another table(Keys belong to the same EquivalenceClass).\n>\n> In the following query, why PG is not copying the filter (t1.pk=1 OR t1.pk=2) into t2's baserestrictinfo? I believe PG copies those filters which are OpExpr and not BoolExpr, but still wanted to know what would be the risks if it gets copied.\n>\n> SELECT * FROM\n> t1 INNER JOIN t2 ON (t1.pk = t2.pk)\n> WHERE t1.pk = 1 OR t1.pk = 2;\n>\n> The filters are effectively: (t1.pk = t2.pk) AND (t1.pk = 1 OR t1.pk = 2). Can we expand this into (t1.pk = t2.pk) AND (t1.pk = 1 OR t1.pk = 2) AND (t2.pk = 1 OR t2.pk = 2)?\n>\n> The above query is resulting in a Query Plan like:\n> [Scan(t1, with filter pk = 1 OR pk = 2)] Join [Scan(t2, with Parameter t1.pk = t2.pk)]\n>\n> If PG copies t1's filter into t2, it could've been like this:\n> [Scan(t1, with filter pk = 1 OR pk = 2)] Join [Scan(t2, with *filter pk = 1 OR pk = 2*)]\n>\n> With Postgres Table Partition, this results in more performance issues. Unneeded partitions need to be scanned, since the filters are not getting copied.\n>\n>\n> Actually, in my case, both t1 and t2 are HASH partitioned with the key (pk), and with the same number of partitions and range.\n> And running the same query results in reading only 2 partitions of t1, and all of the partitions of t2.\n> If we could copy the filter into t2 as well, then only 2 partitions of t2 would be required to be read.\n\nIf you have these tables partitioned similarly, partition-wise join\nshould take care of eliminating the partitions in t2. Partition\npruning will prune the partitions in t1. Partition-wise join will\ncreate joins between unpruned partitions of t1 with matching\npartitions of t2. 
Final plan will not have scans on partitions of t2\nwhich do not match unpruned partitions of t1, effectively pruning t2\nas well. You will need to set enable_partitionwise_join = true for\nthat.\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 3 Mar 2021 17:51:31 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Why OR-clauses not getting copied into baserestrictinfo of\n another table whose columns are in the same EquivalenceClass?"
}
] |
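The transformation discussed in the thread above — deriving an OR-of-equalities restriction for the other members of an EquivalenceClass — can be sketched outside the planner. Everything below (the function name, the tuple representation of quals) is hypothetical illustration, not PostgreSQL code:

```python
# Hypothetical sketch: given an equivalence class {t1.pk, t2.pk} and a
# restriction (t1.pk = 1 OR t1.pk = 2), emit the equivalent restriction
# for every other member, e.g. (t2.pk = 1 OR t2.pk = 2).
def derive_implied_quals(eclass, restricted_var, allowed_consts):
    """Return, per other equivalence-class member, a list of equality
    quals whose disjunction is implied by the original OR clause."""
    derived = {}
    for member in eclass:
        if member != restricted_var:
            derived[member] = [(member, "=", c) for c in allowed_consts]
    return derived

quals = derive_implied_quals({"t1.pk", "t2.pk"}, "t1.pk", [1, 2])
```

With partitioned `t1` and `t2`, such a derived qual is what would let partition pruning discard the unneeded partitions of `t2` directly, independent of partition-wise join.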
[
{
"msg_contents": "Hi all,\n\nEach time I do development on Windows, I get annoyed by the fact that\nit is not easy to run individual test scripts in the same way as we do\non any other platforms, using PROVE_TESTS, or even PROVE_FLAGS. And\nthere have been recent complaints about not being able to do that.\n\nPlease find attached a patch to support both variables, with some\ndocumentation. Using a terminal on Windows, one can set those\nvariables using just.. \"set\", say:\nset PROVE_FLAGS=--timer\nset PROVE_TESTS=t/020*.pl t/010*.pl\n\nNote the absence of quotes for the second one, so as it is possible to\napply cleanly glob() to each element passed down.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 3 Mar 2021 17:19:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 05:19:22PM +0900, Michael Paquier wrote:\n> \n> Each time I do development on Windows, I get annoyed by the fact that\n> it is not easy to run individual test scripts in the same way as we do\n> on any other platforms, using PROVE_TESTS, or even PROVE_FLAGS. And\n> there have been recent complaints about not being able to do that.\n> \n> Please find attached a patch to support both variables, with some\n> documentation. Using a terminal on Windows, one can set those\n> variables using just.. \"set\", say:\n> set PROVE_FLAGS=--timer\n> set PROVE_TESTS=t/020*.pl t/010*.pl\n> \n> Note the absence of quotes for the second one, so as it is possible to\n> apply cleanly glob() to each element passed down.\n> \n> Thoughts?\n\n+1, I heavily rely on that and I can imagine how hard it is to develop a patch\non Windows without it.\n\nPatch LGTM, although I don't have any Windows environment to test it.\n\n\n",
"msg_date": "Wed, 3 Mar 2021 18:26:19 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 11:25 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Wed, Mar 03, 2021 at 05:19:22PM +0900, Michael Paquier wrote:\n> >\n> > Each time I do development on Windows, I get annoyed by the fact that\n> > it is not easy to run individual test scripts in the same way as we do\n> > on any other platforms, using PROVE_TESTS, or even PROVE_FLAGS. And\n> > there have been recent complaints about not being able to do that.\n> >\n> > Please find attached a patch to support both variables, with some\n> > documentation. Using a terminal on Windows, one can set those\n> > variables using just.. \"set\", say:\n> > set PROVE_FLAGS=--timer\n> > set PROVE_TESTS=t/020*.pl t/010*.pl\n> >\n> > Note the absence of quotes for the second one, so as it is possible to\n> > apply cleanly glob() to each element passed down.\n> >\n> > Thoughts?\n>\n> +1, I heavily rely on that and I can imagine how hard it's to develop a\n> patch\n> on windows without it.\n>\n> Patch LGTM, althouth I don't have any Windows environnment to test it.\n>\n\n+1 for the functionality. I have tested it and works as expected in my\nenvironment (Windows 10 + VS 2019).\n\nJust a comment about the documentation:\n\n+ <para>\n+ The TAP tests run with <command>vcregress</command> support the\n+ environment variables <varname>PROVE_TESTS</varname>, that is expanded\n+ automatically using the name patterns given, and\n+ <varname>PROVE_FLAGS</varname>. These can for instance be set\n+ on a Windows terminal as follows, before running\n+ <command>vcregress</command>:\n+<programlisting>\n+set PROVE_FLAGS=--timer\n+set PROVE_TESTS=t/020*.pl t/010*.pl\n+</programlisting>\n+ </para>\n\nThis seems to me as if setting the variables in the shell is the proposed\nway to do so. In the previous doc point we do the same with the buildenv.pl\nfile. 
It looks inconsistent, as if it was one way or the other, when it\ncould be either.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 3 Mar 2021 20:59:30 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 08:59:30PM +0100, Juan José Santamaría Flecha wrote:\n> This seems to me as if setting the variables in the shell is the proposed\n> way to do so. In the previous doc point we do the same with the buildenv.pl\n> file. It looks inconsistent, as if it was one way or the other, when it\n> could be either.\n\nOkay, that makes sense. PROVE_TESTS is a runtime-only dependency so\nmy guess is that most people would set that directly on the command\nprompt like I do, still it can be useful to enforce that in build.pl.\nPROVE_FLAGS can be both, but you are right that setting it in build.pl\nwould be the most common approach. So let's document both. Attached\nis a proposal for that, what do you think? I have extended the\nexample of PROVE_FLAGS to show how to set up more --jobs.\n--\nMichael",
"msg_date": "Thu, 4 Mar 2021 11:11:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 3:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Mar 03, 2021 at 08:59:30PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > This seems to me as if setting the variables in the shell is the proposed\n> > way to do so. In the previous doc point we do the same with the\n> buildenv.pl\n> > file. It looks inconsistent, as if it was one way or the other, when it\n> > could be either.\n>\n> Okay, that makes sense. PROVE_TESTS is a runtime-only dependency so\n> my guess is that most people would set that directly on the command\n> prompt like I do, still it can be useful to enforce that in build.pl.\n> PROVE_FLAGS can be both, but you are right that setting it in build.pl\n> would be the most common approach. So let's document both. Attached\n> is a proposal for that, what do you think? I have extended the\n> example of PROVE_FLAGS to show how to set up more --jobs.\n>\n\nLGTM.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 4 Mar 2021 18:37:56 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 06:37:56PM +0100, Juan José Santamaría Flecha wrote:\n> LGTM.\n\nThanks. I have tested more combinations of options, came back a bit\nto the documentation for buildenv.pl where copying the values from the\ndocs would result in a warning, and applied it.\n--\nMichael",
"msg_date": "Fri, 5 Mar 2021 10:28:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 9:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Mar 04, 2021 at 06:37:56PM +0100, Juan José Santamaría Flecha wrote:\n> > LGTM.\n>\n> Thanks. I have tested more combinations of options, came back a bit\n> to the documentation for buildenv.pl where copying the values from the\n> docs would result in a warning, and applied it.\n\nGreat news!\n\n\n",
"msg_date": "Fri, 5 Mar 2021 16:14:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for PROVE_FLAGS and PROVE_TESTS in MSVC scripts"
}
] |
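As a rough illustration of the PROVE_TESTS handling the thread settles on — an unquoted, space-separated value whose elements are each expanded with glob() — here is a hypothetical Python sketch. The real implementation is Perl inside vcregress.pl, and the fallback-to-literal behavior is an assumption, not taken from the patch:

```python
import glob

def expand_prove_tests(value):
    """Split a PROVE_TESTS-style value on whitespace and glob each
    element, as the unquoted 'set PROVE_TESTS=t/020*.pl t/010*.pl'
    form allows."""
    tests = []
    for pattern in value.split():
        matches = sorted(glob.glob(pattern))
        # pass the literal pattern through when nothing matches, so the
        # test runner can report the missing file (assumed behavior)
        tests.extend(matches if matches else [pattern])
    return tests
```

This also shows why the value must not be quoted in the `set` command: a quoted value would arrive as one element and the per-pattern glob() would not apply cleanly.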
[
{
"msg_contents": "Hi,\n\nPlaying with a large number of partitions I hit the limit of 65000\ntable entries in a query plan:\n\nif (IS_SPECIAL_VARNO(list_length(glob->finalrtable)))\n\tereport(ERROR,\n\t\t(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n\t\terrmsg(\"too many range table entries\")));\n\nPostgres works well with so many partitions.\nThe constants INNER_VAR, OUTER_VAR, INDEX_VAR are used as values of the\nvariable 'var->varno', which is of integer type. As I see, they were introduced\nwith commit 1054097464, authored by Marc G. Fournier in 1996.\nThe value 65000 was relevant to the size of the int type at that time.\n\nMaybe we should change these values to INT_MAX? (See the patch in attachment.)\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Wed, 3 Mar 2021 11:29:12 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Increase value of OUTER_VAR"
},
{
"msg_contents": "On Wed, 3 Mar 2021 at 21:29, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:\n>\n> Playing with a large value of partitions I caught the limit with 65000\n> table entries in a query plan:\n>\n> if (IS_SPECIAL_VARNO(list_length(glob->finalrtable)))\n> ereport(ERROR,\n> (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> errmsg(\"too many range table entries\")));\n>\n> Postgres works well with so many partitions.\n> The constants INNER_VAR, OUTER_VAR, INDEX_VAR are used as values of the\n> variable 'var->varno' of integer type. As I see, they were introduced\n> with commit 1054097464 authored by Marc G. Fournier, in 1996.\n> Value 65000 was relevant to the size of the int type at that time.\n>\n> Maybe we will change these values to INT_MAX? (See the patch in attachment).\n\nI don't really see any reason not to increase these a bit, but I'd\nrather we kept them at some realistic maximum rather than all-out went\nto INT_MAX.\n\nI imagine a gap was left between 65535 and 65000 to allow space for\nmore special varno in the future. We did get INDEX_VAR since then, so\nit seems like it was probably a good idea to leave a gap.\n\nThe problem I see with going close to INT_MAX is that the ERROR you\nmention is unlikely to work correctly since a list_length() will never\nget close to having INT_MAX elements before palloc() would exceed\nMaxAllocSize for the elements array.\n\nSomething like 1 million seems like a more realistic limit to me.\nThat might still be on the high side, but it'll likely mean we'd not\nneed to revisit this for quite a while.\n\nDavid\n\n\n",
"msg_date": "Wed, 3 Mar 2021 21:52:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 5:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 3 Mar 2021 at 21:29, Andrey Lepikhov <a.lepikhov@postgrespro.ru> wrote:\n> >\n> > Playing with a large value of partitions I caught the limit with 65000\n> > table entries in a query plan:\n> >\n> > if (IS_SPECIAL_VARNO(list_length(glob->finalrtable)))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n> > errmsg(\"too many range table entries\")));\n> >\n> > Postgres works well with so many partitions.\n> > The constants INNER_VAR, OUTER_VAR, INDEX_VAR are used as values of the\n> > variable 'var->varno' of integer type. As I see, they were introduced\n> > with commit 1054097464 authored by Marc G. Fournier, in 1996.\n> > Value 65000 was relevant to the size of the int type at that time.\n> >\n> > Maybe we will change these values to INT_MAX? (See the patch in attachment).\n>\n> I don't really see any reason not to increase these a bit, but I'd\n> rather we kept them at some realistic maximum rather than all-out went\n> to INT_MAX.\n>\n> I imagine a gap was left between 65535 and 65000 to allow space for\n> more special varno in the future. We did get INDEX_VAR since then, so\n> it seems like it was probably a good idea to leave a gap.\n>\n> The problem I see what going close to INT_MAX is that the ERROR you\n> mention is unlikely to work correctly since a list_length() will never\n> get close to having INT_MAX elements before palloc() would exceed\n> MaxAllocSize for the elements array.\n>\n> Something like 1 million seems like a more realistic limit to me.\n> That might still be on the high side, but it'll likely mean we'd not\n> need to revisit this for quite a while.\n\n+1\n\nAlso, I got reminded of this discussion from not so long ago:\n\nhttps://www.postgresql.org/message-id/flat/16302-e45634e2c0e34e97%40postgresql.org\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Mar 2021 17:56:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 4:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Mar 3, 2021 at 5:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Something like 1 million seems like a more realistic limit to me.\n> > That might still be on the high side, but it'll likely mean we'd not\n> > need to revisit this for quite a while.\n>\n> +1\n>\n> Also, I got reminded of this discussion from not so long ago:\n>\n> https://www.postgresql.org/message-id/flat/16302-e45634e2c0e34e97%40postgresql.org\n\n+1\n\n\n",
"msg_date": "Wed, 3 Mar 2021 17:52:10 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Also, I got reminded of this discussion from not so long ago:\n\n> https://www.postgresql.org/message-id/flat/16302-e45634e2c0e34e97%40postgresql.org\n\nYeah. Nobody seems to have pursued Peter's idea of changing the magic\nvalues to small negative ones, but that seems like a nicer idea than\narguing over what large positive value is large enough.\n\n(Having said that, I remain pretty dubious that we're anywhere near\ngetting any real-world use out of such a change.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 03 Mar 2021 10:06:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 3/3/21 12:52, Julien Rouhaud wrote:\n> On Wed, Mar 3, 2021 at 4:57 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>\n>> On Wed, Mar 3, 2021 at 5:52 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>> Something like 1 million seems like a more realistic limit to me.\n>>> That might still be on the high side, but it'll likely mean we'd not\n>>> need to revisit this for quite a while.\n>>\n>> +1\n>>\n>> Also, I got reminded of this discussion from not so long ago:\n>>\n>> https://www.postgresql.org/message-id/flat/16302-e45634e2c0e34e97%40postgresql.org\nThank you\n> \n> +1\n> \nOk. I changed the value to 1 million and explained this decision in the\ncomment.\nThis issue is caused by two cases:\n1. Range partitioning on a timestamp column.\n2. Hash partitioning.\nUsers use range distribution by timestamp because they want to insert\nnew data quickly and analyze the entire set of data.\nAlso, in some discussions, I see Oracle users discussing issues with\nmore than 1e5 partitions.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 4 Mar 2021 10:43:56 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "\n\nOn 3/4/21 8:43 AM, Andrey Lepikhov wrote:\n> On 3/3/21 12:52, Julien Rouhaud wrote:\n>> On Wed, Mar 3, 2021 at 4:57 PM Amit Langote <amitlangote09@gmail.com>\n>> wrote:\n>>>\n>>> On Wed, Mar 3, 2021 at 5:52 PM David Rowley <dgrowleyml@gmail.com>\n>>> wrote:\n>>>> Something like 1 million seems like a more realistic limit to me.\n>>>> That might still be on the high side, but it'll likely mean we'd not\n>>>> need to revisit this for quite a while.\n>>>\n>>> +1\n>>>\n>>> Also, I got reminded of this discussion from not so long ago:\n>>>\n>>> https://www.postgresql.org/message-id/flat/16302-e45634e2c0e34e97%40postgresql.org\n>>>\n> Thank you\n>>\n>> +1\n>>\n> Ok. I changed the value to 1 million and explained this decision in the\n> comment.\n\nIMO just bumping up the constants from ~65k to 1M is a net loss, for\nmost users. We add this to bitmapsets, which means we're using ~8kB with\nthe current values, but this jumps to 128kB with this higher value. This\nalso means bms_next_member etc. have to walk much more memory, which is\nbound to have some performance impact for everyone.\n\nSwitching to small negative values is a much better idea, but it's going\nto be more invasive - we'll have to offset the values in the bitmapsets,\nor we'll have to invent a new bitmapset variant that can store negative\nvalues directly (e.g. by keeping two separate bitmaps internally, one\nfor negative and one for positive values). But that complicates other\nstuff too (e.g. bms_next_member now returns -1 to signal \"end\").\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Mar 2021 14:59:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
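The memory figures quoted in the message above can be checked with back-of-envelope arithmetic. This hypothetical sketch assumes a 64-bit bitmapword, so a bitmapset that must hold member N needs N/64 + 1 words of 8 bytes each (header overhead ignored):

```python
def bitmapset_bytes(max_member):
    """Approximate payload size of a PostgreSQL-style bitmapset whose
    largest member is max_member (64-bit words assumed)."""
    words = max_member // 64 + 1
    return words * 8

# ~8 kB at the current ~65k special-varno values
size_65k = bitmapset_bytes(65535)
# roughly the 128 kB figure if a special varno near 1M were added to a set
size_1m = bitmapset_bytes(1000000)
```

The same arithmetic shows why walking such a set with bms_next_member gets slower: the number of words scanned grows linearly with the largest member, not with the number of members.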
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> IMO just bumping up the constants from ~65k to 1M is a net loss, for\n> most users. We add this to bitmapsets, which means we're using ~8kB with\n> the current values, but this jumps to 128kB with this higher value. This\n> also means bms_next_member etc. have to walk much more memory, which is\n> bound to have some performance impact for everyone.\n\nHmm, do we really have any places that include OUTER_VAR etc in\nbitmapsets? They shouldn't appear in relid sets, for sure.\nI agree though that if they did, this would have bad performance\nconsequences.\n\nI still think the negative-special-values approach is better.\nIf there are any places that that would break, we'd find out about\nit in short order, rather than having a silent performance lossage.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 10:16:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 3/4/21 4:16 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> IMO just bumping up the constants from ~65k to 1M is a net loss, for\n>> most users. We add this to bitmapsets, which means we're using ~8kB with\n>> the current values, but this jumps to 128kB with this higher value. This\n>> also means bms_next_member etc. have to walk much more memory, which is\n>> bound to have some performance impact for everyone.\n> \n> Hmm, do we really have any places that include OUTER_VAR etc in\n> bitmapsets? They shouldn't appear in relid sets, for sure.\n> I agree though that if they did, this would have bad performance\n> consequences.\n> \n\nHmmm, I don't know. I mostly assumed that if I do pull_varnos() it would\ninclude those values. But maybe that's not supposed to happen.\n\n> I still think the negative-special-values approach is better.\n> If there are any places that that would break, we'd find out about\n> it in short order, rather than having a silent performance lossage.\n> \n\nOK\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Mar 2021 16:34:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 3/4/21 4:16 PM, Tom Lane wrote:\n>> Hmm, do we really have any places that include OUTER_VAR etc in\n>> bitmapsets? They shouldn't appear in relid sets, for sure.\n>> I agree though that if they did, this would have bad performance\n>> consequences.\n\n> Hmmm, I don't know. I mostly assumed that if I do pull_varnos() it would\n> include those values. But maybe that's not supposed to happen.\n\nBut (IIRC) those varnos are never used till setrefs.c fixes up the plan\nto replace normal Vars with references to lower plan nodes' outputs.\nI'm not sure why anyone would be doing pull_varnos() after that;\nit would not give very meaningful results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 13:11:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Just as a proof of concept, I tried the attached, and it passes\ncheck-world. So if there's anyplace trying to stuff OUTER_VAR and\nfriends into bitmapsets, it's pretty far off the beaten track.\n\nThe main loose ends that'd have to be settled seem to be:\n\n(1) What data type do we want Var.varno to be declared as? In the\nprevious thread, Robert opined that plain \"int\" isn't a good choice,\nbut I'm not sure I agree. There's enough \"int\" for rangetable indexes\nall over the place that it'd be a fool's errand to try to make it\nuniformly something different.\n\n(2) Does that datatype change need to propagate anywhere besides\nwhat I touched here? I did not make any effort to search for\nother places.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 04 Mar 2021 14:01:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
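The proof-of-concept direction above — moving the special varnos to small negative values so real rangetable indexes face no artificial ceiling — can be mimicked in a few lines. The concrete values below are illustrative stand-ins, not the ones the patch assigns:

```python
# Illustrative stand-ins for the planner's special varnos; the real
# patch chooses its own negative values.
INNER_VAR = -1
OUTER_VAR = -2
INDEX_VAR = -3
ROWID_VAR = -4

def is_special_varno(varno):
    """With negative special varnos, the check becomes a plain sign
    test, so ordinary varnos can grow without a 65000-style cap."""
    return varno < 0
```

The flip side, as noted earlier in the thread, is that any code doing inequality comparisons on varno (or stuffing varnos into structures that assume non-negative values) now needs the opposite guard, which is where the run-time checks come from.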
{
"msg_contents": "On 04.03.21 20:01, Tom Lane wrote:\n> Just as a proof of concept, I tried the attached, and it passes\n> check-world. So if there's anyplace trying to stuff OUTER_VAR and\n> friends into bitmapsets, it's pretty far off the beaten track.\n> \n> The main loose ends that'd have to be settled seem to be:\n> \n> (1) What data type do we want Var.varno to be declared as? In the\n> previous thread, Robert opined that plain \"int\" isn't a good choice,\n> but I'm not sure I agree. There's enough \"int\" for rangetable indexes\n> all over the place that it'd be a fool's errand to try to make it\n> uniformly something different.\n\nint seems fine.\n\n> (2) Does that datatype change need to propagate anywhere besides\n> what I touched here? I did not make any effort to search for\n> other places.\n\nI think\n\nVar.varnosyn\nCurrentOfExpr.cvarno\n\nshould also have their type changed.\n\n\n",
"msg_date": "Sat, 6 Mar 2021 09:43:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 04.03.21 20:01, Tom Lane wrote:\n>> (2) Does that datatype change need to propagate anywhere besides\n>> what I touched here? I did not make any effort to search for\n>> other places.\n\n> I think\n\n> Var.varnosyn\n> CurrentOfExpr.cvarno\n\n> should also have their type changed.\n\nAgreed as to CurrentOfExpr.cvarno. But I think the entire point of\nvarnosyn is that it saves the original rangetable reference and\n*doesn't* get overwritten with OUTER_VAR etc. So that one is a\ndifferent animal, and I'm inclined to leave it as Index.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 06 Mar 2021 09:59:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 06.03.21 15:59, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 04.03.21 20:01, Tom Lane wrote:\n>>> (2) Does that datatype change need to propagate anywhere besides\n>>> what I touched here? I did not make any effort to search for\n>>> other places.\n> \n>> I think\n> \n>> Var.varnosyn\n>> CurrentOfExpr.cvarno\n> \n>> should also have their type changed.\n> \n> Agreed as to CurrentOfExpr.cvarno. But I think the entire point of\n> varnosyn is that it saves the original rangetable reference and\n> *doesn't* get overwritten with OUTER_VAR etc. So that one is a\n> different animal, and I'm inclined to leave it as Index.\n\nCan we move forward with this?\n\nI suppose there was still some uncertainty about whether all the places \nthat need changing have been identified, but do we have a better idea \nhow to find them?\n\n\n\n\n",
"msg_date": "Wed, 7 Apr 2021 15:35:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Can we move forward with this?\n\n> I suppose there was still some uncertainty about whether all the places \n> that need changing have been identified, but do we have a better idea \n> how to find them?\n\nWe could just push the change and see what happens. But I was thinking\nmore in terms of doing that early in the v15 cycle. I remain skeptical\nthat we need a near-term fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 09:40:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Can we move forward with this?\n\n> We could just push the change and see what happens. But I was thinking\n> more in terms of doing that early in the v15 cycle. I remain skeptical\n> that we need a near-term fix.\n\nTo make sure we don't forget, I added an entry to the next CF for this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Apr 2021 23:13:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 4/8/21 8:13 AM, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> Can we move forward with this?\n> \n>> We could just push the change and see what happens. But I was thinking\n>> more in terms of doing that early in the v15 cycle. I remain skeptical\n>> that we need a near-term fix.\n> \n> To make sure we don't forget, I added an entry to the next CF for this.\nThanks for your efforts.\n\nI tried to dive deeper: replace ROWID_VAR with -4 and explicitly change\nthe types of varnos in the declarations of functions that can only work with\nspecial varnos.\nThe use cases of OUTER_VAR look simple (I guess). The use cases of INNER_VAR are\nmore complex because of map_variable_attnos(). We need to\nanalyze how a negative value of INNER_VAR can affect this function.\n\nINDEX_VAR causes a potential problem:\nin ExecInitForeignScan() and ExecInitCustomScan() we do\ntlistvarno = INDEX_VAR;\n\nand here tlistvarno has a non-negative type.\n\n\nROWID_VAR caused two problems in the check-world tests:\nset_pathtarget_cost_width():\nif (var->varno < root->simple_rel_array_size)\n{\n\tRelOptInfo *rel = root->simple_rel_array[var->varno];\n...\n\nand\n\nreplace_nestloop_params_mutator():\nif (!bms_is_member(var->varno, root->curOuterRels))\n\nI skipped these problems to see other weak points, but check-world\ncouldn't find any others.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Thu, 8 Apr 2021 10:24:22 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Here's a more fleshed-out version of this patch. I ran around and\nfixed all the places where INNER_VAR etc. were being assigned directly to\na variable or parameter of type Index, and also grepped for 'Index.*varno'\nto find suspicious declarations. (I didn't change every last instance\nof the latter though; just places that could possibly be looking at\npost-setrefs.c Vars.)\n\nI concluded that we don't really need to change the type of\nCurrentOfExpr.cvarno, because that's never set to a special value.\n\nThe main thing I remain concerned about is whether there are more\nplaces like set_pathtarget_cost_width(), where we could be making\nan inequality comparison on \"varno\" that would now be wrong.\nI tried to catch this by enabling -Wsign-compare and -Wsign-conversion,\nbut that produced so many thousands of uninteresting warnings that\nI soon gave up. I'm not sure there's any good way to catch remaining\nplaces like that except to commit the patch and wait for trouble\nreports.\n\nSo I'm inclined to propose pushing this and seeing what happens.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 02 Jul 2021 14:23:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On Sat, 3 Jul 2021 at 06:23, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I'm inclined to propose pushing this and seeing what happens.\n\nIs this really sane?\n\nAs much as I would like to see the 65k limit removed, I just have\nreservations about fixing it in this way. Even if we get all the\ncases fixed in core, there's likely a whole bunch of extensions\nthat'll have bugs as a result of this for many years to come.\n\n\"git grep \\sIndex\\s -- *.[ch] | wc -l\" is showing me 77 matches in the\nCitus code. That's not the only extension that uses the planner hook.\n\nI'm really just not sure it's worth all the dev hours fixing the\nfallout. To me, it seems much safer to just bump 65k up to 1m. It'll\nbe a while before anyone complains about that.\n\nIt's also not that great to see the number of locations that you\nneeded to add run-time checks for negative varnos. That's not going to\ncome for free.\n\nDavid\n\n\n",
"msg_date": "Mon, 5 Jul 2021 01:51:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Is this really sane?\n\n> As much as I would like to see the 65k limit removed, I just have\n> reservations about fixing it in this way. Even if we get all the\n> cases fixed in core, there's likely a whole bunch of extensions\n> that'll have bugs as a result of this for many years to come.\n\nMaybe. I'm not that concerned about planner hacking: almost all of\nthe planner is only concerned with pre-setrefs.c representations and\nwill never see these values. Still, the fact that we had to inject\na couple of explicit IS_SPECIAL_VARNO tests is a bit worrisome.\n(I'm more surprised really that noplace in the executor needed it.)\nFWIW, experience with those places says that such bugs will be\nexposed immediately; it's not like they'd lurk undetected \"for years\".\n\nYou might argue that the int-vs-Index declaration changes are\nsomething that would be much harder to get right, but in reality\nthose are almost entirely cosmetic. We could make them completely\nso by changing the macro to\n\n#define IS_SPECIAL_VARNO(varno)\t\t((int) (varno) < 0)\n\nso that it'd still do the right thing when applied to a variable\ndeclared as Index. (In the light of morning, I'm not sure why\nI didn't do that already.) But we've always been extremely\ncavalier about whether RT indexes should be declared as int or\nIndex, so I felt that standardizing on the former was actually\na good side-effect of the patch.\n\nAnyway, to address your point more directly: as I recall, the main\nobjection to just increasing the values of these constants was the\nfear that it'd bloat bitmapsets containing these values. Now on\nthe one hand, this patch has proven that noplace in the core code\ndoes that today. On the other hand, there's no certainty that\nsomeone might not try to do that tomorrow (if we don't fix it as\nper this patch); or extensions might be doing so.\n\n> I'm really just not sure it's worth all the dev hours fixing the\n> fallout. To me, it seems much safer to jump bump 65k up to 1m. It'll\n> be a while before anyone complains about that.\n\nTBH, if we're to approach it that way, I'd be inclined to go for\nbroke and raise the values to ~2B. Then (a) we'll be shut of the\nproblem pretty much permanently, and (b) if someone does try to\nmake a bitmapset containing these values, hopefully they'll see\nperformance bad enough to expose the issue immediately.\n\n> It's also not that great to see the number of locations that you\n> needed to add run-time checks for negative varnos. That's not going to\n> come for free.\n\nSince the test is just \"< 0\", I pretty much disbelieve that argument.\nThere are only two such places in the patch, and neither of them\nare *that* performance-sensitive.\n\nAnyway, the raise-the-values solution does have the advantage of\nbeing a four-liner, so I can live with it if that's the consensus.\nBut I do think this way is cleaner in the long run, and I doubt\nthe argument that it'll create any hard-to-detect bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Jul 2021 11:37:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 2/7/21 21:23, Tom Lane wrote:\n> So I'm inclined to propose pushing this and seeing what happens.\n\n+1\nBut why is the Index type still used for indexing range table entries?\nFor example:\n- we pass an int resultRelation value to create_modifytable_path() as the Index \nnominalRelation value.\n- exec_rt_fetch(Index) calls list_nth(int).\n- generate_subquery_vars() accepts an 'Index varno' value.\n\nIt looks sloppy. Do you plan to change this in the next commits?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Mon, 5 Jul 2021 10:51:03 +0300",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Hi hackers,\n\n> > So I'm inclined to propose pushing this and seeing what happens.\n>\n> +1\n\n+1. The proposed changes will be beneficial in the long term. They\nwill affect existing extensions. However, the scale of the problem\nseems to be exaggerated.\n\nI can confirm that the patch passes installcheck-world. After some\nsearching through the code, I was unable to identify any places where\nthe logic will break. Although this only proves my inattention, the\neasiest way to make any further progress seems to apply the patch.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 10 Sep 2021 17:44:25 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> +1. The proposed changes will be beneficial in the long term. They\n> will affect existing extensions. However, the scale of the problem\n> seems to be exaggerated.\n\nYeah, after thinking more about this I agree we should just do it.\nI do not say that David's concerns about effects on extensions are\nwithout merit, but I do think he's overblown it a bit. Most of\nthe patch is s/Index/int/ for various variables, and as I mentioned\nbefore, that's basically cosmetic; there's no strong reason why\nextensions have to follow suit. (In the attached v2, I modified\nIS_SPECIAL_VARNO() as discussed, so it will do the right thing\neven if the input is declared as Index.) There may be a few\nplaces where extensions need to add explicit IS_SPECIAL_VARNO()\ncalls, but not many, and I doubt they'll be hard to find.\n\nThe alternative of increasing the values of OUTER_VAR et al\nis not without risk to extensions either, so on the whole\nI don't think this patch is any more problematic than many\nother things we commit with little debate.\n\nIn any case, since it's still very early in the v15 cycle,\nthere is plenty of time for people to find problems. If I'm\nwrong and there are serious consequences, we can always revert\nthis and do it the other way.\n\n(v2 below is a rebase up to HEAD; no actual code changes except\nfor adjusting the definition of IS_SPECIAL_VARNO.)\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 11 Sep 2021 13:37:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> But why the Index type still uses for indexing of range table entries?\n> For example:\n> - we give int resultRelation value to create_modifytable_path() as Index \n> nominalRelation value.\n> - exec_rt_fetch(Index) calls list_nth(int).\n> - generate_subquery_vars() accepts an 'Index varno' value\n\nAs I mentioned, the patch only intends to touch code that's possibly\nused with post-setrefs Vars. In the parser and most of the planner,\nthere's little need to do anything because only positive varno values\nwill appear. So touching that code would just make the patch more\ninvasive without accomplishing much.\n\nIf we'd had any strong convention about whether RT indexes should be\nint or Index, I might be worried about maintaining consistency.\nBut it's always been a horrid mishmash of both ways. Cleaning that\nup completely is a task I don't care to undertake right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 11 Sep 2021 13:42:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 9/11/21 10:37 PM, Tom Lane wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n> (v2 below is a rebase up to HEAD; no actual code changes except\n> for adjusting the definition of IS_SPECIAL_VARNO.)\nI have looked at this code. No problems found.\nAlso, as a test, I used two tables with 1E5 partitions each. I tried to \ndo plain SELECT, JOIN, join with plain table. No errors found, only \nperformance issues. But it is a subject for another research.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Tue, 14 Sep 2021 11:43:03 +0500",
"msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Hi Andrey,\n\n> only performance issues\n\nThat's interesting. Any chance you could share the hardware\ndescription, the configuration file, and steps to reproduce with us?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 14 Sep 2021 14:37:26 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru> writes:\n> Also, as a test, I used two tables with 1E5 partitions each. I tried to \n> do plain SELECT, JOIN, join with plain table. No errors found, only \n> performance issues. But it is a subject for another research.\n\nYeah, there's no expectation that the performance would be any\ngood yet ;-)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Sep 2021 10:01:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "On 14/9/21 16:37, Aleksander Alekseev wrote:\n> Hi Andrey,\n> \n>> only performance issues\n> \n> That's interesting. Any chance you could share the hardware\n> description, the configuration file, and steps to reproduce with us?\n> \nI didn't measure execution time exactly, because it is a join of two \nempty tables. As I saw, this join used most of the 48GB of RAM and \nplanned all day on a typical 6-core AMD machine.\nI guess this is caused by sequential traversal of the partition list in \nsome places in the optimizer.\nIf it makes practical sense, I could investigate the reasons for such poor \nperformance.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Wed, 15 Sep 2021 11:41:38 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Increase value of OUTER_VAR"
},
{
"msg_contents": "Hi Andrey,\n\n> >> only performance issues\n> >\n> > That's interesting. Any chance you could share the hardware\n> > description, the configuration file, and steps to reproduce with us?\n> >\n> I didn't control execution time exactly. Because it is a join of two\n> empty tables. As I see, this join used most part of 48GB RAM memory,\n> planned all day on a typical 6 amd cores computer.\n> I guess this is caused by sequental traversal of the partition list in\n> some places in the optimizer.\n> If it makes practical sense, I could investigate reasons for such poor\n> performance.\n\nLet's say, any information regarding bottlenecks that affect real users\nwith real queries is of interest. Artificially created queries that are\nunlikely to be ever executed by anyone are not.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 15 Sep 2021 11:01:43 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Increase value of OUTER_VAR"
}
] |
[
{
"msg_contents": "Oracle:\nhttps://docs.oracle.com/en/database/oracle/oracle-database/18/adfns/regexp.html#GUID-F14733F3-B943-4BAD-8489-F9704986386B\nIBM:\nhttps://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0061494.html?pos=2\nZ/OS:\nhttps://www.ibm.com/support/knowledgecenter/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_regexplike.html\nEDB:\nhttps://www.enterprisedb.com/edb-docs/d/edb-postgres-advanced-server/reference/database-compatibility-for-oracle-developers-reference-guide/9.6/Database_Compatibility_for_Oracle_Developers_Reference_Guide.1.098.html\n\n\nHi,\n\n\nI would like to suggest adding the $subject functions to PostgreSQL. We\ncan do a lot of things using regexp_matches() and regexp_replace() but\nsometimes it consists of building complex queries that these functions\ncan greatly simplify.\n\n\nIt looks like all RDBMS that embed a regexp engine implement these\nfunctions (Oracle, DB2, MySQL, etc) but I don't know if they are part of\nthe SQL standard. Probably using regexp_matches() can be enough even if\nit generates more complex statements but having these functions in\nPostgreSQL could be useful for users and code coming from these RDBMS.\n\n\n - REGEXP_COUNT( string text, pattern text, [, position int] [, flags\ntext ] ) -> integer\n\n Return the number of times a pattern occurs in a source string\nafter a certain position, by default from the beginning.\n\n\n It can be implemented in PostgreSQL as a subquery using:\n\n SELECT count(*) FROM regexp_matches('A1B2C3', '[A-Z][0-9]',\n'g'); -> 3\n\n To support positioning we have to use substr(), for example\nstarting at position 2:\n\n SELECT count(*) FROM regexp_matches(substr('A1B2C3', 2),\n'[A-Z][0-9]'); -> 2\n\n With regexp_count() we can simply use it like this:\n\n SELECT regexp_count('A1B2C3', '[A-Z][0-9]'); -> 3\n SELECT regexp_count('A1B2C3', '[A-Z][0-9]', 2); -> 2\n\n\n - REGEXP_INSTR( string text, pattern text, [, position int] [,\noccurrence int] [, return_opt int ] [, flags text ] [, group int] ) ->\ninteger\n\n Return the position in a string for a regular expression\npattern. It returns an integer indicating the beginning or ending\nposition of the matched substring, depending on the value of the\nreturn_opt argument (default beginning). If no match is found, then the\nfunction returns 0.\n\n * position: indicates the character where the search should\nbegin.\n * occurrence: indicates which occurrence of pattern found in\nstring should be searched.\n * return_opt: 0 means return the position of the first\ncharacter of the occurrence, 1 means return the position of the\ncharacter following the occurrence.\n * flags: regular expression modifiers.\n * group: indicates which subexpression in pattern is the\ntarget of the function.\n\n Example:\n\n SELECT regexp_instr('1234567890', '(123)(4(56)(78))', 1, 1,\n0, 'i', 4); -> 7\n\n to obtain a PostgreSQL equivalent:\n\n SELECT position((SELECT (regexp_matches('1234567890',\n'(123)(4(56)(78))', 'ig'))[4] offset 0 limit 1) IN '1234567890');\n\n\n - REGEXP_SUBSTR( string text, pattern text, [, position int] [,\noccurrence int] [, flags text ] [, group int] ) -> text\n\n It is similar to regexp_instr(), but instead of returning the\nposition of the substring, it returns the substring itself.\n\n Example:\n\n SELECT regexp_substr('500 gilles''s street, 38000 Grenoble,\nFR', ',[^,]+,'); -> , 38000 Grenoble,\n\n or with a more complex extraction:\n\n SELECT regexp_substr('1234567890', '(123)(4(56)(78))', 1, 1,\n'i', 4); -> 78\n SELECT regexp_substr('1234567890 1234557890',\n'(123)(4(5[56])(78))', 1, 2, 'i', 3); -> 55\n\n To obtain the same result for the last example we have to use:\n\n SELECT (SELECT * FROM regexp_matches('1234567890\n1234557890', '(123)(4(5[56])(78))', 'g') offset 1 limit 2)[3];\n\n\nI have not implemented the regexp_like() function; it is quite similar\nto the ~ and ~* operators except that it can also support other\nmodifiers than 'i'. I can implement it easily and add it to the patch if\nwe want to support all those common functions.\n\n - REGEXP_LIKE( string text, pattern text, [, flags text ] ) -> boolean\n\n Similar to the LIKE condition, except that it performs regular\nexpression matching instead of the simple pattern matching performed by\nLIKE.\n\n Example:\n\n SELECT * FROM t1 WHERE regexp_like(col1, '^d$', 'm');\n\n to obtain a PostgreSQL equivalent:\n\n SELECT * FROM t1 WHERE regexp_match (col1, '^d$', 'm' ) IS\nNOT NULL;\n\n\nThere is also a possible extension to regexp_replace() that I have not\nimplemented yet because it needs more work than the previous functions.\n\n\n - REGEXP_REPLACE( string text, pattern text, replace_string text, [,\nposition int] [, occurrence int] [, flags text ] )\n\n Extend PostgreSQL regexp_replace() by adding position and occurrence\ncapabilities.\n\nThe patch is ready for testing with documentation and regression tests.\n\n\nBest regards,\n\n-- \nGilles Darold\nLzLabs GmbH\n\n\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:15:57 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "[PATCH] proposal for regexp_count, regexp_instr, regexp_substr and\n regexp_replace"
},
{
"msg_contents": "My apologies for the links in the head, the email formatting and the\nmissing patch; I accidentally sent the email too early.\n\n--\n\nGilles",
"msg_date": "Wed, 3 Mar 2021 10:22:16 +0100",
"msg_from": "Gilles Darold <gillesdarold@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Hi,\n\n\nThis is a new version of the patch that now implements all the XQUERY\nregexp functions as described in the standard, minus the differences of\nPostgreSQL regular expressions explained in [1].\n\n\nThe SQL standard describes the functions like_regex(), occurrences_regex(),\nposition_regex(), substring_regex() and translate_regex(), which\ncorrespond to the commonly named functions regexp_like(),\nregexp_count(), regexp_instr(), regexp_substr() and regexp_replace() as\nreported by Chapman Flack in [2]. All these functions are implemented in\nthe patch. The syntax of the functions is:\n\n\n- regexp_like(string, pattern [, flags ])\n\n- regexp_count( string, pattern [, position ] [, flags ])\n\n- regexp_instr( string, pattern [, position ] [, occurrence ] [,\nreturnopt ] [, flags ] [, group ])\n\n- regexp_substr( string, pattern [, position ] [, occurrence ] [, flags\n] [, group ])\n\n- regexp_replace(source, pattern, replacement [, position ] [,\noccurrence ] [, flags ])\n\n\nIn addition to the previous patch version I have added the regexp_like()\nfunction and extended the existing regexp_replace() function. The patch\ndocuments these functions and adds regression tests for all functions. I\nwill add it to the commitfest.\n\n\nAnother regexp function, regexp_positions(), which returns all\noccurrences that matched a POSIX regular expression, is also developed\nby Joel Jacobson; see [2]. This function expands the list of regexp\nfunctions described in XQUERY.\n\n\n[1]\nhttps://www.postgresql.org/docs/13/functions-matching.html#FUNCTIONS-POSIX-REGEXP\n\n[2]\nhttps://www.postgresql.org/message-id/flat/bf2222d5-909d-408b-8531-95b32f18d4ab%40www.fastmail.com#3ec8ba658eeabcae2ac6ccca33bd1aed\n\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/",
"msg_date": "Sat, 20 Mar 2021 19:48:48 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 20/03/2021 à 19:48, Gilles Darold a écrit :\n>\n> Hi,\n>\n>\n> This is a new version of the patch that now implements all the XQUERY\n> regexp functions as described in the standard, minus the differences\n> of PostgerSQL regular expression explain in [1].\n>\n>\n> The standard SQL describe functions like_regex(), occurrences_regex(),\n> position_regex(), substring_regex() and translate_regex() which\n> correspond to the commonly named functions regexp_like(),\n> regexp_count(), regexp_instr(), regexp_substr() and regexp_replace()\n> as reported by Chapman Flack in [2]. All these function are\n> implemented in the patch. Syntax of the functions are:\n>\n>\n> - regexp_like(string, pattern [, flags ])\n>\n> - regexp_count( string, pattern [, position ] [, flags ])\n>\n> - regexp_instr( string, pattern [, position ] [, occurrence ] [,\n> returnopt ] [, flags ] [, group ])\n>\n> - regexp_substr( string, pattern [, position ] [, occurrence ] [,\n> flags ] [, group ])\n>\n> - regexp_replace(source, pattern, replacement [, position ] [,\n> occurrence ] [, flags ])\n>\n>\n> In addition to previous patch version I have added the regexp()_like\n> function and extended the existsing regex_replace() function. The\n> patch documents these functions and adds regression tests for all\n> functions. I will add it to the commitfest.\n>\n>\n> An other regexp functions regexp_positions() that returns all\n> occurrences that matched a POSIX regular expression is also developped\n> by Joel Jacobson, see [2]. This function expands the list of regexp\n> functions described in XQUERY.\n>\n>\n> [1]\n> https://www.postgresql.org/docs/13/functions-matching.html#FUNCTIONS-POSIX-REGEXP\n>\n> [2]\n> https://www.postgresql.org/message-id/flat/bf2222d5-909d-408b-8531-95b32f18d4ab%40www.fastmail.com#3ec8ba658eeabcae2ac6ccca33bd1aed\n>\n>\n\nI would like to see these functions in PG 14 but it is a bit too late,\nadded to commitfest 2021-07.\n\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/",
"msg_date": "Sun, 21 Mar 2021 10:21:17 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "> On 2021.03.20. 19:48 Gilles Darold <gilles@darold.net> wrote:\n> \n> This is a new version of the patch that now implements all the XQUERY\n> regexp functions as described in the standard, minus the differences of\n> PostgerSQL regular expression explain in [1].\n> \n> The standard SQL describe functions like_regex(), occurrences_regex(),\n> position_regex(), substring_regex() and translate_regex() which\n> correspond to the commonly named functions regexp_like(),\n> regexp_count(), regexp_instr(), regexp_substr() and regexp_replace() as\n> reported by Chapman Flack in [2]. All these function are implemented in\n\n> [v2-0001-xquery-regexp-functions.patch]\n\nHi,\n\nApply, compile and (world)check are fine. I haven't found errors in functionality.\n\nI went through the docs, and came up with these changes in func.sgml, and pg_proc.dat.\n\nUseful functions - thanks!\n\nErik Rijkers",
"msg_date": "Sun, 21 Mar 2021 12:07:37 +0100 (CET)",
"msg_from": "er@xs4all.nl",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 21/03/2021 à 12:07, er@xs4all.nl a écrit :\n>> On 2021.03.20. 19:48 Gilles Darold <gilles@darold.net> wrote:\n>> \n>> This is a new version of the patch that now implements all the XQUERY\n>> regexp functions as described in the standard, minus the differences of\n>> PostgerSQL regular expression explain in [1].\n>>\n>> The standard SQL describe functions like_regex(), occurrences_regex(),\n>> position_regex(), substring_regex() and translate_regex() which\n>> correspond to the commonly named functions regexp_like(),\n>> regexp_count(), regexp_instr(), regexp_substr() and regexp_replace() as\n>> reported by Chapman Flack in [2]. All these function are implemented in\n>> [v2-0001-xquery-regexp-functions.patch]\n> Hi,\n>\n> Apply, compile and (world)check are fine. I haven't found errors in functionality.\n>\n> I went through the docs, and came up with these changes in func.sgml, and pg_proc.dat.\n>\n> Useful functions - thanks!\n>\n> Erik Rijkers\n\n\nThanks a lot Erik, here is a version of the patch with your corrections.\n\n\n-- \nGilles Darold\nLzLabs GmbH\nhttp://www.lzlabs.com/",
"msg_date": "Sun, 21 Mar 2021 14:19:13 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "On 03/21/21 09:19, Gilles Darold wrote:\n>>> On 2021.03.20. 19:48 Gilles Darold <gilles@darold.net> wrote:\n>>> \n>>> This is a new version of the patch that now implements all the XQUERY\n>>> regexp functions as described in the standard, minus the differences of\n>>> PostgerSQL regular expression explain in [1].\n>>>\n>>> The standard SQL describe functions like_regex(), occurrences_regex(),\n>>> position_regex(), substring_regex() and translate_regex() which\n>>> correspond to the commonly named functions regexp_like(),\n>>> regexp_count(), regexp_instr(), regexp_substr() and regexp_replace() as\n>>> reported by Chapman Flack in [2]. All these function are implemented in\n>>> [v2-0001-xquery-regexp-functions.patch]\n\nI quickly looked over this patch preparing to object if it actually\npurported to implement the ISO foo_regex() named functions without\nthe ISO semantics, but a quick grep reassured me that it doesn't\nimplement any of those functions. It only supplies functions in\nthe alternative, apparently common de facto naming scheme regexp_foo().\n\nTo be clear, I think that's the right call. I do not think it would be\na good idea to supply functions that have the ISO names but not the\nspecified regex dialect.\n\nA set of functions analogous to the ISO ones but differently named and\nwith a different regex dialect seems fine to me, especially if these\ndifferent names are de facto common, and as far as I can tell, that is\nwhat this patch provides. So I have no objection to that. :)\n\nIt might then be fair to say that the /description/ of the patch as\nimplementing the XQuery-based foo_regex functions isn't quite right,\nor at least carries a risk of jarring some readers into hasty\ndouble-takes on Sunday mornings before coffee.\n\nIt might be clearer to just mention the close correspondence between\nthe functions in this differently-named set and the corresponding ISO ones.\n\nIf this turns out to be a case of \"attached the wrong patch, here's\nthe one that does implement foo_regex functions!\" then I reserve an\nobjection to that. :)\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sun, 21 Mar 2021 10:42:34 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> If this turns out to be a case of \"attached the wrong patch, here's\n> the one that does implement foo_regex functions!\" then I reserve an\n> objection to that. :)\n\n+1 to that. Just to add a note, I do have some ideas about extending\nour regex parser so that it could duplicate the XQuery syntax --- none\nof the points we mention in 9.7.3.8 seem insurmountable. I'm not\nplanning to work on that in the near future, mind you, but I definitely\nthink that we don't want to paint ourselves into a corner where we've\nalready implemented the XQuery regex functions with the wrong behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Mar 2021 10:53:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "Le 21/03/2021 à 15:53, Tom Lane a écrit :\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> If this turns out to be a case of \"attached the wrong patch, here's\n>> the one that does implement foo_regex functions!\" then I reserve an\n>> objection to that. :)\n> +1 to that. Just to add a note, I do have some ideas about extending\n> our regex parser so that it could duplicate the XQuery syntax --- none\n> of the points we mention in 9.7.3.8 seem insurmountable. I'm not\n> planning to work on that in the near future, mind you, but I definitely\n> think that we don't want to paint ourselves into a corner where we've\n> already implemented the XQuery regex functions with the wrong behavior.\n>\n> \t\t\tregards, tom lane\n\n\nI apologize for confusing with the words and phrases I have used. This\npatch implements the regexp_foo () functions which are available in most\nRDBMS with the behavior described in the documentation. I have modified\nthe title of the patch in the commitfest to removed wrong use of XQUERY. \n\n\nI don't know too if the other RDBMS respect the XQUERY behavior but for\nwhat I've seen for Oracle they are using limited regexp modifiers with\nsometime not the same letter than PostgreSQL for the same behavior. I\nhave implemented these functions with the Oracle behavior in Orafce [1]\nwith a function that checks the modifiers used. This patch doesn't mimic\nthe Oracle behavior, it use the PostgreSQL behavior with regexp, the one\nused by regex_replace() and regex_matches(). All regexp modifiers can be\nused.\n\n\n[1] https://github.com/orafce/orafce/blob/master/orafce--3.14--3.15.sql\n\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Sun, 21 Mar 2021 17:40:45 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 21/03/2021 à 15:53, Tom Lane a écrit :\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> If this turns out to be a case of \"attached the wrong patch, here's\n>> the one that does implement foo_regex functions!\" then I reserve an\n>> objection to that. :)\n>>\n\nAnd the patch renamed.",
"msg_date": "Sun, 21 Mar 2021 17:46:47 +0100",
"msg_from": "Gilles Darold <gillesdarold@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Gilles Darold <gillesdarold@gmail.com> writes:\n> [ v4-0001-regexp-foo-functions.patch ]\n\nI started to work through this and was distressed to realize that\nit's trying to redefine regexp_replace() in an incompatible way.\nWe already have\n\nregression=# \\df regexp_replace\n List of functions\n Schema | Name | Result data type | Argument data types | Type \n------------+----------------+------------------+------------------------+------\n pg_catalog | regexp_replace | text | text, text, text | func\n pg_catalog | regexp_replace | text | text, text, text, text | func\n(2 rows)\n\nThe patch proposes to add (among other alternatives)\n\n+{ oid => '9608', descr => 'replace text using regexp',\n+ proname => 'regexp_replace', prorettype => 'text',\n+ proargtypes => 'text text text int4', prosrc => 'textregexreplace_extended_no_occurrence' },\n\nwhich is going to be impossibly confusing for both humans and machines.\nI don't think we should go there. Even if you managed to construct\nexamples that didn't result in \"ambiguous function\" failures, that\ndoesn't mean that ordinary mortals won't get bit that way.\n\nI'm inclined to just drop the regexp_replace additions. I don't think\nthat the extra parameters Oracle provides here are especially useful.\nThey're definitely not useful enough to justify creating compatibility\nhazards for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Jul 2021 15:56:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "Le 26/07/2021 à 21:56, Tom Lane a écrit :\n> Gilles Darold <gillesdarold@gmail.com> writes:\n>> [ v4-0001-regexp-foo-functions.patch ]\n> I started to work through this and was distressed to realize that\n> it's trying to redefine regexp_replace() in an incompatible way.\n> We already have\n>\n> regression=# \\df regexp_replace\n> List of functions\n> Schema | Name | Result data type | Argument data types | Type \n> ------------+----------------+------------------+------------------------+------\n> pg_catalog | regexp_replace | text | text, text, text | func\n> pg_catalog | regexp_replace | text | text, text, text, text | func\n> (2 rows)\n>\n> The patch proposes to add (among other alternatives)\n>\n> +{ oid => '9608', descr => 'replace text using regexp',\n> + proname => 'regexp_replace', prorettype => 'text',\n> + proargtypes => 'text text text int4', prosrc => 'textregexreplace_extended_no_occurrence' },\n>\n> which is going to be impossibly confusing for both humans and machines.\n> I don't think we should go there. Even if you managed to construct\n> examples that didn't result in \"ambiguous function\" failures, that\n> doesn't mean that ordinary mortals won't get bit that way.\n>\n> I'm inclined to just drop the regexp_replace additions. I don't think\n> that the extra parameters Oracle provides here are especially useful.\n> They're definitely not useful enough to justify creating compatibility\n> hazards for.\n\n\nI would not say that being able to replace the Nth occurrence of a\npattern matching is not useful but i agree that this is not a common\ncase with replacement. Both Oracle [1] and IBM DB2 [2] propose this form\nand I have though that we can not have compatibility issues because of\nthe different data type at the 4th parameter. Anyway, maybe we can just\nrename the function even if I would prefer that regexp_replace() be\nextended. 
For example:\n\n\n regexp_replace(source, pattern, replacement [, flags ]);\n\n regexp_substitute(source, pattern, replacement [, position ] [,\noccurrence ] [, flags ]);\n\n\nof course with only 3 parameters the two functions are the same.\n\n\nWhat do you think about the renaming proposal instead of simply drop the\nextended form of the function?\n\n\nBest regards,\n\n\n[1] https://docs.oracle.com/database/121/SQLRF/functions163.htm#SQLRF06302\n\n[2] https://www.ibm.com/docs/en/db2oc?topic=functions-regexp-replace\n\n\n-- \nGilles Darold\nhttp://www.darold.net/\n\n\n\n\n",
"msg_date": "Tue, 27 Jul 2021 11:38:36 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> Le 26/07/2021 à 21:56, Tom Lane a écrit :\n>> I'm inclined to just drop the regexp_replace additions. I don't think\n>> that the extra parameters Oracle provides here are especially useful.\n>> They're definitely not useful enough to justify creating compatibility\n>> hazards for.\n\n> I would not say that being able to replace the Nth occurrence of a\n> pattern matching is not useful but i agree that this is not a common\n> case with replacement. Both Oracle [1] and IBM DB2 [2] propose this form\n> and I have though that we can not have compatibility issues because of\n> the different data type at the 4th parameter.\n\nWell, here's an example of the potential issues:\n\nregression=# create function rr(text,text,text,text) returns text\nregression-# language sql as $$select 'text'$$;\nCREATE FUNCTION\nregression=# create function rr(text,text,text,int4) returns text\nlanguage sql as $$select 'int4'$$;\nCREATE FUNCTION\nregression=# select rr('a','b','c','d');\n rr \n------\n text\n(1 row)\n\nregression=# select rr('a','b','c',42);\n rr \n------\n int4\n(1 row)\n\nSo far so good, but:\n\nregression=# prepare rr as select rr('a','b','c',$1);\nPREPARE\nregression=# execute rr(12); \n rr \n------\n text\n(1 row)\n\nSo somebody trying to use the 4-parameter Oracle form from, say, JDBC\nwould get bit if they were sloppy about specifying parameter types.\n\nThe one saving grace is that digits aren't valid regexp flags,\nso the outcome would be something like\n\nregression=# select regexp_replace('a','b','c','12');\nERROR: invalid regular expression option: \"1\"\n\nwhich'd be less difficult to debug than silent misbehavior.\nConversely, if you thought you were passing flags but it somehow\ngot interpreted as a start position, that would fail too:\n\nregression=# prepare rri as select rr('a','b','c', $1::int);\nPREPARE\nregression=# execute rri('gi');\nERROR: invalid input syntax for type integer: \"gi\"\nLINE 
1: execute rri('gi');\n ^\n\nStill, I bet a lot that we'd see periodic bug reports complaining\nthat it doesn't work.\n\n> Anyway, maybe we can just\n> rename the function even if I would prefer that regexp_replace() be\n> extended. For example:\n> regexp_replace(source, pattern, replacement [, flags ]);\n> regexp_substitute(source, pattern, replacement [, position ] [,\n> occurrence ] [, flags ]);\n\nHmm. Of course the entire selling point of this patch seems to be\nbug-compatibility with Oracle, so using different names is largely\ndefeating the point :-(\n\nMaybe we should just hold our noses and do it. The point that\nyou'd get a recognizable failure if the wrong function were chosen\nreassures me a little bit. We've seen a lot of cases where this\nsort of ambiguity results in the system just silently doing something\ndifferent from what you expected, and I was afraid that that could\nhappen here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Jul 2021 17:38:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "I've been working through this patch, and trying to verify\ncompatibility against Oracle and DB2, and I see some points that need\ndiscussion or at least recording for the archives.\n\n* In Oracle, while the documentation for regexp_instr says that\nreturn_option should only be 0 or 1, experimentation with sqlfiddle\nshows that any nonzero value is silently treated as 1. The patch\nraises an error for other values, which I think is a good idea.\n(IBM's docs say that DB2 raises an error too, though I can't test\nthat.) We don't need to be bug-compatible to that extent.\n\n* What should happen when the subexpression/capture group number of\nregexp_instr or regexp_substr exceeds the number of parenthesized\nsubexpressions of the regexp? Oracle silently returns a no-match\nresult (0 or NULL), as does this patch. However, IBM's docs say\nthat DB2 raises an error. I'm inclined to think that this is\nlikewise taking bug-compatibility too far, and that we should\nraise an error like DB2. There are clearly cases where throwing\nan error would help debug a faulty call, while I'm less clear on\na use-case where not throwing an error would be useful.\n\n* IBM's docs say that both regexp_count and regexp_like have\narguments \"string, pattern [, start] [, flags]\" --- that is,\neach of start and flags can be independently specified or omitted.\nThe patch follows Oracle, which has no start option for \nregexp_like, and where you can't write flags for regexp_count\nwithout writing start. This is fine by me, because doing these\nlike DB2 would introduce the same which-argument-is-this issues\nas we're being forced to cope with for regexp_replace. I don't\nthink we need to accept ambiguity in these cases too. But it's\nworth memorializing this decision in the thread.\n\n* The patch has most of these functions silently ignoring the 'g'\nflag, but I think they should raise errors instead. Oracle doesn't\naccept a 'g' flag for these, so why should we? 
The only case where\nthat logic doesn't hold is regexp_replace, because depending on which\nsyntax you use the 'g' flag might or might not be meaningful. So\nfor regexp_replace, I'd vote for silently ignoring 'g' if the\noccurrence-number parameter is given, while honoring it if not.\n\nI've already made changes in my local copy per the last item,\nbut I've not done anything about throwing errors for out-of-range\nsubexpression numbers. Anybody have an opinion about that one?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Aug 2021 13:23:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "Le 30/07/2021 à 23:38, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> Le 26/07/2021 à 21:56, Tom Lane a écrit :\n>>> I'm inclined to just drop the regexp_replace additions. I don't think\n>>> that the extra parameters Oracle provides here are especially useful.\n>>> They're definitely not useful enough to justify creating compatibility\n>>> hazards for.\n>> I would not say that being able to replace the Nth occurrence of a\n>> pattern matching is not useful but i agree that this is not a common\n>> case with replacement. Both Oracle [1] and IBM DB2 [2] propose this form\n>> and I have though that we can not have compatibility issues because of\n>> the different data type at the 4th parameter.\n> Well, here's an example of the potential issues:\n>\n> [...]\n\n\nThanks for pointing me this case, I did not think that the prepared\nstatement could lead to this confusion.\n\n\n>> Anyway, maybe we can just\n>> rename the function even if I would prefer that regexp_replace() be\n>> extended. For example:\n>> regexp_replace(source, pattern, replacement [, flags ]);\n>> regexp_substitute(source, pattern, replacement [, position ] [,\n>> occurrence ] [, flags ]);\n> Hmm. Of course the entire selling point of this patch seems to be\n> bug-compatibility with Oracle, so using different names is largely\n> defeating the point :-(\n>\n> Maybe we should just hold our noses and do it. The point that\n> you'd get a recognizable failure if the wrong function were chosen\n> reassures me a little bit. 
We've seen a lot of cases where this\n> sort of ambiguity results in the system just silently doing something\n> different from what you expected, and I was afraid that that could\n> happen here.\n\n\nI join a new version of the patch that include a check of the option\nparameter in the basic form of regexp_replace() and return an error in\nambiguous cases.\n\n\n PREPARE rr AS SELECT regexp_replace('healthy, wealthy, and\n wise','(\\w+)thy', '\\1ish', $1);\n EXECUTE rr(1);\n ERROR: ambiguous use of the option parameter in regex_replace(),\n value: 1\n HINT: you might set the occurrence parameter to force the use of\n the extended form of regex_replace()\n\n\nThis is done by checking if the option parameter value is an integer and\nthrow the error in this case. I don't think of anything better.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Sun, 1 Aug 2021 21:22:41 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 01/08/2021 à 19:23, Tom Lane a écrit :\n> I've been working through this patch, and trying to verify\n> compatibility against Oracle and DB2, and I see some points that need\n> discussion or at least recording for the archives.\n>\n> * In Oracle, while the documentation for regexp_instr says that\n> return_option should only be 0 or 1, experimentation with sqlfiddle\n> shows that any nonzero value is silently treated as 1. The patch\n> raises an error for other values, which I think is a good idea.\n> (IBM's docs say that DB2 raises an error too, though I can't test\n> that.) We don't need to be bug-compatible to that extent.\n>\n> * What should happen when the subexpression/capture group number of\n> regexp_instr or regexp_substr exceeds the number of parenthesized\n> subexpressions of the regexp? Oracle silently returns a no-match\n> result (0 or NULL), as does this patch. However, IBM's docs say\n> that DB2 raises an error. I'm inclined to think that this is\n> likewise taking bug-compatibility too far, and that we should\n> raise an error like DB2. There are clearly cases where throwing\n> an error would help debug a faulty call, while I'm less clear on\n> a use-case where not throwing an error would be useful.\n>\n> * IBM's docs say that both regexp_count and regexp_like have\n> arguments \"string, pattern [, start] [, flags]\" --- that is,\n> each of start and flags can be independently specified or omitted.\n> The patch follows Oracle, which has no start option for \n> regexp_like, and where you can't write flags for regexp_count\n> without writing start. This is fine by me, because doing these\n> like DB2 would introduce the same which-argument-is-this issues\n> as we're being forced to cope with for regexp_replace. I don't\n> think we need to accept ambiguity in these cases too. 
But it's\n> worth memorializing this decision in the thread.\n>\n> * The patch has most of these functions silently ignoring the 'g'\n> flag, but I think they should raise errors instead. Oracle doesn't\n> accept a 'g' flag for these, so why should we? The only case where\n> that logic doesn't hold is regexp_replace, because depending on which\n> syntax you use the 'g' flag might or might not be meaningful. So\n> for regexp_replace, I'd vote for silently ignoring 'g' if the\n> occurrence-number parameter is given, while honoring it if not.\n>\n> I've already made changes in my local copy per the last item,\n> but I've not done anything about throwing errors for out-of-range\n> subexpression numbers. Anybody have an opinion about that one?\n\n\nI thought about this while I was implementing the functions and chose to\nnot throw an error because of the Oracle behavior and also with others\nregular expression implementation. For example in Perl there is no error:\n\n\n $ perl -e '$str=\"hello world\"; $str =~ s/(l)/$20/; print \"$str\\n\";'\n helo world\n\n\nUsually a regular expression is always tested by its creator to be sure\nthat this the right one and that it does what is expected. But I agree\nthat it could help the writer to debug its RE.\n\n\nAlso if I recall well Oracle and DB2 limit the number of capture groups\nback references from \\1 to \\9 for Oracle and \\0 to \\9 for DB2. I have\nchosen to not apply this limit, I don't see the interest of such a\nlimitation.\n\n\n\n-- \nGilles Darold\nhttp://www.darold.net/",
"msg_date": "Sun, 1 Aug 2021 21:48:00 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> [ v5-0001-regexp-foo-functions.patch ]\n\nI've gone through this whole patch now, and found quite a lot that I did\nnot like. In no particular order:\n\n* Wrapping parentheses around the user's regexp doesn't work. It can\nturn an invalid regexp into a valid one: for example 'a)(b' should draw\na syntax error. With this patch, no error would be thrown, but the\n\"outer\" parens wouldn't do what you expected. Worse, it can turn a\nvalid regexp into an invalid one: the metasyntax options described in\n9.7.3.4 only work at the start of the regexp. So we have to handle\nwhole-regexp cases honestly rather than trying to turn them into an\ninstance of the parenthesized-subexpression case.\n\n* You did a lot of things quite inefficiently, apparently to avoid\ntouching any existing code. I think it's better to extend\nsetup_regexp_matches() and replace_text_regexp() a little bit so that\nthey can support the behaviors these new functions need. In both of\nthem, it's absolutely trivial to allow a search start position to be\npassed in; and it doesn't take much to teach replace_text_regexp()\nto replace only the N'th match.\n\n* Speaking of N'th, there is not much of anything that I like\nabout Oracle's terminology for the function arguments, and I don't\nthink we ought to adopt it. If we're documenting the functions as\nprocessing the \"N'th match\", it seems to me to be natural to call\nthe parameter \"N\" not \"occurrence\". Speaking of the \"occurrence'th\noccurrence\" is just silly, not to mention long and easy to misspell.\nLikewise, \"position\" is a horribly vague term for the search start\nposition; it could be interpreted to mean several other things.\n\"start\" seems much better. \"return_opt\" is likewise awfully unclear.\nI went with \"endoption\" below, though I could be talked into something\nelse. The only one of Oracle's choices that I like is \"subexpr\" for\nsubexpression number ... 
but you went with DB2's rather vague \"group\"\ninstead. I don't want to use their \"capture group\" terminology,\nbecause that appears nowhere else in our documentation. Our existing\nterminology is \"parenthesized subexpression\", which seems fine to me\n(and also agrees with Oracle's docs).\n\n* I spent a lot of time on the docs too. A lot of the syntax specs\nwere wrong (where you put the brackets matters), many of the examples\nseemed confusingly overcomplicated, and the text explanations needed\ncopy-editing.\n\n* Also, the regression tests seemed misguided. This patch is not\nresponsible for testing the regexp engine as such; we have tests\nelsewhere that do that. So I don't think we need complex regexps\nhere. We just need to verify that the parameters of these functions\nact properly, and check their error cases. That can be done much\nmore quickly and straightforwardly than what you had.\n\n\nSo here's a revised version that I like better. I think this\nis pretty nearly committable, aside from the question of whether\na too-large subexpression number should be an error or not.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 01 Aug 2021 19:21:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "I wrote:\n> ... aside from the question of whether\n> a too-large subexpression number should be an error or not.\n\nOh ... poking around some more, I noticed a very nearby precedent.\nregexp_replace's replacement string can include \\1 to \\9 to insert\nthe substring matching the N'th parenthesized subexpression. But\nif there is no such subexpression, you don't get an error, just\nan empty insertion. So that seems like an argument for not\nthrowing an error for an out-of-range subexpr parameter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Aug 2021 21:02:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "Le 02/08/2021 à 01:21, Tom Lane a écrit :\n> Gilles Darold <gilles@darold.net> writes:\n>> [ v5-0001-regexp-foo-functions.patch ]\n> I've gone through this whole patch now, and found quite a lot that I did\n> not like. In no particular order:\n>\n> * Wrapping parentheses around the user's regexp doesn't work. It can\n> turn an invalid regexp into a valid one: for example 'a)(b' should draw\n> a syntax error. With this patch, no error would be thrown, but the\n> \"outer\" parens wouldn't do what you expected. Worse, it can turn a\n> valid regexp into an invalid one: the metasyntax options described in\n> 9.7.3.4 only work at the start of the regexp. So we have to handle\n> whole-regexp cases honestly rather than trying to turn them into an\n> instance of the parenthesized-subexpression case.\n>\n> * You did a lot of things quite inefficiently, apparently to avoid\n> touching any existing code. I think it's better to extend\n> setup_regexp_matches() and replace_text_regexp() a little bit so that\n> they can support the behaviors these new functions need. In both of\n> them, it's absolutely trivial to allow a search start position to be\n> passed in; and it doesn't take much to teach replace_text_regexp()\n> to replace only the N'th match.\n>\n> * Speaking of N'th, there is not much of anything that I like\n> about Oracle's terminology for the function arguments, and I don't\n> think we ought to adopt it. If we're documenting the functions as\n> processing the \"N'th match\", it seems to me to be natural to call\n> the parameter \"N\" not \"occurrence\". Speaking of the \"occurrence'th\n> occurrence\" is just silly, not to mention long and easy to misspell.\n> Likewise, \"position\" is a horribly vague term for the search start\n> position; it could be interpreted to mean several other things.\n> \"start\" seems much better. \"return_opt\" is likewise awfully unclear.\n> I went with \"endoption\" below, though I could be talked into something\n> else. 
The only one of Oracle's choices that I like is \"subexpr\" for\n> subexpression number ... but you went with DB2's rather vague \"group\"\n> instead. I don't want to use their \"capture group\" terminology,\n> because that appears nowhere else in our documentation. Our existing\n> terminology is \"parenthesized subexpression\", which seems fine to me\n> (and also agrees with Oracle's docs).\n>\n> * I spent a lot of time on the docs too. A lot of the syntax specs\n> were wrong (where you put the brackets matters), many of the examples\n> seemed confusingly overcomplicated, and the text explanations needed\n> copy-editing.\n>\n> * Also, the regression tests seemed misguided. This patch is not\n> responsible for testing the regexp engine as such; we have tests\n> elsewhere that do that. So I don't think we need complex regexps\n> here. We just need to verify that the parameters of these functions\n> act properly, and check their error cases. That can be done much\n> more quickly and straightforwardly than what you had.\n>\n>\n> So here's a revised version that I like better. I think this\n> is pretty nearly committable, aside from the question of whether\n> a too-large subexpression number should be an error or not.\n\n\nThanks a lot for the patch improvement and the guidance. I have read the\npatch and I agree with your choices I think I was too much trying to\nmimic the oraclisms. I don't think we should take care of the too-large\nsubexpression number, the regexp writer should always test its regular\nexpression and also this will not prevent him to chose the wrong capture\ngroup number but just a non existing one.\n\n\nBest regards,\n\n-- \nGilles Darold\n\n\n\n\n",
"msg_date": "Mon, 2 Aug 2021 23:22:04 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 02/08/2021 à 23:22, Gilles Darold a écrit :\n> Le 02/08/2021 à 01:21, Tom Lane a écrit :\n>> Gilles Darold <gilles@darold.net> writes:\n>>> [ v5-0001-regexp-foo-functions.patch ]\n>> I've gone through this whole patch now, and found quite a lot that I did\n>> not like. In no particular order:\n>>\n>> * Wrapping parentheses around the user's regexp doesn't work. It can\n>> turn an invalid regexp into a valid one: for example 'a)(b' should draw\n>> a syntax error. With this patch, no error would be thrown, but the\n>> \"outer\" parens wouldn't do what you expected. Worse, it can turn a\n>> valid regexp into an invalid one: the metasyntax options described in\n>> 9.7.3.4 only work at the start of the regexp. So we have to handle\n>> whole-regexp cases honestly rather than trying to turn them into an\n>> instance of the parenthesized-subexpression case.\n>>\n>> * You did a lot of things quite inefficiently, apparently to avoid\n>> touching any existing code. I think it's better to extend\n>> setup_regexp_matches() and replace_text_regexp() a little bit so that\n>> they can support the behaviors these new functions need. In both of\n>> them, it's absolutely trivial to allow a search start position to be\n>> passed in; and it doesn't take much to teach replace_text_regexp()\n>> to replace only the N'th match.\n>>\n>> * Speaking of N'th, there is not much of anything that I like\n>> about Oracle's terminology for the function arguments, and I don't\n>> think we ought to adopt it. If we're documenting the functions as\n>> processing the \"N'th match\", it seems to me to be natural to call\n>> the parameter \"N\" not \"occurrence\". Speaking of the \"occurrence'th\n>> occurrence\" is just silly, not to mention long and easy to misspell.\n>> Likewise, \"position\" is a horribly vague term for the search start\n>> position; it could be interpreted to mean several other things.\n>> \"start\" seems much better. 
\"return_opt\" is likewise awfully unclear.\n>> I went with \"endoption\" below, though I could be talked into something\n>> else. The only one of Oracle's choices that I like is \"subexpr\" for\n>> subexpression number ... but you went with DB2's rather vague \"group\"\n>> instead. I don't want to use their \"capture group\" terminology,\n>> because that appears nowhere else in our documentation. Our existing\n>> terminology is \"parenthesized subexpression\", which seems fine to me\n>> (and also agrees with Oracle's docs).\n>>\n>> * I spent a lot of time on the docs too. A lot of the syntax specs\n>> were wrong (where you put the brackets matters), many of the examples\n>> seemed confusingly overcomplicated, and the text explanations needed\n>> copy-editing.\n>>\n>> * Also, the regression tests seemed misguided. This patch is not\n>> responsible for testing the regexp engine as such; we have tests\n>> elsewhere that do that. So I don't think we need complex regexps\n>> here. We just need to verify that the parameters of these functions\n>> act properly, and check their error cases. That can be done much\n>> more quickly and straightforwardly than what you had.\n>>\n>>\n>> So here's a revised version that I like better. I think this\n>> is pretty nearly committable, aside from the question of whether\n>> a too-large subexpression number should be an error or not.\n>\n> Thanks a lot for the patch improvement and the guidance. I have read the\n> patch and I agree with your choices I think I was too much trying to\n> mimic the oraclisms. I don't think we should take care of the too-large\n> subexpression number, the regexp writer should always test its regular\n> expression and also this will not prevent him to chose the wrong capture\n> group number but just a non existing one.\n\n\nActually I just found that the regexp_like() function doesn't support \nthe start parameter which is something we should support. 
I saw that \nOracle does not support it but DB2 does, and I think we should also support \nit. I will post a new version of the patch once it is done.\n\n\nBest regards,\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 11:45:09 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 03/08/2021 à 11:45, Gilles Darold a écrit :\n> Actually I just found that the regexp_like() function doesn't support \n> the start parameter which is something we should support. I saw that \n> Oracle does not support it but DB2 does and I think we should also \n> support it. I will post a new version of the patch once it is done.\n\n\nHere is a new version of the patch that adds the start parameter to \nthe regexp_like() function, but while adding support for this parameter it \nbecomes less obvious to me that we should implement it. However, feel \nfree to not use this version if you think that adding the start \nparameter has no real interest.\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Tue, 3 Aug 2021 13:26:48 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "On 8/3/21 1:26 PM, Gilles Darold wrote:\n> Le 03/08/2021 à 11:45, Gilles Darold a écrit :\n>> Actually I just found that the regexp_like() function doesn't support \n>> the start parameter which is something we should support. I saw that \n>> Oracle does not support it but DB2 does and I think we should also \n>> support it. I will post a new version of the patch once it is done.\n> \n\n+1\n\nI for one am in favor of this 'start'-argument addition. Slightly \nharder usage, but more precise manipulation.\n\n\nErik Rijkers\n\n\n> \n> Here is a new version of the patch that adds the start parameter to \n> regexp_like() function but while I'm adding support for this parameter it \n> becomes less obvious to me that we should implement it. However feel \n> free to not use this version if you think that adding the start \n> parameter has no real interest.\n> \n> \n> Best regards,\n> \n\n\n",
"msg_date": "Tue, 3 Aug 2021 15:02:43 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Erik Rijkers <er@xs4all.nl> writes:\n> On 8/3/21 1:26 PM, Gilles Darold wrote:\n>> Le 03/08/2021 à 11:45, Gilles Darold a écrit :\n>>> Actually I just found that the regexp_like() function doesn't support \n>>> the start parameter which is something we should support. I saw that \n>>> Oracle do not support it but DB2 does and I think we should also \n>>> support it. I will post a new version of the patch once it is done.\n\n> +1\n\n> I for one am in favor of this 'start'-argument addition. Slightly \n> harder usage, but more precise manipulation.\n\nAs I said upthread, I am *not* in favor of making those DB2 additions.\nWe do not need to create ambiguities around those functions like the\none we have for regexp_replace. If Oracle doesn't have those options,\nwhy do we need them?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Aug 2021 09:39:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "Le 03/08/2021 à 15:39, Tom Lane a écrit :\n> Erik Rijkers <er@xs4all.nl> writes:\n>> On 8/3/21 1:26 PM, Gilles Darold wrote:\n>>> Le 03/08/2021 à 11:45, Gilles Darold a écrit :\n>>>> Actually I just found that the regexp_like() function doesn't support\n>>>> the start parameter which is something we should support. I saw that\n>>>> Oracle does not support it but DB2 does and I think we should also\n>>>> support it. I will post a new version of the patch once it is done.\n>> +1\n>> I for one am in favor of this 'start'-argument addition. Slightly\n>> harder usage, but more precise manipulation.\n> As I said upthread, I am *not* in favor of making those DB2 additions.\n> We do not need to create ambiguities around those functions like the\n> one we have for regexp_replace. If Oracle doesn't have those options,\n> why do we need them?\n\n\nSorry, I have missed that, but I'm fine with this implementation, so let's \nkeep the v6 version of the patch and drop this one.\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Tue, 3 Aug 2021 16:11:24 +0200",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Gilles Darold <gilles@darold.net> writes:\n> Sorry, I have missed that, but I'm fine with this implementation, so let's \n> keep the v6 version of the patch and drop this one.\n\nPushed, then. There's still lots of time to tweak the behavior of course.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Aug 2021 13:10:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr,\n regexp_substr and regexp_replace"
},
{
"msg_contents": "On 03.08.21 19:10, Tom Lane wrote:\n> Gilles Darold <gilles@darold.net> writes:\n>> Sorry I have missed that, but I'm fine with this implemenation so let's\n>> keep the v6 version of the patch and drop this one.\n> \n> Pushed, then. There's still lots of time to tweak the behavior of course.\n\nI have a documentation follow-up to this. It seems that these new \nfunctions are almost a de facto standard, whereas the SQL-standard \nfunctions are not implemented anywhere. I propose the attached patch to \nupdate the subsection in the pattern-matching section to give more \ndetail on this and suggest equivalent functions among these newly added \nones. What do you think?",
"msg_date": "Wed, 15 Dec 2021 13:41:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "Le 15/12/2021 à 13:41, Peter Eisentraut a écrit :\n> On 03.08.21 19:10, Tom Lane wrote:\n>> Gilles Darold <gilles@darold.net> writes:\n>>> Sorry, I have missed that, but I'm fine with this implementation, so let's\n>>> keep the v6 version of the patch and drop this one.\n>>\n>> Pushed, then. There's still lots of time to tweak the behavior of \n>> course.\n>\n> I have a documentation follow-up to this. It seems that these new \n> functions are almost a de facto standard, whereas the SQL-standard \n> functions are not implemented anywhere. I propose the attached patch \n> to update the subsection in the pattern-matching section to give more \n> detail on this and suggest equivalent functions among these newly \n> added ones. What do you think?\n\n\nI'm in favor of applying your changes to the documentation. It is a good thing \nto clarify the relation between this implementation of the regexp_* \nfunctions and the SQL standard.\n\n-- \nGilles Darold\n\n\n\n\n",
"msg_date": "Wed, 15 Dec 2021 14:15:04 +0100",
"msg_from": "Gilles Darold <gilles@darold.net>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
},
{
"msg_contents": "On 15.12.21 14:15, Gilles Darold wrote:\n> Le 15/12/2021 à 13:41, Peter Eisentraut a écrit :\n>> On 03.08.21 19:10, Tom Lane wrote:\n>>> Gilles Darold <gilles@darold.net> writes:\n>>>> Sorry I have missed that, but I'm fine with this implemenation so let's\n>>>> keep the v6 version of the patch and drop this one.\n>>>\n>>> Pushed, then. There's still lots of time to tweak the behavior of \n>>> course.\n>>\n>> I have a documentation follow-up to this. It seems that these new \n>> functions are almost a de facto standard, whereas the SQL-standard \n>> functions are not implemented anywhere. I propose the attached patch \n>> to update the subsection in the pattern-matching section to give more \n>> detail on this and suggest equivalent functions among these newly \n>> added ones. What do you think?\n> \n> \n> I'm in favor to apply your changes to documentation. It is a good thing \n> to precise the relation between this implementation of the regex_* \n> functions and the SQL stardard.\n\nok, done\n\n\n",
"msg_date": "Mon, 20 Dec 2021 10:43:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] proposal for regexp_count, regexp_instr, regexp_substr\n and regexp_replace"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nCurrently, postgres increments the command id in the RI trigger every time we insert into a referencing table (fk relation):\nRI_FKey_check-> ri_PerformCheck->SPI_execute_snapshot-> _SPI_execute_plan-> CommandCounterIncrement\n\nIt can be a blocker for supporting \"parallel insert into a referencing table\", because we do not allow incrementing the command id in parallel mode.\n\nSo, I was wondering if we can avoid incrementing the command id in some cases when executing INSERT.\n\nAs far as I can see, it’s only necessary to increment the command id when the INSERT command modifies the referenced table.\nSince an INSERT command only has one target table, modifications on other tables can only happen in the following cases:\n\n1) it has a modifying CTE which modifies the referenced table\n2) it has a modifying function which modifies the referenced table.\n(If I missed something please let me know)\n\nSince the above two cases are not supported in parallel mode (parallel unsafe),\nIMO it seems it’s not necessary to increment the command id in parallel mode,\nso we can just skip CommandCounterIncrement when in parallel mode.\n\nWith this change, we can smoothly support \"parallel insert into a referencing table\", which is desirable.\n\nPart of what I plan to change is as follows:\n-----------\n RI_FKey_check_ins(PG_FUNCTION_ARGS)\n {\n+\tbool needCCI = true;\n+\n \t/* Check that this is a valid trigger call on the right time and event. */\n \tri_CheckTrigger(fcinfo, \"RI_FKey_check_ins\", RI_TRIGTYPE_INSERT);\n\n+\t/*\n+\t * We do not need to increment the command counter\n+\t * in parallel mode, because any other modifications\n+\t * other than the insert event itself are parallel unsafe.\n+\t * So, there is no chance to modify the pk relation.\n+\t */\n+\tif (IsInParallelMode())\n+\t\tneedCCI = false;\n+\n \t/* Share code with UPDATE case. 
*/\n-\treturn RI_FKey_check((TriggerData *) fcinfo->context);\n+\treturn RI_FKey_check((TriggerData *) fcinfo->context, needCCI);\n...\n-----------\n\nThoughts ?\n\nBest regards,\nhouzj\n\n\n",
"msg_date": "Wed, 3 Mar 2021 10:30:55 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "Excuse me for asking probably stupid questions...\n\nFrom: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\n> As far as I can see, it’s only necessary to increment command id when the\n> INSERT command modified the referenced table.\n\nWhy do we have to increment the command ID when the INSERT's target table is a referenced table?\n\n\n> And INSERT command only have one target table, the modification on other\n> tables can happen in the following cases.\n> \n> 1) has modifyingcte which modifies the referenced table\n> 2) has modifying function which modifies the referenced table.\n> (If I missed something please let me know)\n\nAlso, why do we need CCI in these cases? What kind of problem would happen if we don't do CCI?\n\n\n> Since the above two cases are not supported in parallel mode(parallel unsafe).\n> IMO, It seems it’s not necessary to increment command id in parallel mode, we\n> can just skip commandCounterIncrement when in parallel mode.\n> \n> +\t/*\n> +\t * We do not need to increment the command counter\n> +\t * in parallel mode, because any other modifications\n> +\t * other than the insert event itself are parallel unsafe.\n> +\t * So, there is no chance to modify the pk relation.\n> +\t */\n> +\tif (IsInParallelMode())\n> +\t\tneedCCI = false;\n\nI'm worried about having this dependency in RI check, because the planner may allow parallel INSERT in these cases in the future.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Thu, 4 Mar 2021 02:52:11 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "Hi,\n\n> Why do we have to increment the command ID when the INSERT's target table\n> is a referenced table?\n\nPlease refer to the explanation below.\n\n> > And INSERT command only have one target table, the modification on\n> > other tables can happen in the following cases.\n> >\n> > 1) has modifyingcte which modifies the referenced table\n> > 2) has modifying function which modifies the referenced table.\n> > (If I missed something please let me know)\n> \n> Also, why do we need CCI in these cases? What kind of problem would\n> happen if we don't do CCI?\n \n From the wiki[1], CCI is to let statements can not see the rows they modify.\n\nHere is an example of the case 1):\n(Note table referenced and referencing are both empty)\n-----\npostgres=# with cte as (insert into referenced values(1)) insert into referencing values(1);\n-----\nThe INSERT here will first modify the referenced table(pk table) and then modify the referencing table.\nWhen modifying the referencing table, it has to check if the tuple to be insert exists in referenced table.\nAt this moment, CCI can let the modification on referenced in WITH visible when check the existence of the tuple.\nIf we do not CCI here, postgres will report an constraint error because it can not find the tuple in referenced table.\n\nWhat do you think?\n\n\n> > Since the above two cases are not supported in parallel mode(parallel\n> unsafe).\n> > IMO, It seems it’s not necessary to increment command id in parallel\n> > mode, we can just skip commandCounterIncrement when in parallel mode.\n> >\n> > +\t/*\n> > +\t * We do not need to increment the command counter\n> > +\t * in parallel mode, because any other modifications\n> > +\t * other than the insert event itself are parallel unsafe.\n> > +\t * So, there is no chance to modify the pk relation.\n> > +\t */\n> > +\tif (IsInParallelMode())\n> > +\t\tneedCCI = false;\n\n> I'm worried about having this dependency in RI check, because the planner may 
allow parallel INSERT in these cases in the future.\n\nIf we support parallel insert that can have other modifications in the future,\nI think we will also be able to share or increment the command ID in parallel workers at that time.\nAnd it seems we can remove this dependency at that time.\nHow about adding some more comments about this to remind future developers?\n/*\n * If extra parallel modification is supported in the future, this dependency should be removed.\n */\n\n[1] https://wiki.postgresql.org/wiki/Developer_FAQ#What_is_CommandCounterIncrement.28.29.3F\n\nBest regards,\nhouzj\n\n\n\n",
"msg_date": "Thu, 4 Mar 2021 03:39:25 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "> From the wiki[1], CCI is to let statements can not see the rows they modify.\nSorry, a typo here: \"not\".\nI meant that CCI is to let statements see the rows they modify.\n\nBest regards,\nhouzj\n\n\n\n",
"msg_date": "Thu, 4 Mar 2021 03:41:39 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\n> From the wiki[1], CCI is to let statements can not see the rows they modify.\n> \n> Here is an example of the case 1):\n> (Note table referenced and referencing are both empty)\n> -----\n> postgres=# with cte as (insert into referenced values(1)) insert into\n> referencing values(1);\n> -----\n> The INSERT here will first modify the referenced table(pk table) and then\n> modify the referencing table.\n> When modifying the referencing table, it has to check if the tuple to be insert\n> exists in referenced table.\n> At this moment, CCI can let the modification on referenced in WITH visible\n> when check the existence of the tuple.\n> If we do not CCI here, postgres will report an constraint error because it can not\n> find the tuple in referenced table.\n\nAh, got it. Thank you. I'd regret it if Postgres had to give up parallel execution because of SQL statements like the above, which don't seem to be widely used.\n\n\n> > I'm worried about having this dependency in RI check, because the planner\n> may allow parallel INSERT in these cases in the future.\n> \n> If we support parallel insert that can have other modifications in the future, I\n> think we will also be able to share or increment command ID in parallel wokers\n> in that time.\n> And it seems we can remove this dependency in that time.\n> How about add some more comments about this to remind future developer.\n> /*\n> * If extra parallel modification is support in the future, this dependency should\n> be removed.\n> */\n\nAgreed.\n\nI'm excited to see PostgreSQL's parallel DML work in wider use cases and satisfy users' expectations.\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 06:33:16 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "> > > I'm worried about having this dependency in RI check, because the\n> > > planner\n> > may allow parallel INSERT in these cases in the future.\n> >\n> > If we support parallel insert that can have other modifications in the\n> > future, I think we will also be able to share or increment command ID\n> > in parallel wokers in that time.\n> > And it seems we can remove this dependency in that time.\n> > How about add some more comments about this to remind future\n> developer.\n> > /*\n> > * If extra parallel modification is support in the future, this\n> > dependency should be removed.\n> > */\n> \n> Agreed.\n> \n> I'm excited to see PostgreSQL's parallel DML work in wider use cases and\n> satisfy users' expectations.\n\nThanks!\nAttaching the first version of the patch, which avoids CCI in the RI trigger when inserting into a referencing table.\n\nBest regards,\nhouzj",
"msg_date": "Fri, 5 Mar 2021 06:57:06 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "> Attaching the first version patch which avoid CCI in RI trigger when insert into\n> referencing table.\n\nAfter some more thought on how to support parallel insert into an fk relation,\nit seems we do not have a cheap way to implement this feature.\nPlease see the explanation below:\n\nIn RI_FKey_check, postgres currently executes \"select xx for key share\" to check that the foreign key exists in the PK table.\nHowever, \"select for update/share\" is considered parallel unsafe. It may be dangerous to do this in parallel mode; we may want to change this.\n\nAnd also, \"select for update/share\" is considered \"not read only\", which will force readonly = false in _SPI_execute_plan.\nSo, it seems we have to completely change the implementation of RI_FKey_check.\n\nAt the same time, the \" simplifying foreign key/RI checks \" thread is trying to replace \"select xx for key share\" with index_beginscan()+table_tuple_lock() (I think it’s parallel safe).\nMaybe we can try to support parallel insert into an fk relation after the \" simplifying foreign key/RI checks \" patch is applied?\n\nBest regards,\nhouzj\n\n\n\n",
"msg_date": "Wed, 10 Mar 2021 02:24:58 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\n> After some more on how to support parallel insert into fk relation.\n> It seems we do not have a cheap way to implement this feature.\n> \n> In RI_FKey_check, Currently, postgres execute \"select xx for key share\" to\n> check that foreign key exists in PK table.\n> However \"select for update/share\" is considered as parallel unsafe. It may be\n> dangerous to do this in parallel mode, we may want to change this.\n\nHmm, I guess the parallel leader and workers can execute SELECT FOR KEY SHARE, if the parallelism infrastructure allows execution of SPI calls. The lock manager supports tuple locking in parallel leader and workers by the group locking. Also, the tuple locking doesn't require combo Cid, which is necessary for parallel UPDATE and DELETE.\n\nPerhaps the reason why SELECT FOR is treated as parallel-unsafe is that tuple locking modifies data pages to store lock information in the tuple header. But now, page modification is possible in parallel processes, so I think we can consider SELECT FOR as parallel-safe. (I may be too optimistic.)\n\n\n> And also, \"select for update/share\" is considered as \"not read only\" which will\n> force readonly = false in _SPI_execute_plan.\n\nread_only is used to do CCI. Can we arrange to skip CCI?\n\n\n> At the same time, \" simplifying foreign key/RI checks \" thread is trying to\n> replace \"select xx for key share\" with index_beginscan()+table_tuple_lock() (I\n> think it’s parallel safe).\n> May be we can try to support parallel insert fk relation after \" simplifying foreign\n> key/RI checks \" patch applied ?\n\nWhy do you think it's parallel safe?\n\nCan you try running parallel INSERT SELECT on the target table with FK and see if any problem happens?\n\nIf some problem occurs due to the tuple locking, I think we can work around it by avoiding tuple locking. 
That is, we make parallel INSERT SELECT lock the parent tables in exclusive mode so that the check tuples won't be deleted. Some people may not like this, but it's worth considering because parallel INSERT SELECT would not have to be run concurrently with short OLTP transactions. Anyway, tuple locking partly disturbs parallel INSERT speedup because it modifies pages in the parent tables and emits WAL.\n\nSurprisingly, Oracle doesn't support parallel INSERT SELECT on a table with FK as follows. SQL Server doesn't mention anything, so I guess it's supported. This is a good chance for PostgreSQL to exceed Oracle.\n\nhttps://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/types-parallelism.html#GUID-D4CFC1F2-44D3-4BE3-B5ED-6A309EB8BF06\n\nTable 8-1 Referential Integrity Restrictions\nDML Statement\tIssued on Parent\tIssued on Child\tSelf-Referential\nINSERT\t(Not applicable)\tNot parallelized\tNot parallelized\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Wed, 10 Mar 2021 08:24:42 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "> > After some more on how to support parallel insert into fk relation.\n> > It seems we do not have a cheap way to implement this feature.\n> >\n> > In RI_FKey_check, Currently, postgres execute \"select xx for key\n> > share\" to check that foreign key exists in PK table.\n> > However \"select for update/share\" is considered as parallel unsafe. It\n> > may be dangerous to do this in parallel mode, we may want to change this.\n> \n> Hmm, I guess the parallel leader and workers can execute SELECT FOR KEY\n> SHARE, if the parallelism infrastructure allows execution of SPI calls. The lock\n> manager supports tuple locking in parallel leader and workers by the group\n> locking. Also, the tuple locking doesn't require combo Cid, which is necessary\n> for parallel UPDATE and DELETE.\n> \n> Perhaps the reason why SELECT FOR is treated as parallel-unsafe is that tuple\n> locking modifies data pages to store lock information in the tuple header. But\n> now, page modification is possible in parallel processes, so I think we can\n> consider SELECT FOR as parallel-safe. (I may be too optimistic.)\n\nI think you are right.\nAfter reading the original parallel-safety check's commit message and README.tuplock, and having some discussions with the author,\nI think the reason why [SELECT FOR UPDATE/SHARE] is parallel unsafe is that [SELECT FOR] will call GetCurrentCommandId(true).\nGetCurrentCommandId(true) was not supported in parallel workers, but [SELECT FOR] needs a command ID to mark the change (lock info).\n\nFortunately, with Greg's parallel insert patch, we can use command IDs in parallel workers.\nSo, IMO, in the parallel insert case, the RI check is parallel safe.\n\n\n> > And also, \"select for update/share\" is considered as \"not read only\"\n> > which will force readonly = false in _SPI_execute_plan.\n> \n> read_only is used to do CCI. 
Can we arrange to skip CCI?\n\nYes, we can.\nCurrently, I am trying to add one parameter (needCCI) to SPI_execute_snapshot and _SPI_execute_plan to control CCI.\nI am still researching whether there is a more elegant way.\n\n> > At the same time, \" simplifying foreign key/RI checks \" thread is\n> > trying to replace \"select xx for key share\" with\n> > index_beginscan()+table_tuple_lock() (I think it’s parallel safe).\n> > May be we can try to support parallel insert fk relation after \"\n> > simplifying foreign key/RI checks \" patch applied ?\n> \n> Why do you think it's parallel safe?\n> \n> Can you try running parallel INSERT SELECT on the target table with FK and see\n> if any problem happens?\n\nI have tested this in various cases:\nAll the test results look good.\n* test different workers locking the same tuple.\n* test different workers locking different tuples.\n* test no lock.\n* test lock with concurrent update\n* test constraint error.\n\n \n> \n> Surprisingly, Oracle doesn't support parallel INSERT SELECT on a table with FK\n> as follows. SQL Server doesn't mention anything, so I guess it's supported.\n> This is a good chance for PostgreSQL to exceed Oracle.\n> \n> https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/types-parallelism.html#GUID-D4CFC1F2-44D3-4BE3-B5ED-6A309EB8BF06\n> \n> Table 8-1 Referential Integrity Restrictions\n> DML Statement\tIssued on Parent\tIssued on Child\n> \tSelf-Referential\n> INSERT\t(Not applicable)\tNot parallelized\tNot parallelized\n\nAh, that's a really good chance. 
\n\nTo support parallel insert into an FK relation,\nthere are two scenarios that need attention:\n1) the foreign key and primary key are on the same table (INSERT's target table).\n (referenced and referencing are the same table)\n2) the referenced and referencing tables are both partitions of INSERT's target table.\n(These cases are really rare, in my view.)\n\nIn these two cases, the referenced table could be modified during the INSERT and CCI is necessary,\nso I think we should treat these cases as parallel restricted while doing the safety check.\n\nAttaching a V1 patch that implements the above feature and passed the regression test.\n\nBest regards,\nhouzj",
"msg_date": "Thu, 11 Mar 2021 12:01:15 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "> To support parallel insert into FK relation.\n> There are two scenarios need attention.\n> 1) foreign key and primary key are on the same table(INSERT's target table).\n> (referenced and referencing are the same table)\n> 2) referenced and referencing table are both partition of INSERT's target table.\n> (These cases are really rare for me)\n> \n> In the two cases, the referenced table could be modified when INSERTing and\n> CCI is necessary, So, I think we should treat these cases as parallel restricted\n> while doing safety check.\n> \n> Attaching V1 patch that Implemented above feature and passed regression\n> test.\n\nAttaching a rebased patch based on HEAD.\n\nBest regards,\nhouzj",
"msg_date": "Tue, 16 Mar 2021 09:40:53 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 8:41 PM houzj.fnst@fujitsu.com <\nhouzj.fnst@fujitsu.com> wrote:\n>\n> > To support parallel insert into FK relation.\n> > There are two scenarios need attention.\n> > 1) foreign key and primary key are on the same table(INSERT's target\ntable).\n> > (referenced and referencing are the same table)\n> > 2) referenced and referencing table are both partition of INSERT's\ntarget table.\n> > (These cases are really rare for me)\n> >\n> > In the two cases, the referenced table could be modified when INSERTing\nand\n> > CCI is necessary, So, I think we should treat these cases as parallel\nrestricted\n> > while doing safety check.\n> >\n> > Attaching V1 patch that Implemented above feature and passed regression\n> > test.\n>\n> Attaching rebased patch based on HEAD.\n>\n\n\nI noticed some things on the first scan through:\n\nPatch 0001:\n1) Tidy up the comments a bit:\n\nSuggest the following update to part of the comments:\n\nIn RI check, we currently call CommandCounterIncrement every time we insert\ninto\na table with foreign key, which is not supported in a parallel worker.\nHowever, it's necessary\nto do CCI only if the referenced table is modified during an INSERT command.\n\nFor now, all the cases that will modify the referenced table are treated as\nparallel unsafe.\n\nWe can skip CCI to let the RI check (for now only RI_FKey_check_ins) to be\nexecuted in a parallel worker.\n\n\nPatch 0002:\n1) The new max_parallel_hazard_context member \"pk_rels\" is not being set\n(to NIL) in the is_parallel_safe() function, so it will have a junk value\nin that case - though it does look like nothing could reference it then\n(but the issue may be detected by a Valgrind build, as performed by the\nbuildfarm).\n\n2) Few things to tidy up the patch comments:\ni)\nCurrently, We can not support parallel insert into fk relation in all cases.\nshould be:\nCurrently, we cannot support parallel insert into a fk relation in 
all\ncases.\n\nii)\nWhen inserting into a table with foreign key, if the referenced could also\nbe modified in\nINSERT command, we will need to do CommandCounterIncrement to let the\nmodification\non referenced table visible for the RI check which is not supported in\nparallel worker.\n\nshould be:\n\nWhen inserting into a table with a foreign key, if the referenced table can\nalso be modified by\nthe INSERT command, we will need to do CommandCounterIncrement to let the\nmodification\non the referenced table be visible for the RI check, which is not supported\nin a parallel worker.\n\niii)\nSo, Extent the parallel-safety check to treat the following cases(could\nmodify referenced table) parallel restricted:\n\nshould be:\n\nSo, extend the parallel-safety check to treat the following cases (could\nmodify referenced table) as parallel restricted:\n\niv)\nHowever, the current parallel safery check already treat it as unsafe, we\ndo not need to\nanything about it.)\n\nshould be:\n\nHowever, the current parallel safety checks already treat it as unsafe, so\nwe do not need to\ndo anything about it.)\n\n3) In target_rel_trigger_max_parallel_hazard(), you have added a variable\ndeclaration \"int trigtype;\" after code, instead of before:\n\n Oid tgfoid = rel->trigdesc->triggers[i].tgfoid;\n+ int trigtype;\n\nshould be:\n\n+ int trigtype;\nOid tgfoid = rel->trigdesc->triggers[i].tgfoid;\n\n(need to avoid intermingled declarations and code)\nSee: https://www.postgresql.org/docs/13/source-conventions.html\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia",
"msg_date": "Thu, 18 Mar 2021 20:48:26 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
},
{
"msg_contents": "Hi,\r\n\r\nThanks for the review.\r\n\r\n> I noticed some things on the first scan through:\r\n> \r\n> Patch 0001:\r\n> 1) Tidy up the comments a bit:\r\n> \r\n> Suggest the following update to part of the comments:\r\n\r\nChanged.\r\n\r\n> Patch 0002:\r\n> 1) The new max_parallel_hazard_context member \"pk_rels\" is not being \r\n> set (to\r\n> NIL) in the is_parallel_safe() function, so it will have a junk value \r\n> in that case - though it does look like nothing could reference it \r\n> then (but the issue may be detected by a Valgrind build, as performed by the buildfarm).\r\n\r\nChanged.\r\n\r\n> 2) Few things to tidy up the patch comments:\r\n\r\nChanged.\r\n\r\n> 3) In target_rel_trigger_max_parallel_hazard(), you have added a \r\n> variable declaration \"int trigtype;\" after code, instead of before:\r\n\r\nChanged.\r\n\r\nAttaching a new version of the patch with these changes.\r\n\r\nAlso attaching the simple performance test results (insert into table with foreign key):\r\npostgres=# explain (analyze, verbose) insert into fk select a,func_xxx() from data where a%2=0 or a%3 = 0;\r\n workers | serial insert + parallel select | parallel insert | performance gain\r\n-----------+---------------------------------+-----------------+--------\r\n2 workers | 85512.153ms | 61384.957ms | 29%\r\n4 workers | 85436.957ms | 39335.797ms | 54%\r\n\r\n-------------data prepare-----------------------------\r\ncreate table pk(a int primary key,b int);\r\ncreate table fk(a int references pk,b int);\r\ncreate table data(a int, b int);\r\ninsert into data select * from generate_series(1,10000000,1) t;\r\ninsert into pk select generate_series(1,10000000,1);\r\n------------------------------------------------\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 22 Mar 2021 05:57:01 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Avoid CommandCounterIncrement in RI trigger when INSERT INTO\n referencing table"
}
]
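The two parallel-restricted scenarios described in the thread above boil down to a simple membership test: the FK check of a parallel INSERT is only a hazard when the referenced table could be modified by the same command. The sketch below restates that classification in Python; `fk_parallel_hazard` and its arguments are invented for illustration and are not part of PostgreSQL's actual planner code.

```python
# Illustrative sketch (not the real parallel-safety check in PostgreSQL)
# of the classification described in the thread: an INSERT into a table
# with a foreign key is parallel restricted when the referenced table
# could be modified by the same INSERT command, i.e. when the FK is
# self-referential or when both referenced and referencing tables are
# partitions of the INSERT's target table.

def fk_parallel_hazard(target, fk_referencing, fk_referenced,
                       partitions_of_target=()):
    """Return 'restricted' if CCI may be needed during the INSERT,
    otherwise 'safe' (with respect to the FK check only)."""
    # Case 1: foreign key and primary key are on the same table
    # (referenced and referencing are both the INSERT's target table).
    if fk_referencing == fk_referenced == target:
        return "restricted"
    # Case 2: referenced and referencing tables are both partitions
    # of the INSERT's target table.
    parts = set(partitions_of_target)
    if fk_referencing in parts and fk_referenced in parts:
        return "restricted"
    # Otherwise the referenced table is not modified by this INSERT,
    # which is what allows skipping the CCI in the RI trigger.
    return "safe"
```

In every other case the referenced table is effectively read-only for the duration of the INSERT, which is the property that lets the patch skip CommandCounterIncrement in RI_FKey_check_ins and run the check in a parallel worker.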
[
{
"msg_contents": "Hi,\n\nWhile reviewing a failed upgrade from Postgres v9.5 (to v9.6) I saw that the\ninstance had ~200 million (in-use) Large Objects. I was able to reproduce\nthis on a test instance which too fails with a similar error.\n\n\npg_restore: executing BLOB 4980622\npg_restore: WARNING: database with OID 0 must be vacuumed within 1000001\ntransactions\nHINT: To avoid a database shutdown, execute a database-wide VACUUM in that\ndatabase.\nYou might also need to commit or roll back old prepared transactions.\npg_restore: executing BLOB 4980623\npg_restore: [archiver (db)] Error while PROCESSING TOC:\npg_restore: [archiver (db)] Error from TOC entry 2565; 2613 4980623 BLOB\n4980623 postgres\npg_restore: [archiver (db)] could not execute query: ERROR: database is not\naccepting commands to avoid wraparound data loss in database with OID 0\nHINT: Stop the postmaster and vacuum that database in single-user mode.\nYou might also need to commit or roll back old prepared transactions.\nCommand was: SELECT pg_catalog.lo_create('4980623');\n\n\n\nTo rule out the obvious possibilities: these Large Objects are still in use\n(so vacuumlo wouldn't help); giving more system resources doesn't help;\nmoving Large Objects around to another database doesn't help (since this is\na cluster-wide restriction); the source instance is nowhere close to\nwraparound; and lastly, the most recent minor versions don't help either (I\ntried compiling 9_6_STABLE + upgrading a database with 150 million LOs and\nstill encountered the same issue).\n\nDo let me know if I am missing something obvious, but it appears that this is\nhappening owing to 2 things coming together:\n\n* Each Large Object is migrated in its own transaction during pg_upgrade\n* pg_resetxlog appears to be narrowing the window (available for pg_upgrade)\nto ~146 Million XIDs (2^31 - 1 million XID wraparound margin - 2 billion\nwhich is a hard-coded constant - see [1] - in what appears to be an attempt\nto force an Autovacuum 
Wraparound session soon after upgrade completes).\n\nSuch an XID-based restriction is limiting for an instance that's actively\nusing a lot of Large Objects. Beyond forcing the AutoVacuum Wraparound logic\nto kick in soon after the upgrade, I am unclear what else it aims to do. What\nit does seem to do is block Major Version upgrades if the pre-upgrade\ninstance has >146 Million Large Objects (half that, if each LO additionally\nrequires ALTER LARGE OBJECT OWNER TO during pg_restore).\n\nFor the long term, these ideas came to mind, although I am unsure which are\nlow-hanging fruit and which are outright impossible - e.g. clubbing multiple\nobjects into a transaction [2], forcing AutoVacuum post upgrade (and thus\nremoving this limitation altogether), or seeing if \"pg_resetxlog -x\" (from\nwithin pg_upgrade) could help in some way to work around this limitation.\n\nIs there a short-term recommendation for this scenario?\n\nI can understand that a high number of small-sized objects is not a great way\nto use pg_largeobject (since Large Objects were intended to be for, well,\n'large objects'), but this magic number of Large Objects is now a stalemate\nat this point (with respect to v9.5 EOL).\n\n\nReference:\n1) pg_resetxlog -\nhttps://github.com/postgres/postgres/blob/ca3b37487be333a1d241dab1bbdd17a211\na88f43/src/bin/pg_resetwal/pg_resetwal.c#L444\n2)\nhttps://www.postgresql.org/message-id/ed7d86a1-b907-4f53-9f6e-63482d2f2bac%4\n0manitou-mail.org\n\n-\nThanks\nRobins Tharakan",
"msg_date": "Wed, 3 Mar 2021 11:36:26 +0000",
"msg_from": "\"Tharakan, Robins\" <tharar@amazon.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Mar 03, 2021 at 11:36:26AM +0000, Tharakan, Robins wrote:\n> While reviewing a failed upgrade from Postgres v9.5 (to v9.6) I saw that the\n> instance had ~200 million (in-use) Large Objects. I was able to reproduce\n> this on a test instance which too fails with a similar error.\n\nIf pg_upgrade can't handle millions of objects/transactions/XIDs, that seems\nlike a legitimate complaint, since apparently the system is working okay\notherwise.\n\nBut it also seems like you're using it outside the range of its intended use\n(See also [1]). I'm guessing that not many people are going to spend time\nrunning tests of pg_upgrade, each of which takes 25hr, not to mention some\nmultiple of 128GB RAM+swap.\n\nCreating millions of large objects was too slow for me to test like this:\n| time { echo 'begin;'; for a in `seq 1 99999`; do echo '\\lo_import /dev/null'; done; echo 'commit;'; } |psql -qh /tmp postgres&\n\nThis seems to be enough for what's needed:\n| ALTER SYSTEM SET fsync=no; ALTER SYSTEM SET full_page_writes=no; SELECT pg_reload_conf();\n| INSERT INTO pg_largeobject_metadata SELECT a, 0 FROM generate_series(100000, 200111222)a;\n\nNow, the test pg_upgrade was killed after running 100min and using 60GB\nRAM, so you might say that's a problem too. I converted getBlobs() to use a\ncursor, like dumpBlobs(), but it was still killed. I think a test case and a\nway to exercise this failure with a more reasonable amount of time and\nresources might be a prerequisite for a patch to fix it.\n\npg_upgrade is meant for \"immediate\" upgrades, frequently allowing upgrade in\nminutes, where pg_dump |pg_restore might take hours or days. There are two\ncomponents to consider: the catalog/metadata part, and the data part. 
If the\ndata is large (let's say more than 100GB), then pg_upgrade is expected to be an\nimprovement over the \"dump and restore\" process, which is usually infeasible\nfor large DBs measured in TB.\n\nBut the *catalog* part is large, and pg_upgrade still has to run pg_dump and\npg_restore. The time to do this might dominate over the data part. Our own\ncustomers' DBs are 100s of GB to 10TB. For large customers, pg_upgrade takes\n45min. In the past, we had tables with many column defaults, which caused the\ndump+restore to be slow for a larger fraction of customers.\n\nIf it were me, in an EOL situation, I would look at either: 1) find a way to do\ndump+restore rather than pg_upgrade; and/or, 2) separately pg_dump the large\nobjects, drop as many as you can, then pg_upgrade the DB, then restore the\nlarge objects. (And find a better way to store them in the future.)\n\nI was able to hack pg_upgrade to call pg_restore --single (with a separate\ninvocation to handle --create). That passes tests...but I can't say much\nbeyond that.\n\nRegarding your existing patch: \"make check\" only tests SQL features.\nFor development, you'll want to configure like:\n|./configure --enable-debug --enable-cassert --enable-tap-tests\nAnd then use \"make check-world\", and in particular:\ntime make check -C src/bin/pg_resetwal\ntime make check -C src/bin/pg_upgrade\n\nI don't think pg_restore needs a user-facing option for XIDs. I think it\nshould \"just work\", since a user might be as likely to shoot themselves in the\nfoot with a commandline option as they are to make an upgrade succeed that\nwould otherwise fail. pg_upgrade has a --check mode, and if that passes, the\nupgrade is intended to work, and not fail halfway through between the schema\ndump and restore with the expectation that the user knows to rerun with some\ncommandline flags. 
If you pursue the patch with setting a different XID\nthreshold, maybe you could count the number of objects to be created, or\ntransactions to be used, and use that as the argument to resetxlog ? I'm not\nsure, but pg_restore -l might be a good place to start looking.\n\nI think a goal for this patch should be to allow an increased number of\nobjects to be handled by pg_upgrade. Large objects may be a special case, and\nincreasing the number of other objects to be restored to the 100s of millions\nmight be unimportant.\n\n-- \nJustin\n\n[1] https://www.postgresql.org/message-id/502641.1606334432%40sss.pgh.pa.us\n| Does pg_dump really have sane performance for that situation, or\n| are we soon going to be fielding requests to make it not be O(N^2)\n| in the number of listed tables?",
"msg_date": "Tue, 9 Mar 2021 14:08:19 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
}
]
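The "~146 Million XIDs" figure quoted in the thread above can be checked with back-of-envelope arithmetic. The sketch below simply redoes that calculation in Python; the constants come from the messages (the 1-million wraparound margin and the hard-coded 2-billion offset in pg_resetwal.c), not from reading PostgreSQL source directly.

```python
# Back-of-envelope check of the XID window described in the thread:
# pg_resetxlog is said to leave roughly
#   2^31 - 1,000,000 (wraparound safety margin) - 2,000,000,000 (hard-coded)
# transaction IDs available to pg_upgrade's restore phase.

XID_SPACE = 2 ** 31            # total 32-bit XID space wraps at 2^31 apart
WRAPAROUND_MARGIN = 1_000_000  # safety margin before forced shutdown
HARDCODED_OFFSET = 2_000_000_000

xid_budget = XID_SPACE - WRAPAROUND_MARGIN - HARDCODED_OFFSET
print(xid_budget)  # 146483648 -- the "~146 Million XIDs" from the thread

# Each large object is restored in its own transaction, so an instance
# with ~200 million LOs blows through the budget before finishing:
large_objects = 200_000_000
print(large_objects > xid_budget)  # True

# If each LO also needs an ALTER LARGE OBJECT ... OWNER TO during
# pg_restore, the effective budget halves:
print(xid_budget // 2)  # 73241824
```

This matches the observed behavior: the restore hits the 1,000,001-transaction vacuum warning and then the wraparound shutdown partway through executing the per-BLOB `lo_create()` calls.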
[
{
"msg_contents": "New thread, was \"Re: new heapcheck contrib module\"\n\n\n> On Mar 2, 2021, at 10:24 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Mar 2, 2021 at 12:10 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> On further reflection, I decided to implement these changes and not worry about the behavioral change.\n> \n> Thanks.\n> \n>> I skipped this part. The initcmd argument is only handed to ParallelSlotsGetIdle(). Doing as you suggest would not really be simpler, it would just move that argument to ParallelSlotsSetup(). But I don't feel strongly about it, so I can move this, too, if you like.\n>> \n>> I didn't do this either, and for the same reason. It's just a parameter to ParallelSlotsGetIdle(), so nothing is really gained by moving it to ParallelSlotsSetup().\n> \n> OK. I thought it was more natural to pass a bunch of arguments at\n> setup time rather than passing a bunch of arguments at get-idle time,\n> but I don't feel strongly enough about it to insist, and somebody else\n> can always change it later if they decide I had the right idea.\n\nWhen you originally proposed the idea, I thought that it would work out as a simpler interface to have it your way, but in terms of the interface it came out about the same. Internally it is still simpler to do it your way, so since you seem to still like your way better, this next version has it that way.\n\n>> Rather than the slots user tweak the slot's ConnParams, ParallelSlotsGetIdle() takes a dbname argument, and uses it as ConnParams->override_dbname.\n> \n> OK, but you forgot to update the comments. ParallelSlotsGetIdle()\n> still talks about a cparams argument that it no longer has.\n\nFixed.\n\n> The usual idiom for sizing a memory allocation involving\n> FLEXIBLE_ARRAY_MEMBER is something like offsetof(ParallelSlotArray,\n> slots) + numslots * sizeof(ParallelSlot). 
Your version uses sizeof();\n> don't.\n\nFixed.\n\n> Other than that 0001 looks to me to be in pretty good shape now.\n\n\n\n\nAnd your other review email, also moved to this new thread....\n\n> On Mar 2, 2021, at 12:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Mar 2, 2021 at 1:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Other than that 0001 looks to me to be in pretty good shape now.\n> \n> Incidentally, we might want to move this to a new thread with a better\n> subject line, since the current subject line really doesn't describe\n> the uncommitted portion of the work. And create a new CF entry, too.\n\nMoved here.\n\n> Moving onto 0002:\n> \n> The index checking options should really be called btree index\n> checking options. I think I'd put the table options first, and the\n> btree options second. Other kinds of indexes could follow some day. I\n> would personally omit the short forms of --heapallindexed and\n> --parent-check; I think we'll run out of option names too quickly if\n> people add more kinds of checks.\n\nDone.\n\nWhile doing this, I also renamed some of the variables to more closely match the option name. I think the code is clearer now.\n\n> \n> Perhaps VerifyBtreeSlotHandler should emit a warning of some kind if\n> PQntuples(res) != 0.\n\nThe functions bt_index_check and bt_index_parent_check are defined to return VOID, which results in PQntuples(res) == 1. I added code to verify this condition, but it only serves to alert the user if the amcheck version is behaving in an unexpected way, perhaps due to a amcheck/pg_amcheck version mismatch.\n\n> + /*\n> + * Test that this function works, but for now we're\n> not using the list\n> + * 'relations' that it builds.\n> + */\n> + conn = connectDatabase(&cparams, progname, opts.echo,\n> false, true);\n> \n> This comment appears to have nothing to do with the code, since\n> connectDatabase() does not build a list of 'relations'.\n\nTrue. 
Removed.\n\n> amcheck_sql seems to include paranoia, but do we need that if we're\n> using a secure search path? Similarly for other SQL queries, e.g. in\n> prepare_table_command.\n\nI removed the OPERATOR(pg_catalog.=) paranoia.\n\n> It might not be strictly necessary for the static functions in\n> pg_amcheck.c to use_three completelyDifferent NamingConventions for\n> its static functions.\n\nThe idea is that the functions that interoperate with parallel slots would follow its NamingConvention; those interoperating with patternToSQLRegex and PQExpBuffers would follow their namingConvention; and those not so interoperating would follow a less obnoxious naming_convention. To my eye, that color codes the function names in a useful way. To your eye, it just looks awful. I've changed it to use just one naming_convention.\n\n> should_processing_continue() is one semicolon over budget.\n\nThat's not the first time I've done that recently. Removed.\n\n> The initializer for opts puts a comma even after the last member\n> initializer. Is that going to be portable to all compilers?\n\nI don't know. I learned to put commas at the end of lists back when I did mostly perl programming, as you get cleaner diffs when you add more stuff to the list later. Whether I can get away with that in C using initializers I don't know. I don't have a multiplicity of compilers to check.\n\nI have removed the extra comma.\n\n> + for (failed = false, cell = opts.include.head; cell; cell = cell->next)\n> \n> I think failed has to be false here, because it gets initialized at\n> the top of the function. If we need to reinitialize it for some\n> reason, I would prefer you do that on the previous line, separate from\n> the for loop stuff.\n\nIt does have to be false there. 
There is no need to reinitialize it.\n\n> + char *dbrgx; /* Database regexp parsed from pattern, or\n> + * NULL */\n> + char *nsprgx; /* Schema regexp parsed from pattern, or NULL */\n> + char *relrgx; /* Relation regexp parsed from pattern, or\n> + * NULL */\n> + bool tblonly; /* true if relrgx should only match tables */\n> + bool idxonly; /* true if relrgx should only match indexes */\n> \n> Maybe: db_regex, nsp_regex, rel_regex, table_only, index_only?\n> \n> Just because it seems theoretically possible that someone will see\n> nsprgx and not immediately understand what it's supposed to mean, even\n> if they know that nsp is a common abbreviation for namespace in\n> PostgreSQL code, and even if they also know what a regular expression\n> is.\n\nChanged. Along the way, I noticed that \"tbl\" and \"idx\" were being used in C/SQL both to mean (\"table_only\", \"index_only\") in some contexts and (\"is_table\", \"is_index') in others, so I replaced all instances of \"tbl\" and \"idx\" with the unambiguous labels.\n\n> Your four messages about there being nothing to check seem like they\n> could be consolidated down to one: \"nothing to check for pattern\n> \\\"%s\\\"\".\n\nI anticipated your review comment, but I'm worried about the case that somebody runs\n\n pg_amcheck -t \"foo\" -i \"foo\"\n\nand one of those matches and the other does not. The message 'nothing to check for pattern \"foo\"' will be wrong (because there was something to check for it) and unhelpful (because it doesn't say which failed to match.)\n\n> I would favor changing things so that once argument parsing is\n> complete, we switch to reporting all errors that way. 
So in other\n> words here, and everything that follows:\n> \n> + fprintf(stderr, \"%s: no databases to check\\n\", progname);\n\nSame concern about the output for\n\n pg_amcheck -t \"foo\" -i \"foo\" -d \"foo\"\n\nYou might think I'm being silly here, as database names, table names, and index names should in normal usage not be hard for the user to distinguish. But consider\n\n pg_amcheck \"mydb.myschema.mytable\"\n\nIf it says, 'nothing to check for pattern \"mydb.myschema.mytable\"', you don't know if the database doesn't exist or if the table doesn't exist.\n\n> \n> + * ParallelSlots based event loop follows.\n> \n> \"Main event loop.\"\n\nChanged.\n\n> To me it would read slightly better to change each reference to\n> \"relations list\" to \"list of relations\", but perhaps that is too\n> nitpicky.\n\nNo harm picking those nits. Changed.\n\n> I think the two instances of goto finish could be avoided with not\n> much work. At most a few things need to happen only if !failed, and\n> maybe not even that, if you just said \"break;\" instead.\n\nGood point. Changed.\n\n> + * Note: Heap relation corruption is returned by verify_heapam() without the\n> + * use of raising errors, but running verify_heapam() on a corrupted table may\n> \n> How about \"Heap relation corruption() is reported by verify_heapam()\n> via the result set, rather than an ERROR, ...\"\n\nChanged, though I assumed your parens for corruption() were not intended.\n\n\n\nOk, so now you've moved on to reviewing the regression tests....\n\n> It seems mighty inefficient to have a whole bunch of consecutive calls\n> to remove_relation_file() or corrupt_first_page() when every such call\n> stops and restarts the database. I would guess these tests will run\n> noticeably faster if you don't do that. 
Either the functions need to\n> take a list of arguments, or the stop/start needs to be pulled up and\n> done in the caller.\n\nChanged.\n\n> corrupt_first_page() could use a comment explaining what exactly we're\n> overwriting, and in particular noting that we don't want to just\n> clobber the LSN, but rather something where we can detect a wrong\n> value.\n\nAdded comments that we're skipping past the PageHeader and overwriting garbage starting in the line pointers.\n\n> There's a long list of calls to command_checks_all() in 003_check.pl\n> that don't actually check anything but that the command failed, but\n> run it with a bunch of different options. I don't understand the value\n> of that, and suggest reducing the number of cases tested. If you want,\n> you can have tests elsewhere that focus -- perhaps by using verbose\n> mode -- on checking that the right tables are being checked.\n\nThis should be better in this next patch series.\n\n> \n> This is not yet a full review of everything in this patch -- I haven't\n> sorted through all of the tests yet, or all of the new query\n> construction logic -- but to me this looks pretty close to\n> committable.\n\nThanks for the review!\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 3 Mar 2021 07:22:28 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 3, 2021 at 10:22 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > Your four messages about there being nothing to check seem like they\n> > could be consolidated down to one: \"nothing to check for pattern\n> > \\\"%s\\\"\".\n>\n> I anticipated your review comment, but I'm worried about the case that somebody runs\n>\n> pg_amcheck -t \"foo\" -i \"foo\"\n>\n> and one of those matches and the other does not. The message 'nothing to check for pattern \"foo\"' will be wrong (because there was something to check for it) and unhelpful (because it doesn't say which failed to match.)\n\nFair point.\n\n> Changed, though I assumed your parens for corruption() were not intended.\n\nUh, yeah.\n\n> Thanks for the review!\n\n+ fprintf(stderr, \"%s: no relations to check\", progname);\n\nMissing newline.\n\nGenerally, I would favor using pg_log_whatever as a way of reporting\nmessages starting when option parsing is complete. In other words,\nstarting here:\n\n+ fprintf(stderr, \"%s: no databases to check\\n\", progname);\n\nI see no real advantage in having a bunch of these using\nfprintf(stderr, ...), which to me seems most appropriate only for very\nearly failures.\n\nPerhaps amcheck_sql could be spread across fewer lines, now that it\ndoesn't have so many decorations?\n\npg_basebackup uses -P as a short form for --progress, so maybe we\nshould match that here.\n\nWhen I do \"pg_amcheck --progress\", it just says \"259/259 (100%)\" which\nI don't find too clear. The corresponding pg_basebackup output is\n\"32332/32332 kB (100%), 1/1 tablespace\" which has the advantage of\nincluding units. I think if you just add the word \"relations\" to your\nmessage it will be nicer.\n\nWhen I do \"pg_amcheck -s public\" it tells me that there are no\nrelations to check in schemas for \"public\". 
I think \"schemas matching\"\nwould read better than \"schemas for.\" Similar with the other messages.\nWhen I try \"pg_amcheck -t nessie\" it tells me that there are no tables\nto check for \"nessie\" but saying that there are no tables to check\nmatching \"nessie\" to me sounds more natural.\n\nThe code doesn't seem real clear on the difference between a database\nname and a pattern. Consider:\n\ncreatedb rhaas\ncreatedb 'rh*s'\nPGDATABASE='rh*s' pg_amcheck\n\nIt checks the rhaas database, which I venture to say is just plain wrong.\n\nThe error message when I exclude the only checkable database is not\nvery clear. \"pg_amcheck -D rhaas\" says pg_amcheck: no checkable\ndatabase: \"rhaas\". Well, I get that there's no checkable database. But\nas a user I have no idea what \"rhaas\" is. I can even get it to issue\nthis complaint more than once:\n\ncreatedb q\ncreatedb qq\npg_amcheck -D 'q*' q qq\n\nNow it issues the \"no checkable database\" complaint twice, once for q\nand once for qq. But if there's no checkable database, I only need to\nknow that once. Either the message is wrongly-worded, or it should\nonly be issued once and doesn't need to include the pattern. I think\nit's the second one, but I could be wrong.\n\nUsing a pattern as the only or first argument doesn't work; i.e.\n\"pg_amcheck rhaas\" works but \"pg_amcheck rhaa?\" fails because there is\nno database with that exact literal name. This seems like another\ninstance of confusion between a literal database name and a database\nname pattern. I'm not quite sure what the right solution is here. We\ncould give up on having database patterns altogether -- the comparable\nissue does not arise for database and schema name patterns -- or the\nmaintenance database could default to something that's not going to be\na pattern, like \"postgres,\" rather than being taken from a\ncommand-line argument that is intended to be a pattern. Or some hybrid\napproach e.g. 
-d options are patterns, but don't set the maintenance\ndatabase, while extra command line arguments are literal database\nnames, and thus are presumably OK to use as the maintenance DB. But\nit's too weird IMHO to support patterns here and then have supplying\none inevitably fail unless you also specify --maintenance-db.\n\nIt's sorta annoying that there doesn't seem to be an easy way to find\nout exactly what relations got checked as a result of whatever I did.\nPerhaps pg_amcheck -v should print a line for each relation saying\nthat it's checking that relation; it's not actually that verbose as\nthings stand. If we thought that was overdoing it, we could set things\nup so that multiple -v options keep increasing the verbosity level, so\nthat you can get this via pg_amcheck -vv. I submit that pg_amcheck -e\nis not useful for this purpose because the queries, besides being\nlong, use the relation OIDs rather than the names, so it's not easy to\nsee what happened.\n\nI think that something's not working in terms of schema exclusion. If\nI create a brand-new database and then run \"pg_amcheck -S pg_catalog\n-S information_schema -S pg_toast\" it still checks stuff. In fact it\nseems to check the exact same amount of stuff that it checks if I run\nit with no command-line options at all. In fact, if I run \"pg_amcheck\n-S '*'\" that still checks everything. Unless I'm misunderstanding what\nthis option is supposed to do, the fact that a version of this patch\nwhere this seemingly doesn't work at all escaped to the list suggests\nthat your testing has got some gaps.\n\nI like the semantics of --no-toast-expansion and --no-index-expansion\nas you now have them, but I find I don't really like the names. Could\nI suggest --no-dependent-indexes and --no-dependent-toast?\n\nI tried pg_amcheck --startblock=tsgsdg and got an error message\nwithout a trailing newline. I tried --startblock=-525523 and got no\nerror. 
I tried --startblock=99999999999999999999999999 and got a\ncomplaint that the value was out of bounds, but without a trailing\nnewline. Maybe there's an argument that the bounds don't need to be\nchecked, but surely there's no argument for checking one and not the\nother. I haven't tried the corresponding cases with --endblock but you\nshould. I tried --startblock=2 --endblock=1 and got a complaint that\nthe ending block precedes the starting block, which is totally\nreasonable (though I might say \"start block\" and \"end block\" rather\nthan using the -ing forms) but this message is prefixed with\n\"pg_amcheck: \" whereas the messages about an altogether invalid\nstarting block were not so prefixed. Is there a reason not to make\nthis consistent?\n\nI also tried using a random positive integer for startblock, and for\nevery relation I am told \"ERROR: starting block number must be\nbetween 0 and <whatever>\". That makes sense, because I used a big\nnumber for the start block and I don't have any big relations, but it\nmakes for an absolute ton of output, because every verify_heapam query\nis 11 lines long. This suggests a couple of possible improvements.\nFirst, maybe we should only display the query that produced the error\nin verbose mode. Second, maybe the verify_heapam() query should be\ntightened up so that it doesn't stretch across quite so many lines. I\nthink the call to verify_heapam() could be spread across like 2 lines\nrather than 7, which would improve readability. On a related note, I\nwonder why we need every verify_heapam() call to join to pg_class and\npg_namespace just to fetch the schema and table name which,\npresumably, we should or at least could already have. This kinda\nrelates to my comment earlier about making -v print a message per\nrelation so that we can see, in human-readable format, which relations\nare getting checked. Right now, if you got an error checking just one\nrelation, how would you know which relation you got it from? 
Unless\nthe server happens to report that information in the message, you're\njust in the dark, because pg_amcheck won't tell you.\n\nThe line \"Read the description of the amcheck contrib module for\ndetails\" seems like it could be omitted. Perhaps the first line of the\nhelp message could be changed to read \"pg_amcheck uses amcheck to find\ncorruption in a PostgreSQL database.\" or something like that, instead.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 3 Mar 2021 12:15:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 3, 2021, at 9:15 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Mar 3, 2021 at 10:22 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>> Your four messages about there being nothing to check seem like they\n>>> could be consolidated down to one: \"nothing to check for pattern\n>>> \\\"%s\\\"\".\n>> \n>> I anticipated your review comment, but I'm worried about the case that somebody runs\n>> \n>> pg_amcheck -t \"foo\" -i \"foo\"\n>> \n>> and one of those matches and the other does not. The message 'nothing to check for pattern \"foo\"' will be wrong (because there was something to check for it) and unhelpful (because it doesn't say which failed to match.)\n> \n> Fair point.\n> \n>> Changed, though I assumed your parens for corruption() were not intended.\n> \n> Uh, yeah.\n> \n>> Thanks for the review!\n> \n> + fprintf(stderr, \"%s: no relations to check\", progname);\n> \n> Missing newline.\n> \n> Generally, I would favor using pg_log_whatever as a way of reporting\n> messages starting when option parsing is complete. In other words,\n> starting here:\n> \n> + fprintf(stderr, \"%s: no databases to check\\n\", progname);\n> \n> I see no real advantage in having a bunch of these using\n> fprintf(stderr, ...), which to me seems most appropriate only for very\n> early failures.\n\nOk, the newline issues should be fixed, and the use of pg_log_{error,warning,info} is now used more consistently.\n\n> Perhaps amcheck_sql could be spread across fewer lines, now that it\n> doesn't have so many decorations?\n\nDone.\n\n> pg_basebackup uses -P as a short form for --progress, so maybe we\n> should match that here.\n\nDone.\n\n> When I do \"pg_amcheck --progress\", it just says \"259/259 (100%)\" which\n> I don't find too clear. The corresponding pg_basebackup output is\n> \"32332/32332 kB (100%), 1/1 tablespace\" which has the advantage of\n> including units. 
I think if you just add the word \"relations\" to your\n> message it will be nicer.\n\nDone. It now shows:\n\n% pg_amcheck -P\n259/259 relations (100%) 870/870 pages (100%)\n\nAs you go along, the percent of relations processed may not be equal to the percent of pages, though at the end they are both 100%. The value of printing both can only be seen while things are underway.\n\n> When I do \"pg_amcheck -s public\" it tells me that there are no\n> relations to check in schemas for \"public\". I think \"schemas matching\"\n> would read better than \"schemas for.\" Similar with the other messages.\n> When I try \"pg_amcheck -t nessie\" it tells me that there are no tables\n> to check for \"nessie\" but saying that there are no tables to check\n> matching \"nessie\" to me sounds more natural.\n\nDone.\n\n% pg_amcheck -s public\npg_amcheck: error: no relations to check in schemas matching \"public\"\n\n> The code doesn't seem real clear on the difference between a database\n> name and a pattern. Consider:\n> \n> createdb rhaas\n> createdb 'rh*s'\n> PGDATABASE='rh*s' pg_amcheck\n> \n> It checks the rhaas database, which I venture to say is just plain wrong.\n\nThis next version treats any arguments supplied with -d and -D as database patterns, and all others as database names. Exclusion patterns (-D) only override inclusion patterns, not names. \n\n> The error message when I exclude the only checkable database is not\n> very clear. \"pg_amcheck -D rhaas\" says pg_amcheck: no checkable\n> database: \"rhaas\". Well, I get that there's no checkable database. But\n> as a user I have no idea what \"rhaas\" is. I can even get it to issue\n> this complaint more than once:\n> \n> createdb q\n> createdb qq\n> pg_amcheck -D 'q*' q qq\n> \n> Now it issues the \"no checkable database\" complaint twice, once for q\n> and once for qq. But if there's no checkable database, I only need to\n> know that once. 
Either the message is wrongly-worded, or it should\n> only be issued once and doesn't need to include the pattern. I think\n> it's the second one, but I could be wrong.\n\nI think this whole problem goes away with the change to how -D/-d work and don't interact with database names. At least, I don't get any problems like the one you mention:\n\n% PGDATABASE=postgres pg_amcheck -D postgres\npg_amcheck: warning: skipping database \"postgres\": amcheck is not installed\npg_amcheck: error: no relations to check\n\n% PGDATABASE=mark.dilger pg_amcheck -D mark.dilger --progress\n259/259 relations (100%) 870/870 pages (100%)\n\n> Using a pattern as the only or first argument doesn't work; i.e.\n> \"pg_amcheck rhaas\" works but \"pg_amcheck rhaa?\" fails because there is\n> no database with that exact literal name. This seems like another\n> instance of confusion between a literal database name and a database\n> name pattern. I'm not quite sure what the right solution is here. We\n> could give up on having database patterns altogether -- the comparable\n> issue does not arise for table and schema name patterns -- or the\n> maintenance database could default to something that's not going to be\n> a pattern, like \"postgres,\" rather than being taken from a\n> command-line argument that is intended to be a pattern. Or some hybrid\n> approach e.g. -d options are patterns, but don't set the maintenance\n> database, while extra command line arguments are literal database\n> names, and thus are presumably OK to use as the maintenance DB. But\n> it's too weird IMHO to support patterns here and then have supplying\n> one inevitably fail unless you also specify --maintenance-db.\n\nRight. 
I think the changes in this next version address all your concerns as stated, but here are some examples:\n\n% pg_amcheck \"mark.d*\" --progress \npg_amcheck: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"mark.d*\" does not exist\n\n% PGDATABASE=postgres pg_amcheck \"mark.d*\" --progress\npg_amcheck: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"mark.d*\" does not exist\n\n% PGDATABASE=postgres pg_amcheck -d \"mark.d*\" --progress\n520/520 relations (100%) 1815/1815 pages (100%)\n\n% pg_amcheck --all --maintenance-db=\"mark.d*\" --progress\npg_amcheck: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"mark.d*\" does not exist\n\n% pg_amcheck --all -D=\"mark.d*\" --progress\npg_amcheck: warning: skipping database \"template1\": amcheck is not installed\n520/520 relations (100%) 1815/1815 pages (100%)\n\n> It's sorta annoying that there doesn't seem to be an easy way to find\n> out exactly what relations got checked as a result of whatever I did.\n> Perhaps pg_amcheck -v should print a line for each relation saying\n> that it's checking that relation; it's not actually that verbose as\n> things stand. If we thought that was overdoing it, we could set things\n> up so that multiple -v options keep increasing the verbosity level, so\n> that you can get this via pg_amcheck -vv. 
I submit that pg_amcheck -e\n> is not useful for this purpose because the queries, besides being\n> long, use the relation OIDs rather than the names, so it's not easy to\n> see what happened.\n\nI added that, as shown here:\n\n% pg_amcheck mark.dilger --table=pg_subscription --table=pg_publication -v \npg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in schema \"public\"\npg_amcheck: checking btree index \"mark.dilger\".\"pg_toast\".\"pg_toast_6100_index\" (oid 4184) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_publication_oid_index\" (oid 6110) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_publication_pubname_index\" (oid 6111) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_subscription_oid_index\" (oid 6114) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_subscription_subname_index\" (oid 6115) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"pg_toast\".\"pg_toast_6100\" (oid 4183) (0/0 pages)\npg_amcheck: checking table \"mark.dilger\".\"pg_catalog\".\"pg_subscription\" (oid 6100) (0/0 pages)\npg_amcheck: checking table \"mark.dilger\".\"pg_catalog\".\"pg_publication\" (oid 6104) (0/0 pages)\n\n> I think that something's not working in terms of schema exclusion. If\n> I create a brand-new database and then run \"pg_amcheck -S pg_catalog\n> -S information_schema -S pg_toast\" it still checks stuff. In fact it\n> seems to check the exact same amount of stuff that it checks if I run\n> it with no command-line options at all. In fact, if I run \"pg_amcheck\n> -S '*'\" that still checks everything. Unless I'm misunderstanding what\n> this option is supposed to do, the fact that a version of this patch\n> where this seemingly doesn't work at all escaped to the list suggests\n> that your testing has got some gaps.\n\nGood catch. 
That works now, but beware that -S doesn't apply to excluding things brought in by toast or index expansion, so:\n\n% pg_amcheck mark.dilger -S pg_catalog -S pg_toast --progress -v\npg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in schema \"public\"\n 0/14 relations (0%) 0/90 pages (0%) \npg_amcheck: checking table \"mark.dilger\".\"public\".\"foo\" (oid 16385) (45/45 pages)\npg_amcheck: checking btree index \"mark.dilger\".\"public\".\"foo_idx\" (oid 16388) (30/30 pages)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_features\" (oid 13051) (8/8 pages)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_toast\".\"pg_toast_13051_index\" (oid 13055) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_implementation_info\" (oid 13056) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_toast\".\"pg_toast_13056_index\" (oid 13060) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_parts\" (oid 13061) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_toast\".\"pg_toast_13061_index\" (oid 13065) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_sizing\" (oid 13066) (1/1 page)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_toast\".\"pg_toast_13066_index\" (oid 13070) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"pg_toast\".\"pg_toast_13051\" (oid 13054) (0/0 pages)\npg_amcheck: checking table \"mark.dilger\".\"pg_toast\".\"pg_toast_13056\" (oid 13059) (0/0 pages)\npg_amcheck: checking table \"mark.dilger\".\"pg_toast\".\"pg_toast_13061\" (oid 13064) (0/0 pages)\npg_amcheck: checking table \"mark.dilger\".\"pg_toast\".\"pg_toast_13066\" (oid 13069) (0/0 pages)\n14/14 relations (100%) 90/90 pages (100%)\n\nbut\n\n% pg_amcheck mark.dilger -S pg_catalog -S pg_toast -S information_schema --progress -v\npg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in 
schema \"public\"\n0/2 relations (0%) 0/75 pages (0%) \npg_amcheck: checking table \"mark.dilger\".\"public\".\"foo\" (oid 16385) (45/45 pages)\npg_amcheck: checking btree index \"mark.dilger\".\"public\".\"foo_idx\" (oid 16388) (30/30 pages)\n2/2 relations (100%) 75/75 pages (100%)\n\nThe first one checks so much because the toast and indexes for tables in the \"information_schema\" are not excluded by -S, but:\n\n% pg_amcheck mark.dilger -S pg_catalog -S pg_toast --progress --no-dependent-indexes --no-dependent-toast -v\npg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in schema \"public\"\n0/5 relations (0%) 0/56 pages (0%) \npg_amcheck: checking table \"mark.dilger\".\"public\".\"foo\" (oid 16385) (45/45 pages)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_features\" (oid 13051) (8/8 pages)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_implementation_info\" (oid 13056) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_parts\" (oid 13061) (1/1 page)\npg_amcheck: checking table \"mark.dilger\".\"information_schema\".\"sql_sizing\" (oid 13066) (1/1 page)\n5/5 relations (100%) 56/56 pages (100%)\n\nworks as you might expect.\n\n> I like the semantics of --no-toast-expansion and --no-index-expansion\n> as you now have them, but I find I don't really like the names. 
Could\n> I suggest --no-dependent-indexes and --no-dependent-toast?\n\nChanged.\n\n> I tried pg_amcheck --startblock=tsgsdg and got an error message\n> without a trailing newline.\n\nFixed.\n\n> I tried --startblock=-525523 and got no\n> error.\n\nFixed.\n\n> I tried --startblock=99999999999999999999999999 and got a\n> complaint that the value was out of bounds, but without a trailing\n> newline.\n\nFixed.\n\n> Maybe there's an argument that the bounds don't need to be\n> checked, but surely there's no argument for checking one and not the\n> other.\n\nIt checks both now, and also for --endblock.\n\n> I haven't tried the corresponding cases with --endblock but you\n> should. I tried --startblock=2 --endblock=1 and got a complaint that\n> the ending block precedes the starting block, which is totally\n> reasonable (though I might say \"start block\" and \"end block\" rather\n> than using the -ing forms)\n\nI think this is fixed up now. There is an interaction with amcheck's verify_heapam(), where that function raises an error if the startblock or endblock arguments are out of bounds for the relation in question. 
Rather than aborting the entire pg_amcheck run, it avoids passing inappropriate block ranges to verify_heapam() and outputs a warning, so:\n\n% pg_amcheck mark.dilger -t foo -t pg_class --progress -v --startblock=35 --endblock=77\npg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in schema \"public\"\n0/6 relations (0%) 0/55 pages (0%) \npg_amcheck: checking table \"mark.dilger\".\"public\".\"foo\" (oid 16385) (10/45 pages)\npg_amcheck: warning: ignoring endblock option 77 beyond end of table \"mark.dilger\".\"public\".\"foo\"\npg_amcheck: checking btree index \"mark.dilger\".\"public\".\"foo_idx\" (oid 16388) (30/30 pages)\npg_amcheck: checking table \"mark.dilger\".\"pg_catalog\".\"pg_class\" (oid 1259) (0/13 pages)\npg_amcheck: warning: ignoring startblock option 35 beyond end of table \"mark.dilger\".\"pg_catalog\".\"pg_class\"\npg_amcheck: warning: ignoring endblock option 77 beyond end of table \"mark.dilger\".\"pg_catalog\".\"pg_class\"\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_relname_nsp_index\" (oid 2663) (6/6 pages)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_tblspc_relfilenode_index\" (oid 3455) (5/5 pages)\npg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_oid_index\" (oid 2662) (4/4 pages)\n6/6 relations (100%) 55/55 pages (100%) \n\nThe way the (x/y pages) is printed takes into account that the [startblock..endblock] range may reduce the number of pages to check (x) to something less than the number of pages in the relation (y), but the reporting is a bit of a lie when the startblock is beyond the end of the table, as it doesn't get passed to verify_heapam and so the number of blocks checked may be more than the zero blocks reported. I think I might need to fix this up tomorrow, but I want to get what I have in this patch set posted tonight, so it's not fixed here. 
Also, there are multiple ways of addressing this, and I'm having trouble deciding which way is best. I can exclude the relation from being checked at all, or realize earlier that I'm not going to honor the startblock argument and compute the blocks to check correctly. Thoughts?\n\n> but this message is prefixed with\n> \"pg_amcheck: \" whereas the messages about an altogether invalid\n> starting block were not so prefixed. Is there a reason not to make\n> this consistent?\n\nThat was a stray usage of pg_log_error where fprintf should have been used. Fixed.\n\n> I also tried using a random positive integer for startblock, and for\n> every relation I am told \"ERROR: starting block number must be\n> between 0 and <whatever>\". That makes sense, because I used a big\n> number for the start block and I don't have any big relations, but it\n> makes for an absolute ton of output, because every verify_heapam query\n> is 11 lines long.\n\nThis happens because the range was being passed down to verify_heapam. It won't do that now.\n\n> This suggests a couple of possible improvements.\n> First, maybe we should only display the query that produced the error\n> in verbose mode.\n\nNo longer relevant.\n\n> Second, maybe the verify_heapam() query should be\n> tightened up so that it doesn't stretch across quite so many lines.\n\nNot a bad idea, but no longer relevant to the startblock/endblock issues. 
Done.\n\n> I\n> think the call to verify_heapam() could be spread across like 2 lines\n> rather than 7, which would improve readability.\n\nDone.\n\n> On a related note, I\n> wonder why we need every verify_heapam() call to join to pg_class and\n> pg_namespace just to fetch the schema and table name which,\n> presumably, we should or at least could already have.\n\nWe didn't have it, but we do now, so that join is removed.\n\n> This kinda\n> relates to my comment earlier about making -v print a message per\n> relation so that we can see, in human-readable format, which relations\n> are getting checked.\n\nDone.\n\n> Right now, if you got an error checking just one\n> relation, how would you know which relation you got it from? Unless\n> the server happens to report that information in the message, you're\n> just in the dark, because pg_amcheck won't tell you.\n\nThat information is now included in the query text, so you can see it in the error message along with the oid.\n\n> \n> The line \"Read the description of the amcheck contrib module for\n> details\" seems like it could be omitted. Perhaps the first line of the\n> help message could be changed to read \"pg_amcheck uses amcheck to find\n> corruption in a PostgreSQL database.\" or something like that, instead.\n\nDone.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 3 Mar 2021 22:25:56 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Most of these changes sound good. I'll go through the whole patch\nagain today, or as much of it as I can. But before I do that, I want\nto comment on this point specifically.\n\nOn Thu, Mar 4, 2021 at 1:25 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I think this is fixed up now. There is an interaction with amcheck's verify_heapam(), where that function raises an error if the startblock or endblock arguments are out of bounds for the relation in question. Rather than aborting the entire pg_amcheck run, it avoids passing inappropriate block ranges to verify_heapam() and outputs a warning, so:\n>\n> % pg_amcheck mark.dilger -t foo -t pg_class --progress -v --startblock=35 --endblock=77\n> pg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in schema \"public\"\n> 0/6 relations (0%) 0/55 pages (0%)\n> pg_amcheck: checking table \"mark.dilger\".\"public\".\"foo\" (oid 16385) (10/45 pages)\n> pg_amcheck: warning: ignoring endblock option 77 beyond end of table \"mark.dilger\".\"public\".\"foo\"\n> pg_amcheck: checking btree index \"mark.dilger\".\"public\".\"foo_idx\" (oid 16388) (30/30 pages)\n> pg_amcheck: checking table \"mark.dilger\".\"pg_catalog\".\"pg_class\" (oid 1259) (0/13 pages)\n> pg_amcheck: warning: ignoring startblock option 35 beyond end of table \"mark.dilger\".\"pg_catalog\".\"pg_class\"\n> pg_amcheck: warning: ignoring endblock option 77 beyond end of table \"mark.dilger\".\"pg_catalog\".\"pg_class\"\n> pg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_relname_nsp_index\" (oid 2663) (6/6 pages)\n> pg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_tblspc_relfilenode_index\" (oid 3455) (5/5 pages)\n> pg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_oid_index\" (oid 2662) (4/4 pages)\n> 6/6 relations (100%) 55/55 pages (100%)\n>\n> The way the (x/y pages) is printed takes into account that the [startblock..endblock] range may 
reduce the number of pages to check (x) to something less than the number of pages in the relation (y), but the reporting is a bit of a lie when the startblock is beyond the end of the table, as it doesn't get passed to verify_heapam and so the number of blocks checked may be more than the zero blocks reported. I think I might need to fix this up tomorrow, but I want to get what I have in this patch set posted tonight, so it's not fixed here. Also, there are multiple ways of addressing this, and I'm having trouble deciding which way is best. I can exclude the relation from being checked at all, or realize earlier that I'm not going to honor the startblock argument and compute the blocks to check correctly. Thoughts?\n\nI think this whole approach is pretty suspect because the number of\nblocks in the relation can increase (by relation extension) or\ndecrease (by VACUUM or TRUNCATE) between the time when we query for\nthe list of target relations and the time we get around to executing\nany queries against them. I think it's OK to use the number of\nrelation pages for progress reporting because progress reporting is\nonly approximate anyway, but I wouldn't print them out in the progress\nmessages, and I wouldn't try to fix up the startblock and endblock\narguments on the basis of how long you think that relation is going to\nbe. You seem to view the fact that the server reported the error as\nthe reason for the problem, but I don't agree. I think having the\nserver report the error here is right, and the problem is that the\nerror reporting sucked because it was long-winded and didn't\nnecessarily tell you which table had the problem.\n\nThere are a LOT of things that can go wrong when we go try to run\nverify_heapam on a table. The table might have been dropped; in fact,\non a busy production system, such cases are likely to occur routinely\nif DDL is common, which for many users it is. 
The system catalog\nentries might be screwed up, so that the relation can't be opened.\nThere might be an unreadable page in the relation, either because the\nOS reports an I/O error or something like that, or because checksum\nverification fails. There are various other possibilities. We\nshouldn't view such errors as low-level things that occur only in\nfringe cases; this is a corruption-checking tool, and we should expect\nthat running it against messed-up databases will be common. We\nshouldn't try to interpret the errors we get or make any big decisions\nabout them, but we should have a clear way of reporting them so that\nthe user can decide what to do.\n\nJust as an experiment, I suggest creating a database with 100 tables\nin it, each with 1 index, and then deleting a single pg_attribute\nentry for 10 of the tables, and then running pg_amcheck. I think you\nwill get 20 errors - one for each messed-up table and one for the\ncorresponding index. Maybe you'll get errors for the TOAST table\nchecks too, if the tables have TOAST tables, although that seems like\nit should be avoidable. Now, no matter what you do, the tool is going\nto produce a lot of output here, because you have a lot of problems,\nand that's OK. But how understandable is that output, and how concise\nis it? If it says something like:\n\npg_amcheck: could not check \"SCHEMA_NAME\".\"TABLE_NAME\": ERROR: some\nattributes are missing or something\n\n...and that line is repeated 20 times, maybe with a context or detail\nline for each one or something like that, then you have got a good UI.\nIf it's not clear which tables have the problem, you have got a bad\nUI. If it dumps out 300 lines of output instead of 20 or 40, you have\na UI that is so verbose that usability is going to be somewhat\nimpaired, which is why I suggested only showing the query in verbose\nmode.\n\nBTW, another thing that might be interesting is to call\nPQsetErrorVerbosity(conn, PQERRORS_VERBOSE) in verbose mode. 
It's\nprobably possible to contrive a case where the server error message is\nsomething generic like \"cache lookup failed for relation %u\" which\noccurs in a whole bunch of places in the source code, and being able\nget the file and line number information can be really useful when\ntrying to track such things down.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Mar 2021 10:29:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Most of these changes sound good. I'll go through the whole patch\n> again today, or as much of it as I can. But before I do that, I want\n> to comment on this point specifically.\n\nJust a thought - I don't feel strongly about this - but you may want\nto consider storing your list of patterns in an array that gets\nresized as necessary rather than a list. Then the pattern ID would\njust be pattern_ptr - pattern_array, and finding the pattern by ID\nwould just be pattern_ptr = &pattern_array[pattern_id]. I don't think\nthere's a real efficiency issue here because the list of patterns is\nalmost always going to be short, and even if somebody decides to\nprovide a very long list of patterns (e.g. by using xargs) it's\nprobably still not that big a deal. A sufficiently obstinate user\nrunning an operating system where argument lists can be extremely long\ncould probably make this the dominant cost by providing a gigantic\nnumber of patterns that don't match anything, but such a person is\ntrying to prove a point, rather than accomplish anything useful, so I\ndon't care. But, the code might be more elegant the other way.\n\nThis patch increases the number of cases where we use ^ to assert that\nexactly one of two things is true from 4 to 5. I think it might be\nbetter to just write out (a && !b) || (b && !a), but there is some\nprecedent for the way you did it so perhaps it's fine.\n\nThe name prepare_table_command() is oddly non-parallel with\nverify_heapam_slot_handler(). Seems better to call it either a table\nthroughout, or a heapam throughout. Actually I think I would prefer\n\"heap\" to either of those, but I definitely think we shouldn't switch\nterminology. Note that prepare_btree_command() doesn't have this\nissue, since it matches verify_btree_slot_handler(). On a related\nnote, \"c.relam = 2\" is really a test for is_heap, not is_table. 
We\nmight have other table AMs in the future, but only one of those AMs\nwill be called heap, and only one will have OID 2.\n\nYou've got some weird round-tripping stuff where you sent literal\nvalues to the server so that you can turn around and get them back\nfrom the server. For example, you've got prepare_table_command()\nselect rel->nspname and rel->relname back from the server as literals,\nwhich seems silly because we have to already have that information or\nwe couldn't ask the server to give it to us ... and if we already have\nit, then why do we need to get it again? The reason it's like this\nseems to be that after calling prepare_table_command(), we use\nParallelSlotSetHandler() to set verify_heapam_slot_handler() as the\ncallback, and we set sql.data as the callback context, so we don't have access\nto the RelationInfo object when we're handling the slot result. But\nthat's easy to fix: just store the sql as a field inside the\nRelationInfo, and then pass a pointer to the whole RelationInfo to the\nslot handler. Then you don't need to round-trip the table and schema\nnames; and you have the values available even if an error happens.\n\nOn a somewhat related note, I think it might make sense to have the\nslot handlers try to free memory. It seems hard to make pg_amcheck\nleak enough memory to matter, but I guess it's not entirely\nimplausible that someone could be checking let's say 10 million\nrelations. Freeing the query strings could probably prevent a half a\nGB or so of accumulated memory usage under those circumstances. I\nsuppose freeing nspname and relname would save a bit more, but it's\nhardly worth doing since they are a lot shorter and you've got to have\nall that information in memory at once at some point anyway; similarly\nwith the RelationInfo structures, which have the further complexity of\nbeing part of a linked list you might not want to corrupt. 
But you\ndon't need to have every query string in memory at the same time, just\nas many as are running at any one time.\n\nAlso, maybe compile_relation_list_one_db() should keep the result set\naround so that you don't need to pstrdup() the nspname and relname in\nthe first place. Right now, just before compile_relation_list_one_db()\ncalls PQclear() you have two copies of every nspname and relname\nallocated. If you just kept the result sets around forever, the peak\nmemory usage would be lower than it is currently. If you really wanted\nto get fancy you could arrange to free each result set when you've\nfinished that database, but that seems annoying to code and I'm pretty\nsure it doesn't matter.\n\nThe CTEs called \"include_raw\" and \"exclude_raw\" are used as part\nof the query to construct a list of tables. The regexes are fished\nthrough there, and the pattern IDs, which makes sense, but the raw\npatterns are also fished through, and I don't see a reason for that.\nWe don't seem to need that for anything. The same seems to apply to\nthe query used to resolve database patterns.\n\nI see that most of the queries have now been adjusted to be spread\nacross fewer lines, which is good, but please make sure to do that\neverywhere. In particular, I notice that the bt_index_check calls are\nstill too spread out.\n\nMore in a bit, need to grab some lunch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Mar 2021 12:27:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 12:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> More in a bit, need to grab some lunch.\n\nMoving on to the tests, in 003_check.pl, I think it would be slightly\nbetter if relation_toast were to select ct.oid::regclass and then just\nhave the caller use that value directly. We'd certainly want to do\nthat if the name could contain any characters that might require\nquoting. Here that's not possible, but I think we might as well use\nthe same technique anyway.\n\nI'm not sure how far to go with it, but I think that you might want to\ntry to enhance the logging in some of the cases where the TAP tests\nmight fail. In particular, if either of these trip in the buildfarm,\nit doesn't seem like it will be too easy to figure out why they\nfailed:\n\n+ fail('Xid thresholds not as expected');\n+ fail('Page layout differs from our expectations');\n\nYou might want to rephrase the message to incorporate the values that\ntriggered the failure, e.g. \"datfrozenxid $datfrozenxid is not between\n3 and $relfrozenxid\", \"expected (a,b) = (12345678,abcdefg) but got\n($x,$y)\", so that if the buildfarm happens to fail there's a shred of\nhope that we might be able to guess the reason from the message. You\ncould also give some thought to whether there are any tests that can\nbe improved in similar ways. Test::More is nice in that when you run a\ntest with eq() or like() and it fails it will tell you about the input\nvalues in the diagnostic, but if you do something like is($x < 4, ...)\ninstead of cmp_ok($x, '<', 4, ...) then you lose that. I'm not saying\nyou're doing that exact thing, just saying that looking through the\ntest code with an eye to finding things where you could output a\nlittle more info about a potential failure might be a worthwhile\nactivity.\n\nIf it were me, I would get rid of ROWCOUNT and have a list of\nclosures, and then loop over the list and call each one e.g. my\n@corruption = ( sub { ... }, sub { ... 
}, sub { ... }) or maybe\nsomething like what I did with @scenario in\nsrc/bin/pg_verifybackup/t/003_corruption.pl, but this is ultimately a\nstyle preference and I think the way you actually did it is also\nreasonable, and some people might find it more readable than the other\nway.\n\nThe name int4_fickle_ops is positively delightful and I love having a\ntest case like this.\n\nOn the whole, I think these tests look quite solid. I am a little\nconcerned, as you may gather from the comment above, that they will\nnot survive contact with the buildfarm, because they will turn out to\nbe platform or OS-dependent in some way. However, I can see that\nyou've taken steps to avoid such dependencies, and maybe we'll be\nlucky and those will work. Also, while I am suspicious something's\ngoing to break, I don't know what it's going to be, so I can't suggest\nany method to avoid it. I think we'll just have to keep an eye on the\nbuildfarm post-commit and see what crops up.\n\nTurning to the documentation, I see that it is documented that a bare\ncommand-line argument can be a connection string rather than a\ndatabase name. That sounds like a good plan, but when I try\n'pg_amcheck sslmode=require' it does not work: FATAL: database\n\"sslmode=require\" does not exist. The argument to -e is also\ndocumented to be a connection string, but that also seems not to work.\nSome thought might need to be given to what exactly these connection\noptions are supposed to mean. Like, do the connection options I set via\n-e apply to all the connections I make, or just the one to the\nmaintenance database? How do I set connection options for connections\nto databases whose names aren't specified explicitly but are\ndiscovered by querying pg_database? Maybe instead of allowing these to\nbe a connection string, we should have a separate option that can be\nused just for the purpose of setting connection options that then\napply to all connections. 
That seems a little bit oddly unlike other\ntools, but if I want sslmode=verify-ca or something on all my\nconnections, there should be an easy way to get it.\n\nThe documentation makes many references to patterns, but does not\nexplain what a pattern is. I see that psql's documentation contains an\nexplanation, and pg_dump's documentation links to psql's\ndocumentation. pg_amcheck should probably link to psql's\ndocumentation, too.\n\nIn the documentation for -d, you say that \"If -a --all is also\nspecified, -d --database does not additionally affect which databases\nare checked.\" I suggest replacing \"does not additionally affect which\ndatabases are checked\" with \"has no effect.\"\n\nIn two places you say \"without regard for\" but I think it should be\n\"without regard to\".\n\nIn the documentation for --no-strict-names you use \"nor\" where I think\nit should say \"or\".\n\nI kind of wonder whether we need --quiet. It seems like right now it\nonly does two things. One is to control complaints about ignoring the\nstartblock and endblock options, but I don't agree with that behavior\nanyway. The other is to control whether we complain about unmatched\npatterns, but I think that could just be controlled by --no-strict-names,\ni.e. normally an unmatched pattern results in a complaint and a\nfailure, but with --no-strict-names there is neither a complaint nor a\nfailure. Having a flag to control whether we get the message\nseparately from whether we get the failure doesn't seem helpful.\n\nI don't think it's good to say \"This is an alias for\" in the\ndocumentation of -i -I -t -T. I suggest instead saying \"This is\nsimilar to\".\n\nInstead of \"Option BLAH takes precedence over...\" I suggest \"The BLAH\noption takes precedence over...\"\n\nOK, that's it from me for this review pass.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Mar 2021 16:23:42 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 7:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think this whole approach is pretty suspect because the number of\n> blocks in the relation can increase (by relation extension) or\n> decrease (by VACUUM or TRUNCATE) between the time when we query for\n> the list of target relations and the time we get around to executing\n> any queries against them. I think it's OK to use the number of\n> relation pages for progress reporting because progress reporting is\n> only approximate anyway, but I wouldn't print them out in the progress\n> messages, and I wouldn't try to fix up the startblock and endblock\n> arguments on the basis of how long you think that relation is going to\n> be.\n\nI don't think that the struct AmcheckOptions block fields (e.g.,\nstartblock) should be of type 'long' -- that doesn't work well on\nWindows, where 'long' is only 32-bit. To be fair we already do the\nsame thing elsewhere, but there is no reason to repeat those mistakes.\n(I'm rather suspicious of 'long' in general.)\n\nI think that you could use BlockNumber + strtoul() without breaking Windows.\n\n> There are a LOT of things that can go wrong when we go try to run\n> verify_heapam on a table. The table might have been dropped; in fact,\n> on a busy production system, such cases are likely to occur routinely\n> if DDL is common, which for many users it is. The system catalog\n> entries might be screwed up, so that the relation can't be opened.\n> There might be an unreadable page in the relation, either because the\n> OS reports an I/O error or something like that, or because checksum\n> verification fails. There are various other possibilities. We\n> shouldn't view such errors as low-level things that occur only in\n> fringe cases; this is a corruption-checking tool, and we should expect\n> that running it against messed-up databases will be common. 
We\n> shouldn't try to interpret the errors we get or make any big decisions\n> about them, but we should have a clear way of reporting them so that\n> the user can decide what to do.\n\nI agree.\n\nYour database is not supposed to be corrupt. Once your database has\nbecome corrupt, all bets are off -- something happened that was\nsupposed to be impossible -- which seems like a good reason to be\nmodest about what we think we know.\n\nThe user should always see the unvarnished truth. pg_amcheck should\nnot presume to suppress errors from lower level code, except perhaps\nin well-scoped special cases.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 4 Mar 2021 14:04:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 4, 2021, at 2:04 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Thu, Mar 4, 2021 at 7:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I think this whole approach is pretty suspect because the number of\n>> blocks in the relation can increase (by relation extension) or\n>> decrease (by VACUUM or TRUNCATE) between the time when we query for\n>> the list of target relations and the time we get around to executing\n>> any queries against them. I think it's OK to use the number of\n>> relation pages for progress reporting because progress reporting is\n>> only approximate anyway, but I wouldn't print them out in the progress\n>> messages, and I wouldn't try to fix up the startblock and endblock\n>> arguments on the basis of how long you think that relation is going to\n>> be.\n> \n> I don't think that the struct AmcheckOptions block fields (e.g.,\n> startblock) should be of type 'long' -- that doesn't work well on\n> Windows, where 'long' is only 32-bit. To be fair we already do the\n> same thing elsewhere, but there is no reason to repeat those mistakes.\n> (I'm rather suspicious of 'long' in general.)\n> \n> I think that you could use BlockNumber + strtoul() without breaking Windows.\n\nFair enough.\n\n>> There are a LOT of things that can go wrong when we go try to run\n>> verify_heapam on a table. The table might have been dropped; in fact,\n>> on a busy production system, such cases are likely to occur routinely\n>> if DDL is common, which for many users it is. The system catalog\n>> entries might be screwed up, so that the relation can't be opened.\n>> There might be an unreadable page in the relation, either because the\n>> OS reports an I/O error or something like that, or because checksum\n>> verification fails. There are various other possibilities. 
We\n>> shouldn't view such errors as low-level things that occur only in\n>> fringe cases; this is a corruption-checking tool, and we should expect\n>> that running it against messed-up databases will be common. We\n>> shouldn't try to interpret the errors we get or make any big decisions\n>> about them, but we should have a clear way of reporting them so that\n>> the user can decide what to do.\n> \n> I agree.\n> \n> Your database is not supposed to be corrupt. Once your database has\n> become corrupt, all bets are off -- something happened that was\n> supposed to be impossible -- which seems like a good reason to be\n> modest about what we think we know.\n> \n> The user should always see the unvarnished truth. pg_amcheck should\n> not presume to suppress errors from lower level code, except perhaps\n> in well-scoped special cases.\n\nI think Robert mistook why I was doing that. I was thinking about a different usage pattern. If somebody thinks a subset of relations have been badly corrupted, but doesn't know which relations those might be, they might try to find them with pg_amcheck, but wanting to just check the first few blocks per relation in order to sample the relations. So,\n\n pg_amcheck --startblock=0 --endblock=9 --no-dependent-indexes\n\nor something like that. I don't think it's very fun to have it error out for each relation that doesn't have at least ten blocks, nor is it fun to have those relations skipped by error'ing out before checking any blocks, as they might be the corrupt relations you are looking for. But using --startblock and --endblock for this is not a natural fit, as evidenced by how I was trying to \"fix things up\" for the user, so I'll punt on this usage until some future version, when I might add a sampling option.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 4 Mar 2021 14:39:04 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 5:39 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I think Robert mistook why I was doing that. I was thinking about a different usage pattern. If somebody thinks a subset of relations have been badly corrupted, but doesn't know which relations those might be, they might try to find them with pg_amcheck, but wanting to just check the first few blocks per relation in order to sample the relations. So,\n>\n> pg_amcheck --startblock=0 --endblock=9 --no-dependent-indexes\n>\n> or something like that. I don't think it's very fun to have it error out for each relation that doesn't have at least ten blocks, nor is it fun to have those relations skipped by error'ing out before checking any blocks, as they might be the corrupt relations you are looking for. But using --startblock and --endblock for this is not a natural fit, as evidenced by how I was trying to \"fix things up\" for the user, so I'll punt on this usage until some future version, when I might add a sampling option.\n\nI admit I hadn't thought of that use case. I guess somebody could want\nto do that, but it doesn't seem all that useful. Checking the first\nup-to-ten blocks of every relation is not a very representative\nsample, and it's not clear to me that sampling is a good idea even if\nit were representative. What good is it to know that 10% of my\ndatabase is probably not corrupted?\n\nOn the other hand, people want to do all kinds of things that seem\nstrange to me, and this might be another one. But, if that's so, then\nI think the right place to implement it is in amcheck itself, not\npg_amcheck. I think pg_amcheck should be, now and in the future, a\nthin wrapper around the functionality provided by amcheck, just\nproviding target selection and parallel execution. 
If you put\nsomething into pg_amcheck that figures out how long the relation is\nand runs it on some of the blocks, that functionality is only\naccessible to people who are accessing amcheck via pg_amcheck. If you\nput it in amcheck itself and just expose it through pg_amcheck, then\nit's accessible either way. It's probably cleaner and more performant\nto do it that way, too.\n\nSo if you did add a sampling option in the future, that's the way I\nwould recommend doing it, but I think it is probably best not to go\nthere right now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 8 Mar 2021 11:26:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 8, 2021, at 8:26 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Mar 4, 2021 at 5:39 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> I think Robert mistook why I was doing that. I was thinking about a different usage pattern. If somebody thinks a subset of relations have been badly corrupted, but doesn't know which relations those might be, they might try to find them with pg_amcheck, but wanting to just check the first few blocks per relation in order to sample the relations. So,\n>> \n>> pg_amcheck --startblock=0 --endblock=9 --no-dependent-indexes\n>> \n>> or something like that. I don't think it's very fun to have it error out for each relation that doesn't have at least ten blocks, nor is it fun to have those relations skipped by error'ing out before checking any blocks, as they might be the corrupt relations you are looking for. But using --startblock and --endblock for this is not a natural fit, as evidenced by how I was trying to \"fix things up\" for the user, so I'll punt on this usage until some future version, when I might add a sampling option.\n> \n> I admit I hadn't thought of that use case. I guess somebody could want\n> to do that, but it doesn't seem all that useful. Checking the first\n> up-to-ten blocks of every relation is not a very representative\n> sample, and it's not clear to me that sampling is a good idea even if\n> it were representative. 
What good is it to know that 10% of my\n> database is probably not corrupted?\n\n\n`cd $PGDATA; tar xfz my_csv_data.tgz` ctrl-C ctrl-C ctrl-C\n`rm -rf $PGDATA` ctrl-C ctrl-C ctrl-C\n`/my/stupid/backup/and/restore/script.sh` ctrl-C ctrl-C ctrl-C\n\n# oh wow, i wonder if any relations got overwritten with csv file data, or had their relation files unlinked, or ...?\n\n`pg_amcheck --jobs=8 --startblock=0 --endblock=10`\n\n# ah, darn, it's spewing lots of irrelevant errors because some relations are too short\n\n`pg_amcheck --jobs=8 --startblock=0 --endblock=0`\n\n# ah, darn, it's still spewing lots of irrelevant errors because I have lots of indexes with zero blocks of data\n\n`pg_amcheck --jobs=8`\n\n# ah, darn, it's taking forever, because it's processing huge tables in their entirety\n\nI agree this can be left to later, and the --startblock and --endblock options are the wrong way to do it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 12:30:16 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Robert, Peter, in response to your review comments spanning multiple emails:\n\n> On Mar 4, 2021, at 7:29 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Most of these changes sound good. I'll go through the whole patch\n> again today, or as much of it as I can. But before I do that, I want\n> to comment on this point specifically.\n> \n> On Thu, Mar 4, 2021 at 1:25 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> I think this is fixed up now. There is an interaction with amcheck's verify_heapam(), where that function raises an error if the startblock or endblock arguments are out of bounds for the relation in question. Rather than aborting the entire pg_amcheck run, it avoids passing inappropriate block ranges to verify_heapam() and outputs a warning, so:\n>> \n>> % pg_amcheck mark.dilger -t foo -t pg_class --progress -v --startblock=35 --endblock=77\n>> pg_amcheck: in database \"mark.dilger\": using amcheck version \"1.3\" in schema \"public\"\n>> 0/6 relations (0%) 0/55 pages (0%)\n>> pg_amcheck: checking table \"mark.dilger\".\"public\".\"foo\" (oid 16385) (10/45 pages)\n>> pg_amcheck: warning: ignoring endblock option 77 beyond end of table \"mark.dilger\".\"public\".\"foo\"\n>> pg_amcheck: checking btree index \"mark.dilger\".\"public\".\"foo_idx\" (oid 16388) (30/30 pages)\n>> pg_amcheck: checking table \"mark.dilger\".\"pg_catalog\".\"pg_class\" (oid 1259) (0/13 pages)\n>> pg_amcheck: warning: ignoring startblock option 35 beyond end of table \"mark.dilger\".\"pg_catalog\".\"pg_class\"\n>> pg_amcheck: warning: ignoring endblock option 77 beyond end of table \"mark.dilger\".\"pg_catalog\".\"pg_class\"\n>> pg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_relname_nsp_index\" (oid 2663) (6/6 pages)\n>> pg_amcheck: checking btree index \"mark.dilger\".\"pg_catalog\".\"pg_class_tblspc_relfilenode_index\" (oid 3455) (5/5 pages)\n>> pg_amcheck: checking btree index 
\"mark.dilger\".\"pg_catalog\".\"pg_class_oid_index\" (oid 2662) (4/4 pages)\n>> 6/6 relations (100%) 55/55 pages (100%)\n>> \n>> The way the (x/y pages) is printed takes into account that the [startblock..endblock] range may reduce the number of pages to check (x) to something less than the number of pages in the relation (y), but the reporting is a bit of a lie when the startblock is beyond the end of the table, as it doesn't get passed to verify_heapam and so the number of blocks checked may be more than the zero blocks reported. I think I might need to fix this up tomorrow, but I want to get what I have in this patch set posted tonight, so it's not fixed here. Also, there are multiple ways of addressing this, and I'm having trouble deciding which way is best. I can exclude the relation from being checked at all, or realize earlier that I'm not going to honor the startblock argument and compute the blocks to check correctly. Thoughts?\n> \n> I think this whole approach is pretty suspect because the number of\n> blocks in the relation can increase (by relation extension) or\n> decrease (by VACUUM or TRUNCATE) between the time when we query for\n> the list of target relations and the time we get around to executing\n> any queries against them. I think it's OK to use the number of\n> relation pages for progress reporting because progress reporting is\n> only approximate anyway,\n\nFair point.\n\n> but I wouldn't print them out in the progress\n> messages,\n\nRemoved. \n\n> and I wouldn't try to fix up the startblock and endblock\n> arguments on the basis of how long you think that relation is going to\n> be.\n\nYeah, in light of a new day, that seems like a bad idea to me, too. Removed.\n\n> You seem to view the fact that the server reported the error as\n> the reason for the problem, but I don't agree. 
I think having the\n> server report the error here is right, and the problem is that the\n> error reporting sucked because it was long-winded and didn't\n> necessarily tell you which table had the problem.\n\nNo, I was thinking about a different usage pattern, but I've answered that already elsewhere on this thread.\n\n> There are a LOT of things that can go wrong when we go try to run\n> verify_heapam on a table. The table might have been dropped; in fact,\n> on a busy production system, such cases are likely to occur routinely\n> if DDL is common, which for many users it is. The system catalog\n> entries might be screwed up, so that the relation can't be opened.\n> There might be an unreadable page in the relation, either because the\n> OS reports an I/O error or something like that, or because checksum\n> verification fails. There are various other possibilities. We\n> shouldn't view such errors as low-level things that occur only in\n> fringe cases; this is a corruption-checking tool, and we should expect\n> that running it against messed-up databases will be common. We\n> shouldn't try to interpret the errors we get or make any big decisions\n> about them, but we should have a clear way of reporting them so that\n> the user can decide what to do.\n\nOnce again, I think you are right and have removed the objectionable behavior, but....\n\nThe --startblock and --endblock options make the most sense when the user is only checking one table, like\n\n pg_amcheck --startblock=17 --endblock=19 --table=my_schema.my_corrupt_table\n\nbecause the user likely has some knowledge about that table, perhaps from a prior run of pg_amcheck. The --startblock and --endblock arguments are a bit strange when used globally, as relations don't all have the same number of blocks, so\n\n pg_amcheck --startblock=17 --endblock=19 mydb\n\nwill very likely emit lots of error messages for tables which don't have blocks in that range. 
That's not entirely pg_amcheck's fault, as it just did what the user asked, but it also doesn't seem super helpful. I'm not going to do anything about it in this release.\n\n> Just as an experiment, I suggest creating a database with 100 tables\n> in it, each with 1 index, and then deleting a single pg_attribute\n> entry for 10 of the tables, and then running pg_amcheck. I think you\n> will get 20 errors - one for each messed-up table and one for the\n> corresponding index. Maybe you'll get errors for the TOAST tables\n> checks too, if the tables have TOAST tables, although that seems like\n> it should be avoidable. Now, no matter what you do, the tool is going\n> to produce a lot of output here, because you have a lot of problems,\n> and that's OK. But how understandable is that output, and how concise\n> is it? If it says something like:\n> \n> pg_amcheck: could not check \"SCHEMA_NAME\".\"TABLE_NAME\": ERROR: some\n> attributes are missing or something\n> \n> ...and that line is repeated 20 times, maybe with a context or detail\n> line for each one or something like that, then you have got a good UI.\n> If it's not clear which tables have the problem, you have got a bad\n> UI. 
If it dumps out 300 lines of output instead of 20 or 40, you have\n> a UI that is so verbose that usability is going to be somewhat\n> impaired, which is why I suggested only showing the query in verbose\n> mode.\n\nAfter running 'make installcheck', if I delete all entries from pg_class where relnamespace = 'pg_toast'::regnamespace and then run 'pg_amcheck regression', I get lines that look like this:\n\nheap relation \"regression\".\"public\".\"quad_poly_tbl\":\n ERROR: could not open relation with OID 17177\nheap relation \"regression\".\"public\".\"gin_test_tbl\":\n ERROR: could not open relation with OID 24793\nheap relation \"regression\".\"pg_catalog\".\"pg_depend\":\n ERROR: could not open relation with OID 8888\nheap relation \"regression\".\"public\".\"spgist_text_tbl\":\n ERROR: could not open relation with OID 25624\n\nwhich seems ok.\n\n\nIf instead I delete pg_attribute entries, as you suggest above, I get rows like this:\n\nheap relation \"regression\".\"regress_rls_schema\".\"rls_tbl\":\n ERROR: catalog is missing 1 attribute(s) for relid 26467\nheap relation \"regression\".\"regress_rls_schema\".\"rls_tbl_force\":\n ERROR: catalog is missing 1 attribute(s) for relid 26474\n\nwhich also seems ok.\n\n\nIf instead, I manually corrupt relation files belonging to the regression database, I get lines that look like this for corrupt heap relations:\n\nrelation \"regression\".\"public\".\"functional_dependencies\", block 28, offset 54, attribute 0\n attribute 0 with length 4294967295 ends at offset 50 beyond total tuple length 43\nrelation \"regression\".\"public\".\"functional_dependencies\", block 28, offset 55\n multitransaction ID is invalid\nrelation \"regression\".\"public\".\"functional_dependencies\", block 28, offset 57\n multitransaction ID is invalid\n\nand for corrupt btree relations:\n\nbtree relation \"regression\".\"public\".\"tenk1_unique1\":\n ERROR: high key invariant violated for index \"tenk1_unique1\"\n DETAIL: Index tid=(1,38) points to 
heap tid=(70,26) page lsn=0/33A96D0.\nbtree relation \"regression\".\"public\".\"tenk1_unique2\":\n ERROR: index tuple size does not equal lp_len in index \"tenk1_unique2\"\n DETAIL: Index tid=(1,35) tuple size=4913 lp_len=16 page lsn=0/33DFD98.\n HINT: This could be a torn page problem.\nbtree relation \"regression\".\"public\".\"tenk1_thous_tenthous\":\n ERROR: index tuple size does not equal lp_len in index \"tenk1_thous_tenthous\"\n DETAIL: Index tid=(1,36) tuple size=4402 lp_len=16 page lsn=0/34C0770.\n HINT: This could be a torn page problem.\n\nwhich likewise seems ok.\n\n\n> BTW, another thing that might be interesting is to call\n> PQsetErrorVerbosity(conn, PQERRORS_VERBOSE) in verbose mode. It's\n> probably possible to contrive a case where the server error message is\n> something generic like \"cache lookup failed for relation %u\" which\n> occurs in a whole bunch of places in the source code, and being able\n> get the file and line number information can be really useful when\n> trying to track such things down.\n\nGood idea. I decided to also honor the --quiet flag\n\n if (opts.verbose)\n PQsetErrorVerbosity(free_slot->connection, PQERRORS_VERBOSE);\n else if (opts.quiet)\n PQsetErrorVerbosity(free_slot->connection, PQERRORS_TERSE);\n\n> On Mar 4, 2021, at 2:04 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I don't think that the struct AmcheckOptions block fields (e.g.,\n> startblock) should be of type 'long' -- that doesn't work well on\n> Windows, where 'long' is only 32-bit. To be fair we already do the\n> same thing elsewhere, but there is no reason to repeat those mistakes.\n> (I'm rather suspicious of 'long' in general.)\n> \n> I think that you could use BlockNumber + strtoul() without breaking Windows.\n\n\nThanks for reviewing!\n\nGood points. I decided to use int64 instead of BlockNumber. 
The option processing needs to give a sensible error message if the user gives a negative number for the argument, so unsigned types are a bad fit.\n\n> On Thu, Mar 4, 2021 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> Most of these changes sound good. I'll go through the whole patch\n>> again today, or as much of it as I can. But before I do that, I want\n>> to comment on this point specifically.\n> \n> Just a thought - I don't feel strongly about this - but you may want\n> to consider storing your list of patterns in an array that gets\n> resized as necessary rather than a list. Then the pattern ID would\n> just be pattern_ptr - pattern_array, and finding the pattern by ID\n> would just be pattern_ptr = &pattern_array[pattern_id]. I don't think\n> there's a real efficiency issue here because the list of patterns is\n> almost always going to be short, and even if somebody decides to\n> provide a very long list of patterns (e.g. by using xargs) it's\n> probably still not that big a deal. A sufficiently obstinate user\n> running an operating system where argument lists can be extremely long\n> could probably make this the dominant cost by providing a gigantic\n> number of patterns that don't match anything, but such a person is\n> trying to prove a point, rather than accomplish anything useful, so I\n> don't care. But, the code might be more elegant the other way.\n\nDone. I was not too motivated by the efficiency argument, but the code to look up patterns is cleaner when the pattern_id is an index into an array than when it is a field in a struct that has to be searched in a list.\n\n> This patch increases the number of cases where we use ^ to assert that\n> exactly one of two things is true from 4 to 5. 
I think it might be\n> better to just write out (a && !b) || (b && !a), but there is some\n> precedent for the way you did it so perhaps it's fine.\n\nYour formulation takes longer for me to read and understand (by, perhaps, some milliseconds), but checking what C compilers guarantee to store in\n\n bool a = (i == j);\n bool b = (k == l);\n\nI found it hard to be sure that some compiler wouldn't do weird things with that. Two \"true\" values a and b could pass the (a ^ b) test if they represent \"true\" in two different bit patterns. I don't really think there is a risk here in practice, but looking up the relevant C standards isn't quick for future readers of this code, so I went with your formulation.\n\n> The name prepare_table_command() is oddly non-parallel with\n> verify_heapam_slot_handler(). Seems better to call it either a table\n> throughout, or a heapam throughout. Actually I think I would prefer\n> \"heap\" to either of those, but I definitely think we shouldn't switch\n> terminology. Note that prepare_btree_command() doesn't have this\n> issue, since it matches verify_btree_slot_handler(). On a related\n> note, \"c.relam = 2\" is really a test for is_heap, not is_table. We\n> might have other table AMs in the future, but only one of those AMs\n> will be called heap, and only one will have OID 2.\n\nChanged to use \"heap\" in many places where \"table\" was used previously, and to use \"btree\" in many places where \"index\" was used previously. The term \"heapam\" now only occurs as part of \"verify_heapam\", a function defined in contrib/amcheck and not changed here.\n\n> You've got some weird round-tripping stuff where you sent literal\n> values to the server so that you can turn around and get them back\n> from the server. 
For example, you've got prepare_table_command()\n> select rel->nspname and rel->relname back from the server as literals,\n> which seems silly because we have to already have that information or\n> we couldn't ask the server to give it to us ... and if we already have\n> it, then why do we need to get it again? The reason it's like this\n> seems to be that after calling prepare_table_command(), we use\n> ParallelSlotSetHandler() to set verify_heapam_slot_handler() as the\n> callback, and we set sql.data as the callback context, so we don't have access\n> to the RelationInfo object when we're handling the slot result. But\n> that's easy to fix: just store the sql as a field inside the\n> RelationInfo, and then pass a pointer to the whole RelationInfo to the\n> slot handler. Then you don't need to round-trip the table and schema\n> names; and you have the values available even if an error happens.\n\nChanged. I was doing that mostly so that people examining the server logs would have something more than the oid in the sql to suggest which table or index is being checked.\n\n> On a somewhat related note, I think it might make sense to have the\n> slot handlers try to free memory. It seems hard to make pg_amcheck\n> leak enough memory to matter, but I guess it's not entirely\n> implausible that someone could be checking let's say 10 million\n> relations. Freeing the query strings could probably prevent a half a\n> GB or so of accumulated memory usage under those circumstances. I\n> suppose freeing nspname and relname would save a bit more, but it's\n> hardly worth doing since they are a lot shorter and you've got to have\n> all that information in memory at once at some point anyway; similarly\n> with the RelationInfo structures, which have the further complexity of\n> being part of a linked list you might not want to corrupt. 
But you\n> don't need to have every query string in memory at the same time, just\n> as many as are running at any one time.\n\nChanged.\n\n> Also, maybe compile_relation_list_one_db() should keep the result set\n> around so that you don't need to pstrdup() the nspname and relname in\n> the first place. Right now, just before compile_relation_list_one_db()\n> calls PQclear() you have two copies of every nspname and relname\n> allocated. If you just kept the result sets around forever, the peak\n> memory usage would be lower than it is currently. If you really wanted\n> to get fancy you could arrange to free each result set when you've\n> finished that database, but that seems annoying to code and I'm pretty\n> sure it doesn't matter.\n\n\nHmm. When compile_relation_list_one_db() is processing the ith database out of N databases, all (nspname,relname) pairs are allocated for databases in [0..i], and additionally the result set for database i is in memory. The result sets for [0..i-1] have already been freed. Keeping around the result sets for all N databases seems more expensive, considering how much stuff is in struct pg_result, if N is large and the relations are spread across the databases rather than clumped together in the last one.\n\nI think your proposal might be a win for some users and a loss for others. Given that it is not a clear win, I don't care to implement it that way, as it takes more effort to remember which object owns which bit of memory.\n\nI have added pfree()s to the handlers to free the nspname and relname when finished. This does little to reduce the peak memory usage, though.\n\n> The CTEs called \"include_raw\" and \"exclude_raw\" are used as part\n> of the query to construct a list of tables. The regexes are fished\n> through there, and the pattern IDs, which makes sense, but the raw\n> patterns are also fished through, and I don't see a reason for that.\n> We don't seem to need that for anything. 
The same seems to apply to\n> the query used to resolve database patterns.\n\nChanged.\n\nBoth queries are changed to no longer have a \"pat\" column, and the \"id\" field (renamed as \"pattern_id\" for clarity) is used instead.\n\n> I see that most of the queries have now been adjusted to be spread\n> across fewer lines, which is good, but please make sure to do that\n> everywhere. In particular, I notice that the bt_index_check calls are\n> still too spread out.\n\nWhen running `pg_amcheck --echo`, the queries for a table and index now print as:\n\nSELECT blkno, offnum, attnum, msg FROM \"public\".verify_heapam(\nrelation := 33024, on_error_stop := false, check_toast := true, skip := 'none')\nSELECT * FROM \"public\".bt_index_check(index := '33029'::regclass, heapallindexed := false)\n\nWhich is two lines per heap table, and just one line per btree index.\n\n> On Thu, Mar 4, 2021 at 12:27 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> More in a bit, need to grab some lunch.\n> \n> Moving on to the tests, in 003_check.pl, I think it would be slightly\n> better if relation_toast were to select ct.oid::regclass and then just\n> have the caller use that value directly. We'd certainly want to do\n> that if the name could contain any characters that might require\n> quoting. Here that's not possible, but I think we might as well use\n> the same technique anyway.\n\nUsing c.reltoastrelid::regclass, which is basically the same idea.\n\n> I'm not sure how far to go with it, but I think that you might want to\n> try to enhance the logging in some of the cases where the TAP tests\n> might fail. In particular, if either of these trip in the buildfarm,\n> it doesn't seem like it will be too easy to figure out why they\n> failed:\n> \n> + fail('Xid thresholds not as expected');\n> + fail('Page layout differs from our expectations');\n\nOk, I've extended these messages with the extra debugging information. 
I have also changed them to use 'plan skip_all', since what we are really talking about here is an inability for the test to properly exercise pg_amcheck, not an actual failure of pg_amcheck to function correctly. This should save us some grief if the test isn't portable to all platforms in the build farm, though we'll have to check whether the skip messages are happening on any farm animals.\n\n> You might want to rephrase the message to incorporate the values that\n> triggered the failure, e.g. \"datfrozenxid $datfrozenxid is not between\n> 3 and $relfrozenxid\", \"expected (a,b) = (12345678,abcdefg) but got\n> ($x,$y)\", so that if the buildfarm happens to fail there's a shred of\n> hope that we might be able to guess the reason from the message.\n\nAdded to the skip_all message.\n\n> You\n> could also give some thought to whether there are any tests that can\n> be improved in similar ways. Test::More is nice in that when you run a\n> test with eq() or like() and it fails it will tell you about the input\n> values in the diagnostic, but if you do something like is($x < 4, ...)\n> instead of cmp_ok($x, '<', 4, ...) then you lose that. I'm not saying\n> you're doing that exact thing, just saying that looking through the\n> test code with an eye to finding things where you could output a\n> little more info about a potential failure might be a worthwhile\n> activity.\n\nI'm mostly using command_checks_all and command_fails_like. The main annoyance is that when a pattern fails to match, you get a rather long error message. I'm not sure that it's lacking information, though.\n\n> If it were me, I would get rid of ROWCOUNT and have a list of\n> closures, and then loop over the list and call each one e.g. my\n> @corruption = ( sub { ... }, sub { ... }, sub { ... 
}) or maybe\n> something like what I did with @scenario in\n> src/bin/pg_verifybackup/t/003_corruption.pl, but this is ultimately a\n> style preference and I think the way you actually did it is also\n> reasonable, and some people might find it more readable than the other\n> way.\n\nUnchanged. I think the closure idea is ok, but I am using the ROWCOUNT constant elsewhere (specifically, when inserting rows into the table) and using a constant for this helps keep the number of rows of data and the number of corruptions synchronized.\n\n> The name int4_fickle_ops is positively delightful and I love having a\n> test case like this.\n\nI know you know this already, but for others reading this thread, the test using int4_fickle_ops is testing the kind of index corruption that might happen if you changed the sort order underlying an index, such as by updating collation definitions. It was simpler to not muck around with collations in the test itself, but to achieve the sort order breakage this way.\n\n> On the whole, I think these tests look quite solid. I am a little\n> concerned, as you may gather from the comment above, that they will\n> not survive contact with the buildfarm, because they will turn out to\n> be platform or OS-dependent in some way. However, I can see that\n> you've taken steps to avoid such dependencies, and maybe we'll be\n> lucky and those will work. Also, while I am suspicious something's\n> going to break, I don't know what it's going to be, so I can't suggest\n> any method to avoid it. I think we'll just have to keep an eye on the\n> buildfarm post-commit and see what crops up.\n\nAs I mentioned above, I've changed some failures to 'plan skip_all => reason', so that the build farm won't break if the tests aren't portable in ways I'm already thinking about. 
We'll just see if it breaks in additional ways that I'm not thinking about.\n\n> Turning to the documentation, I see that it is documented that a bare\n> command-line argument can be a connection string rather than a\n> database name. That sounds like a good plan, but when I try\n> 'pg_amcheck sslmode=require' it does not work: FATAL: database\n> \"sslmode=require\" does not exist. The argument to -e is also\n> documented to be a connection string, but that also seems not to work.\n> Some thought might need to be given to what exactly these connection\n> options are supposed to mean. Like, do the connection options I set via\n> -e apply to all the connections I make, or just the one to the\n> maintenance database? How do I set connection options for connections\n> to databases whose names aren't specified explicitly but are\n> discovered by querying pg_database? Maybe instead of allowing these to\n> be a connection string, we should have a separate option that can be\n> used just for the purpose of setting connection options that then\n> apply to all connections. That seems a little bit oddly unlike other\n> tools, but if I want sslmode=verify-ca or something on all my\n> connections, there should be an easy way to get it.\n\nI'm not sure where you are getting the '-e' from. That is the short form of --echo, and not what you are likely to want. However, your larger point is valid.\n\nI don't like the idea that pg_amcheck would handle these options in a way that is incompatible with reindexdb or vacuumdb. I think pg_amcheck can have a superset of those tools' options, but it should not have options that are incompatible with those tools' options. That way, if the extra options that pg_amcheck offers become popular, we can add support for them in those other tools. 
But if the options are incompatible, we'd not be able to do that without breaking backward compatibility of those tools' interfaces, which we wouldn't want to do.\n\nAs such, I have solved the problem by reducing the number of dbname arguments you can provide on the command-line to just one. (This does not limit the number of database *patterns* that you can supply.) Those tools only allow one dbname on the command line, so this is not a regression of functionality from what those tools offer. Only the single dbname argument, or single maintenance-db argument, can be a connection string. The database patterns do not support that, nor would it make sense for them to do so.\n\nAll of the following should now work:\n\n pg_amcheck --all \"port=5555 sslmode=require\"\n\n pg_amcheck --maintenance-db=\"host=myhost port=5555 dbname=mydb sslmode=require\" --all\n\n pg_amcheck -d foo -d bar -d baz mydb\n\n pg_amcheck -d foo -d bar -d baz \"host=myhost dbname=mydb\"\n\nNote that using --all with a connection string is a pg_amcheck extension. It doesn't currently work in reindexdb, which complains.\n\nThere is a strange case, `pg_amcheck --maintenance-db=\"port=5555 dbname=postgres\" \"port=5432 dbname=regression\"`, which doesn't complain, despite there being nothing listening on port 5555. This is because pg_amcheck completely ignores the maintenance-db argument in this instance, but I have not changed this behavior, because reindexdb does the same thing.\n\n> The documentation makes many references to patterns, but does not\n> explain what a pattern is. I see that psql's documentation contains an\n> explanation, and pg_dump's documentation links to psql's\n> documentation. pg_amcheck should probably link to psql's\n> documentation, too.\n\nA prior version of this patch had a reference to that, but no more. Thanks for noticing. I've put it back in. 
There is some tension here between the desire to keep the docs concise and the desire to explain things better with examples, etc. I'm not sure I've got that balance right, but I'm too close to the project to be the right person to make that call. Does it seem ok?\n\n> In the documentation for -d, you say that \"If -a --all is also\n> specified, -d --database does not additionally affect which databases\n> are checked.\" I suggest replacing \"does not additionally affect which\n> databases are checked\" with \"has no effect.\"\n\nChanged.\n\n> In two places you say \"without regard for\" but I think it should be\n> \"without regard to\".\n\nChanged.\n\n> In the documentation for --no-strict-names you use \"nor\" where I think\n> it should say \"or\".\n\nChanged.\n\n> I kind of wonder whether we need --quiet. It seems like right now it\n> only does two things. One is to control complaints about ignoring the\n> startblock and endblock options, but I don't agree with that behavior\n> anyway. The other is control whether we complain about unmatched\n> patterns, but I think that could just be controlled --no-strict-names\n> i.e. normally an unmatched pattern results in a complaint and a\n> failure, but with --no-strict-names there is neither a complaint nor a\n> failure. Having a flag to control whether we get the message\n> separately from whether we get the failure doesn't seem helpful.\n\nHmm. I think that having --quiet plus --no-strict-names suppress the warnings about unmatched patterns has some value.\n\nAlso, as discussed above, I also now decrease the PGVerbosity to PQERRORS_TERSE, which has additional value, I think.\n\nBut I don't feel strongly about this, and if you'd rather --quiet be removed, that's fine, too. But I'll wait to hear back about that.\n\n> I don't think it's good to say \"This is an alias for\" in the\n> documentation of -i -I -t -T. 
I suggest instead saying \"This is\n> similar to\".\n\nChanged.\n\n> Instead of \"Option BLAH takes precedence over...\" I suggest \"The BLAH\n> option takes precedence over...\"\n\nChanged.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 10 Mar 2021 08:10:02 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 11:10 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Once again, I think you are right and have removed the objectionable behavior, but....\n>\n> The --startblock and --endblock options make the most sense when the user is only checking one table, like\n>\n> pg_amcheck --startblock=17 --endblock=19 --table=my_schema.my_corrupt_table\n>\n> because the user likely has some knowledge about that table, perhaps from a prior run of pg_amcheck. The --startblock and --endblock arguments are a bit strange when used globally, as relations don't all have the same number of blocks, so\n>\n> pg_amcheck --startblock=17 --endblock=19 mydb\n>\n> will very likely emit lots of error messages for tables which don't have blocks in that range. That's not entirely pg_amcheck's fault, as it just did what the user asked, but it also doesn't seem super helpful. I'm not going to do anything about it in this release.\n\n+1 to all that. I tend toward the opinion that trying to make\n--startblock and --endblock do anything useful in the context of\nchecking multiple relations is not really possible, and therefore we\njust shouldn't put any effort into it. But if user feedback shows\notherwise, we can always do something about it later.\n\n> After running 'make installcheck', if I delete all entries from pg_class where relnamespace = 'pg_toast'::regclass, by running 'pg_amcheck regression', I get lines that look like this:\n>\n> heap relation \"regression\".\"public\".\"quad_poly_tbl\":\n> ERROR: could not open relation with OID 17177\n\nIn this here example, the first line ends in a colon.\n\n> relation \"regression\".\"public\".\"functional_dependencies\", block 28, offset 54, attribute 0\n> attribute 0 with length 4294967295 ends at offset 50 beyond total tuple length 43\n\nBut this here one does not. 
Seems like it should be consistent.\n\nThe QUALIFIED_NAME_FIELDS macro doesn't seem to be used anywhere,\nwhich is good, because macros with unbalanced parentheses are usually\nnot a good plan; and a macro that expands to a comma-separated list of\nthings is suspect too.\n\n\"invalid skip options\\n\" seems too plural.\n\nWith regard to your use of strtol() for --{start,end}block, telling\nthe user that their input is garbage seems pejorative, even though it\nmay be accurate. Compare:\n\n[rhaas EDBAS]$ pg_dump -jdsgdsgd\npg_dump: error: invalid number of parallel jobs\n\nIn the message \"relation end block argument precedes start block\nargument\\n\", I think you could lose both instances of the word\n\"argument\" and probably the word \"relation\" as well. I actually don't\nknow why all of these messages about start and end block mention\n\"relation\". It's not like there is some other kind of\nnon-relation-related start block with which it could be confused.\n\nThe comment for run_command() explains some things about the cparams\nargument, but those things are false. In fact the argument is unused.\n\nUsual PostgreSQL practice when freeing memory in e.g.\nverify_heap_slot_handler is to set the pointers to NULL as well. The\nperformance cost of this is trivial, and it makes debugging a lot\neasier should somebody accidentally write code to access one of those\nthings after it's been freed.\n\nThe documentation says that -D \"does not exclude any database that was\nlisted explicitly as dbname on the command line, nor does it exclude\nthe database chosen in the absence of any dbname argument.\" The first\npart of this makes complete sense to me, but I'm not sure about the\nsecond part. If I type pg_amcheck --all -D 'r*', I think I'm expecting\nthat \"rhaas\" won't be checked. 
Likewise, if I say pg_amcheck -d\n'bob*', I think I only want to check the bob-related databases and not\nrhaas.\n\nI suggest documenting --endblock as \"Check table blocks up to and\nincluding the specified ending block number. An error will occur if a\nrelation being checked has fewer than this number of blocks.\" And\nsimilarly for --startblock: \"Check table blocks beginning with the\nspecified block number. An error will occur, etc.\" Perhaps even\nmention something like \"This option is probably only useful when\nchecking a single table.\" Also, the documentation here isn't clear\nthat this affects only table checking, not index checking.\n\nIt appears that pg_amcheck sometimes makes dummy connections to the\ndatabase that don't do anything, e.g. pg_amcheck -t 'q*' resulted in:\n\n2021-03-10 15:00:14.273 EST [95473] LOG: connection received: host=[local]\n2021-03-10 15:00:14.274 EST [95473] LOG: connection authorized:\nuser=rhaas database=rhaas application_name=pg_amcheck\n2021-03-10 15:00:14.286 EST [95473] LOG: statement: SELECT\npg_catalog.set_config('search_path', '', false);\n2021-03-10 15:00:14.290 EST [95464] DEBUG: forked new backend,\npid=95474 socket=11\n2021-03-10 15:00:14.291 EST [95464] DEBUG: server process (PID 95473)\nexited with exit code 0\n2021-03-10 15:00:14.291 EST [95474] LOG: connection received: host=[local]\n2021-03-10 15:00:14.293 EST [95474] LOG: connection authorized:\nuser=rhaas database=rhaas application_name=pg_amcheck\n2021-03-10 15:00:14.296 EST [95474] LOG: statement: SELECT\npg_catalog.set_config('search_path', '', false);\n<...more queries from PID 95474...>\n2021-03-10 15:00:14.321 EST [95464] DEBUG: server process (PID 95474)\nexited with exit code 0\n\nIt doesn't seem to make sense to connect to a database, set the search\npath, exit, and then immediately reconnect to the same database.\n\nThis is slightly inconsistent:\n\npg_amcheck: checking heap table \"rhaas\".\"public\".\"foo\"\nheap relation 
\"rhaas\".\"public\".\"foo\":\n ERROR: XX000: catalog is missing 144 attribute(s) for relid 16392\n LOCATION: RelationBuildTupleDesc, relcache.c:652\nquery was: SELECT blkno, offnum, attnum, msg FROM \"public\".verify_heapam(\nrelation := 16392, on_error_stop := false, check_toast := true, skip := 'none')\n\nIn line 1 it's a heap table, but in line 2 it's a heap relation.\n\nThat's all I've got.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Mar 2021 15:28:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 10, 2021, at 12:28 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Mar 10, 2021 at 11:10 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Once again, I think you are right and have removed the objectionable behavior, but....\n>> \n>> The --startblock and --endblock options make the most sense when the user is only checking one table, like\n>> \n>> pg_amcheck --startblock=17 --endblock=19 --table=my_schema.my_corrupt_table\n>> \n>> because the user likely has some knowledge about that table, perhaps from a prior run of pg_amcheck. The --startblock and --endblock arguments are a bit strange when used globally, as relations don't all have the same number of blocks, so\n>> \n>> pg_amcheck --startblock=17 --endblock=19 mydb\n>> \n>> will very likely emit lots of error messages for tables which don't have blocks in that range. That's not entirely pg_amcheck's fault, as it just did what the user asked, but it also doesn't seem super helpful. I'm not going to do anything about it in this release.\n> \n> +1 to all that. I tend toward the opinion that trying to make\n> --startblock and --endblock do anything useful in the context of\n> checking multiple relations is not really possible, and therefore we\n> just shouldn't put any effort into it. But if user feedback shows\n> otherwise, we can always do something about it later.\n> \n>> After running 'make installcheck', if I delete all entries from pg_class where relnamespace = 'pg_toast'::regclass, by running 'pg_amcheck regression', I get lines that look like this:\n>> \n>> heap relation \"regression\".\"public\".\"quad_poly_tbl\":\n>> ERROR: could not open relation with OID 17177\n> \n> In this here example, the first line ends in a colon.\n> \n>> relation \"regression\".\"public\".\"functional_dependencies\", block 28, offset 54, attribute 0\n>> attribute 0 with length 4294967295 ends at offset 50 beyond total tuple length 43\n> \n> But this here one does not. 
Seems like it should be consistent.\n\nGood point. It also seems inconsistent that in one it refers to a \"relation\" and in the other to a \"heap relation\", but they're both heap relations. Changed to use \"heap relation\" in both places, and to use colons in both places.\n\n> \n> The QUALIFIED_NAME_FIELDS macro doesn't seem to be used anywhere,\n> which is good, because macros with unbalanced parentheses are usually\n> not a good plan; and a macro that expands to a comma-separated list of\n> things is suspect too.\n\nYeah, that whole macro was supposed to be removed. Looks like I somehow only removed the end of it, plus some functions that were using it. Not sure how I fat fingered that in the editor, but I've removed the rest now.\n\n> \"invalid skip options\\n\" seems too plural.\n\nChanged to something less plural.\n\n> With regard to your use of strtol() for --{start,end}block, telling\n> the user that their input is garbage seems pejorative, even though it\n> may be accurate. Compare:\n> \n> [rhaas EDBAS]$ pg_dump -jdsgdsgd\n> pg_dump: error: invalid number of parallel jobs\n> \n> In the message \"relation end block argument precedes start block\n> argument\\n\", I think you could lose both instances of the word\n> \"argument\" and probably the word \"relation\" as well. I actually don't\n> know why all of these messages about start and end block mention\n> \"relation\". It's not like there is some other kind of\n> non-relation-related start block with which it could be confused.\n\nChanged.\n\n> The comment for run_command() explains some things about the cparams\n> argument, but those things are false. In fact the argument is unused.\n\nRemoved unused argument and associated comment.\n\n> Usual PostgreSQL practice when freeing memory in e.g.\n> verify_heap_slot_handler is to set the pointers to NULL as well. 
The\n> performance cost of this is trivial, and it makes debugging a lot\n> easier should somebody accidentally write code to access one of those\n> things after it's been freed.\n\nI had been doing that and removed it, anticipating a complaint about useless code. Ok, I put it back.\n\n> The documentation says that -D \"does not exclude any database that was\n> listed explicitly as dbname on the command line, nor does it exclude\n> the database chosen in the absence of any dbname argument.\" The first\n> part of this makes complete sense to me, but I'm not sure about the\n> second part. If I type pg_amcheck --all -D 'r*', I think I'm expecting\n> that \"rhaas\" won't be checked. Likewise, if I say pg_amcheck -d\n> 'bob*', I think I only want to check the bob-related databases and not\n> rhaas.\n\nI think it's a tricky definitional problem. I'll argue the other side for the moment:\n\nIf you say `pg_amcheck bob`, I think it is fair to assume that \"bob\" gets checked. If you say `pg_amcheck bob -d=\"b*\" -D=\"bo*\"`, it is fair to expect all databases starting with /b/ to be checked, except those starting with /bo/, except that since you *explicitly* asked for \"bob\", that \"bob\" gets checked. We both agree on this point, I think.\n\nIf you say `pg_amcheck --maintenance-db=bob -d=\"b*\" -D=\"bo*\"`, you don't expect \"bob\" to get checked, even though it was explicitly stated.\n\nIf you are named \"bob\", and run `pg_amcheck`, you expect it to get your name \"bob\" from the environment, and check database \"bob\". It's implicit rather than explicit, but that doesn't change what you expect to happen. It's just a short-hand for saying `pg_amcheck bob`.\n\nSaying that `pg_amcheck -d=\"b*\" -D=\"bo*\"` should not check \"bob\" implies that the database being retrieved from the environment is acting like a maintenance-db. But that's not how it is treated when you just say `pg_amcheck` with no arguments. 
I think treating it as a maintenance-db in some situations but not in others is strangely non-orthogonal.\n\nOn the other hand, I would expect some users to come back with precisely your complaint, so I don't know how best to solve this. \n\n> I suggest documenting --endblock as \"Check table blocks up to and\n> including the specified ending block number. An error will occur if a\n> relation being checked has fewer than this number of blocks.\" And\n> similarly for --startblock: \"Check table blocks beginning with the\n> specified block number. An error will occur, etc.\" Perhaps even\n> mention something like \"This option is probably only useful when\n> checking a single table.\" Also, the documentation here isn't clear\n> that this affects only table checking, not index checking.\n\nChanged.\n\n> It appears that pg_amcheck sometimes makes dummy connections to the\n> database that don't do anything, e.g. pg_amcheck -t 'q*' resulted in:\n> \n> 2021-03-10 15:00:14.273 EST [95473] LOG: connection received: host=[local]\n> 2021-03-10 15:00:14.274 EST [95473] LOG: connection authorized:\n> user=rhaas database=rhaas application_name=pg_amcheck\n> 2021-03-10 15:00:14.286 EST [95473] LOG: statement: SELECT\n> pg_catalog.set_config('search_path', '', false);\n> 2021-03-10 15:00:14.290 EST [95464] DEBUG: forked new backend,\n> pid=95474 socket=11\n> 2021-03-10 15:00:14.291 EST [95464] DEBUG: server process (PID 95473)\n> exited with exit code 0\n> 2021-03-10 15:00:14.291 EST [95474] LOG: connection received: host=[local]\n> 2021-03-10 15:00:14.293 EST [95474] LOG: connection authorized:\n> user=rhaas database=rhaas application_name=pg_amcheck\n> 2021-03-10 15:00:14.296 EST [95474] LOG: statement: SELECT\n> pg_catalog.set_config('search_path', '', false);\n> <...more queries from PID 95474...>\n> 2021-03-10 15:00:14.321 EST [95464] DEBUG: server process (PID 95474)\n> exited with exit code 0\n> \n> It doesn't seem to make sense to connect to a database, set the search\n> 
path, exit, and then immediately reconnect to the same database.\n\nI think I've cleaned that up now.\n\n> This is slightly inconsistent:\n> \n> pg_amcheck: checking heap table \"rhaas\".\"public\".\"foo\"\n> heap relation \"rhaas\".\"public\".\"foo\":\n> ERROR: XX000: catalog is missing 144 attribute(s) for relid 16392\n> LOCATION: RelationBuildTupleDesc, relcache.c:652\n> query was: SELECT blkno, offnum, attnum, msg FROM \"public\".verify_heapam(\n> relation := 16392, on_error_stop := false, check_toast := true, skip := 'none')\n> \n> In line 1 it's a heap table, but in line 2 it's a heap relation.\n\nChanged to use \"heap table\" consistently, and along those lines, to use \"btree index\" rather than \"btree relation\".\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 10 Mar 2021 20:02:00 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "(I'm still not a fan of adding more client-side tools whose sole task is \nto execute server-side functionality in a slightly filtered way, but it \nseems people are really interested in this, so ...)\n\nI want to register that, if we are going to add this, it ought to be in \nsrc/bin/. If we think it's a useful tool, it should be there with all \nthe other useful tools.\n\nI realize there is a dependency on a module in contrib, and it's \nprobably now not the time to re-debate reorganizing contrib. But if we \never get to that, this program should be the prime example of why the \ncurrent organization is problematic, and we should be prepared to make \nthe necessary moves then.\n\n\n\n",
"msg_date": "Thu, 11 Mar 2021 09:12:22 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On 11 March 2021, at 13:12, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> client-side tools whose sole task is to execute server-side functionality in a slightly filtered way\n\nBy the way, can we teach pg_amcheck to verify a database without creating a local PGDATA, using only a bare minimum of file system quota?\n\nWe can implement a way for pg_amcheck to ask for a specific file, which would be downloaded by a backup tool and streamed to pg_amcheck.\nE.g. pg_amcheck could have a restore_file_command = 'backup-tool bring-my-file %backup_id %file_name' and probably a list_files_command='backup-tool list-files %backup_id'. And pg_amcheck could then fetch only the bare minimum of what is needed.\n\nI see that this is a somewhat orthogonal idea, but from my POV an interesting one.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 11 Mar 2021 16:36:17 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 11, 2021, at 12:12 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> (I'm still not a fan of adding more client-side tools whose sole task is to execute server-side functionality in a slightly filtered way, but it seems people are really interested in this, so ...)\n> \n> I want to register, if we are going to add this, it ought to be in src/bin/. If we think it's a useful tool, it should be there with all the other useful tools.\n\nI considered putting it in src/bin/scripts where reindexdb and vacuumdb also live. It seems most similar to those two tools.\n\n> I realize there is a dependency on a module in contrib, and it's probably now not the time to re-debate reorganizing contrib. But if we ever get to that, this program should be the prime example why the current organization is problematic, and we should be prepared to make the necessary moves then.\n\nBefore settling on contrib/pg_amcheck as the location, I checked whether any tools under src/bin had dependencies on a contrib module, and couldn't find any current examples. (There seems to have been one in the past, though I forget which that was at the moment.)\n\nI have no argument with changing the location of this tool before it gets committed, but I wonder if we should do that now, or wait until some future time when contrib gets reorganized? I can't quite tell which you prefer from your comments above.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 11 Mar 2021 07:14:49 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 11, 2021, at 3:36 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 11 марта 2021 г., в 13:12, Peter Eisentraut <peter.eisentraut@enterprisedb.com> написал(а):\n>> \n>> client-side tools whose sole task is to execute server-side functionality in a slightly filtered way\n> \n> By the way, can we teach pg_amcheck to verify database without creating local PGDATA and using bare minimum of file system quota?\n\npg_amcheck does not need a local data directory to check a remote database server, though it does need to connect to that server. The local file system quota should not be a problem, as pg_amcheck does not download and save any data to disk. I am uncertain if this answers your question. If you are imagining pg_amcheck running on the same server as the database cluster, then of course running pg_amcheck puts a burden on the server to read all the relation files necessary, much as running queries over the same relations would do.\n\n> We can implement a way for a pg_amcheck to ask for some specific file, which will be downloaded by backup tool and streamed to pg_amcheck.\n> E.g. pg_amcheck could have a restore_file_command = 'backup-tool bring-my-file %backup_id %file_name' and probably list_files_command='backup-tool list-files %backup_id'. And pg_amcheck could then fetch bare minimum of what is needed.\n> \n> I see that this is somewhat orthogonal idea, but from my POV interesting one.\n\npg_amcheck is not designed to detect corruption directly, but rather to open one or more connections to the database and execute sql queries which employ the contrib/amcheck sql functions.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 11 Mar 2021 07:30:13 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 3:12 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> (I'm still not a fan of adding more client-side tools whose sole task is\n> to execute server-side functionality in a slightly filtered way, but it\n> seems people are really interested in this, so ...)\n>\n> I want to register, if we are going to add this, it ought to be in\n> src/bin/. If we think it's a useful tool, it should be there with all\n> the other useful tools.\n\nI think this provides a pretty massive gain in usability. If you\nwanted to check all of your tables and btree indexes without this, or\nworse yet some subset of them that satisfied certain criteria, it\nwould be a real nuisance. You don't want to run all of the check\ncommands in a single transaction, because that keeps snapshots open,\nand there's a good chance you do want to use parallelism. Even if you\nignore all that, the output you're going to get from running the\nqueries individually in psql is not going to be easy to sort through,\nwhereas the tool is going to distill that down to what you really need\nto know.\n\nPerhaps we should try to think of some way that some of these tools\ncould be unified, since it does seem a bit silly to have reindexdb,\nvacuumdb, and pg_amcheck all as separate commands basically doing the\nsame kind of thing but for different maintenance operations, but I\ndon't think getting rid of them entirely is the way - and I don't\nthink that unifying them is a v14 project.\n\nI also had the thought that maybe this should go in src/bin, because I\nthink this is going to be awfully handy for a lot of people. However,\nI don't think there's a rule that binaries can't go in contrib --\noid2name and vacuumlo are existing precedents. 
But I guess that's only\n2 out of quite a large number of binaries that we ship, so maybe it's\nbest not to add to it, especially for a tool which I at least suspect\nis going to get a lot more use than either of those.\n\nAnyone else want to vote for or against moving this to src/bin?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:46:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 11:02 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > The documentation says that -D \"does exclude any database that was\n> > listed explicitly as dbname on the command line, nor does it exclude\n> > the database chosen in the absence of any dbname argument.\" The first\n> > part of this makes complete sense to me, but I'm not sure about the\n> > second part. If I type pg_amcheck --all -D 'r*', I think I'm expecting\n> > that \"rhaas\" won't be checked. Likewise, if I say pg_amcheck -d\n> > 'bob*', I think I only want to check the bob-related databases and not\n> > rhaas.\n>\n> I think it's a tricky definitional problem. I'll argue the other side for the moment:\n>\n> If you say `pg_amcheck bob`, I think it is fair to assume that \"bob\" gets checked. If you say `pg_amcheck bob -d=\"b*\" -D=\"bo*\"`, it is fair to expect all databases starting with /b/ to be checked, except those starting with /bo/, except that since you *explicitly* asked for \"bob\", that \"bob\" gets checked. We both agree on this point, I think.\n\n+1.\n\n> If you say `pg_amcheck --maintenance-db=bob -d=\"b*\" -D=\"bo*\", you don't expect \"bob\" to get checked, even though it was explicitly stated.\n\nI expect that specifying --maintenance-db has zero effect on what gets\nchecked. The only thing that should do is tell me which database to\nuse to get the list of databases that I am going to check, just in\ncase the default is unsuitable and will fail.\n\n> If you are named \"bob\", and run `pg_amcheck`, you expect it to get your name \"bob\" from the environment, and check database \"bob\". It's implicit rather than explicit, but that doesn't change what you expect to happen. It's just a short-hand for saying `pg_amcheck bob`.\n\n+1.\n\n> Saying that `pg_amcheck -d=\"b*\" -D=\"bo*\" should not check \"bob\" implies that the database being retrieved from the environment is acting like a maintenance-db. 
But that's not how it is treated when you just say `pg_amcheck` with no arguments. I think treating it as a maintenance-db in some situations but not in others is strangely non-orthogonal.\n\nI don't think I agree with this. A maintenance DB in my mind doesn't\nmean \"a database we're not actually checking,\" but rather \"a database\nthat we're using to get a list of other databases.\"\n\nTBH, I guess I actually don't know why we ever treat a bare\ncommand-line argument as a maintenance DB. I probably wouldn't do\nthat. We should only need a maintenance DB if we need to query for a\nlist of database to check, and if the user has explicitly named the\ndatabase to check, then we do not need to do that... unless they've\nalso done something like -D or -d, but then the explicitly-specified\ndatabase name is playing a double role. It is both one of the\ndatabases we will check, and also the database we will use to figure\nout what other databases to check. I think that's why this seems\nnon-orthogonal.\n\nHere's my proposal:\n\n1. If there are options present which require querying for a list of\ndatabases (e.g. --all, -d, -D) then use connectMaintenanceDatabase()\nand go figure out what they mean. The cparams passed to that function\nare only affected by the use of --maintenance-db, not by any bare\ncommand line arguments. If there are no arguments present which\nrequire querying for a list of databases, then --maintenance-db has no\neffect.\n\n2. If there is a bare command line argument, add the named database to\nthe list of databases to be checked. This might be empty if no\nrelevant options were specified in step 1, or if those options matched\nnothing. It might be a noop if the named database was already selected\nby the options mentioned in step 1.\n\n3. 
If there were no options present which required querying for a list\nof databases, and if there is also no bare command line argument, then\ndefault to checking whatever database we connect to by default.\n\nWith this approach, --maintenance-db only ever affects how we get the\nlist of databases to check, and a bare command-line argument only ever\nspecifies a database to be checked. That seems cleaner.\n\nAn alternate possibility would be to say that there should only ever\nbe EITHER a bare command-line argument OR options that require\nquerying for a list of databases OR neither BUT NOT both. Then it's\nsimple:\n\n0. If you have both options which require querying for a list of\ndatabases and also a bare database name, error and die.\n1. As above.\n2. As above except the only possibility is now increasing the list of\ntarget databases from length 0 to length 1.\n3. As above.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Mar 2021 11:09:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> 11 марта 2021 г., в 20:30, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n> \n> \n> pg_amcheck does not need a local data directory to check a remote database server, though it does need to connect to that server.\nNo, I mean it it would be great if we did not need to materialise whole DB anywhere. Let's say I have a backup of 10Tb cluster in S3. And don't have that clusters hardware anymore. I want to spawn tiny VM with few GiBs of RAM and storage no larger than biggest index within DB + WAL from start to end. And stream-check all backup, mark it safe and sleep well. It would be perfect if we could do backup verification at cost of corruption monitoring (and not vice versa, which is trivial).\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 11 Mar 2021 22:10:38 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 11, 2021, at 9:10 AM, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> 11 марта 2021 г., в 20:30, Mark Dilger <mark.dilger@enterprisedb.com> написал(а):\n>> \n>> \n>> pg_amcheck does not need a local data directory to check a remote database server, though it does need to connect to that server.\n> No, I mean it it would be great if we did not need to materialise whole DB anywhere. Let's say I have a backup of 10Tb cluster in S3. And don't have that clusters hardware anymore. I want to spawn tiny VM with few GiBs of RAM and storage no larger than biggest index within DB + WAL from start to end. And stream-check all backup, mark it safe and sleep well. It would be perfect if we could do backup verification at cost of corruption monitoring (and not vice versa, which is trivial).\n\nThanks for clarifying. I agree that would be useful. I don't see any way to make that part of this project, but maybe after the v14 cycle you'll look over the code a propose a way forward for that?'\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 11 Mar 2021 09:42:22 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 11:02 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> [ new patches ]\n\nSeems like this is mostly ready to commit now, modulo exactly what to\ndo about the maintenance DB stuff, and whether to move it to src/bin.\nSince neither of those affects 0001, I went ahead and committed that\npart.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Mar 2021 14:34:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 11:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> An alternate possibility would be to say that there should only ever\n> be EITHER a bare command-line argument OR options that require\n> querying for a list of databases OR neither BUT NOT both. Then it's\n> simple:\n>\n> 0. If you have both options which require querying for a list of\n> databases and also a bare database name, error and die.\n> 1. As above.\n> 2. As above except the only possibility is now increasing the list of\n> target databases from length 0 to length 1.\n> 3. As above.\n\nHere's a proposed incremental patch, applying on top of your last\nversion, that describes the above behavior, plus makes a lot of other\nchanges to the documentation that seemed like good ideas to me. Your\nmileage may vary, but I think this version is substantially more\nconcise than what you have while basically containing the same\ninformation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 11 Mar 2021 16:59:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 11, 2021, at 1:59 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Mar 11, 2021 at 11:09 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> An alternate possibility would be to say that there should only ever\n>> be EITHER a bare command-line argument OR options that require\n>> querying for a list of databases OR neither BUT NOT both. Then it's\n>> simple:\n>> \n>> 0. If you have both options which require querying for a list of\n>> databases and also a bare database name, error and die.\n>> 1. As above.\n>> 2. As above except the only possibility is now increasing the list of\n>> target databases from length 0 to length 1.\n>> 3. As above.\n> \n> Here's a proposed incremental patch, applying on top of your last\n> version, that describes the above behavior, plus makes a lot of other\n> changes to the documentation that seemed like good ideas to me. Your\n> mileage may vary, but I think this version is substantially more\n> concise than what you have while basically containing the same\n> information.\n\nYour proposal is used in this next version of the patch, along with a resolution to the solution to the -D option handling, discussed before, and a change to make --schema and --exclude-schema options accept \"database.schema\" patterns as well as \"schema\" patterns. It previously only interpreted the parameter as a schema without treating embedded dots as separators, but that seems strangely inconsistent with the way all the other pattern options work, so I made it consistent. (I think the previous behavior was defensible, but harder to explain and perhaps less intuitive.)\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 11 Mar 2021 21:00:07 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 12:00 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Your proposal is used in this next version of the patch, along with a resolution to the solution to the -D option handling, discussed before, and a change to make --schema and --exclude-schema options accept \"database.schema\" patterns as well as \"schema\" patterns. It previously only interpreted the parameter as a schema without treating embedded dots as separators, but that seems strangely inconsistent with the way all the other pattern options work, so I made it consistent. (I think the previous behavior was defensible, but harder to explain and perhaps less intuitive.)\n\nWell, OK. In that case I guess we need to patch the docs a little\nmore. Here's a patch documentation that revised behavior, and also\ntidying up a few other things I noticed along the way.\n\nSince nobody is saying we *shouldn't* move this to src/bin, I think\nyou may as well go put it there per Peter's suggestion.\n\nThen I think it's time to get this committed and move on to the next thing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 12 Mar 2021 08:33:14 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 12, 2021, at 5:33 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Well, OK. In that case I guess we need to patch the docs a little\n> more. Here's a patch documentation that revised behavior, and also\n> tidying up a few other things I noticed along the way.\n> \n> Since nobody is saying we *shouldn't* move this to src/bin, I think\n> you may as well go put it there per Peter's suggestion.\n> \n> Then I think it's time to get this committed and move on to the next thing.\n\nIn this next patch, your documentation patch has been applied, and the whole project has been relocated from contrib/pg_amcheck to src/bin/pg_amcheck.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Mar 2021 08:41:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 11:41 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> In this next patch, your documentation patch has been applied, and the whole project has been relocated from contrib/pg_amcheck to src/bin/pg_amcheck.\n\nCommitted that way with some small adjustments. Let's see what the\nbuildfarm thinks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Mar 2021 13:10:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 10:10 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Committed that way with some small adjustments. Let's see what the\n> buildfarm thinks.\n\nThank you both, Mark and Robert. This is excellent work!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Mar 2021 10:32:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 10:32 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Thank you both, Mark and Robert. This is excellent work!\n\nFYI I see these compiler warnings just now:\n\npg_amcheck.c:1653:4: warning: ISO C90 forbids mixed declarations and\ncode [-Wdeclaration-after-statement]\n 1653 | DatabaseInfo *dat = (DatabaseInfo *)\npg_malloc0(sizeof(DatabaseInfo));\n | ^~~~~~~~~~~~\npg_amcheck.c: In function ‘compile_relation_list_one_db’:\npg_amcheck.c:2060:9: warning: variable ‘is_btree’ set but not used\n[-Wunused-but-set-variable]\n 2060 | bool is_btree = false;\n | ^~~~~~~~\n\nLooks like this 'is_btree' variable should be PG_USED_FOR_ASSERTS_ONLY.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Mar 2021 10:35:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On 2021.03.12. 19:10 Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> \n> On Fri, Mar 12, 2021 at 11:41 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > In this next patch, your documentation patch has been applied, and the whole project has been relocated from contrib/pg_amcheck to src/bin/pg_amcheck.\n> \n> Committed that way with some small adjustments. Let's see what the\n> buildfarm thinks.\n> \n\nHi,\n\nAn output-formatting error, I think:\n\nI ran pg_amcheck against a 1.5 GB table:\n\n-- pg_amcheck --progress --on-error-stop --heapallindexed -vt myjsonfile100k\n\npg_amcheck: including database: \"testdb\"\npg_amcheck: in database \"testdb\": using amcheck version \"1.3\" in schema \"public\"\n0/4 relations (0%) 0/187978 pages (0%) \npg_amcheck: checking heap table \"testdb\".\"public\".\"myjsonfile100k\"\npg_amcheck: checking btree index \"testdb\".\"public\".\"myjsonfile100k_pkey\"\n2/4 relations (50%) 187977/187978 pages (99%), (testdb )\npg_amcheck: checking btree index \"testdb\".\"pg_toast\".\"pg_toast_26110_index\"\n3/4 relations (75%) 187978/187978 pages (100%), (testdb )\npg_amcheck: checking heap table \"testdb\".\"pg_toast\".\"pg_toast_26110\"\n4/4 relations (100%) 187978/187978 pages (100%) \n\n\nI think there is a formatting glitch in lines like:\n\n2/4 relations (50%) 187977/187978 pages (99%), (testdb )\n\nI suppose that last part should show up trimmed as '(testdb)', right?\n\nThanks,\n\nErik Rijkers\n\n\n",
"msg_date": "Fri, 12 Mar 2021 20:05:03 +0100 (CET)",
"msg_from": "er@xs4all.nl",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 2:05 PM <er@xs4all.nl> wrote:\n> I think there is a formatting glitch in lines like:\n>\n> 2/4 relations (50%) 187977/187978 pages (99%), (testdb )\n>\n> I suppose that last part should show up trimmed as '(testdb)', right?\n\nActually I think this is intentional. The idea is that as the line is\nrewritten we don't want the close-paren to move around. It's the same\nthing pg_basebackup does with its progress reporting.\n\nNow that is not to say that some other behavior might not be better,\njust that Mark was copying something that already exists, probably\nbecause he knows that I'm finnicky about consistency....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Mar 2021 14:24:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 1:35 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Mar 12, 2021 at 10:32 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Thank you both, Mark and Robert. This is excellent work!\n\nThanks.\n\n> FYI I see these compiler warnings just now:\n>\n> pg_amcheck.c:1653:4: warning: ISO C90 forbids mixed declarations and\n> code [-Wdeclaration-after-statement]\n> 1653 | DatabaseInfo *dat = (DatabaseInfo *)\n> pg_malloc0(sizeof(DatabaseInfo));\n> | ^~~~~~~~~~~~\n> pg_amcheck.c: In function ‘compile_relation_list_one_db’:\n> pg_amcheck.c:2060:9: warning: variable ‘is_btree’ set but not used\n> [-Wunused-but-set-variable]\n> 2060 | bool is_btree = false;\n> | ^~~~~~~~\n>\n> Looks like this 'is_btree' variable should be PG_USED_FOR_ASSERTS_ONLY.\n\nI'll commit something shortly to address these.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Mar 2021 14:31:36 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 2:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'll commit something shortly to address these.\n\nThere are some interesting failures in the test cases on the\nbuildfarm. One of the tests ($offnum == 13) corrupts the TOAST pointer\nwith a garbage value, expecting to get the message \"final toast chunk\nnumber 0 differs from expected value 6\". But on florican and maybe\nother systems we instead get \"final toast chunk number 0 differs from\nexpected value 5\". That's because the value of TOAST_MAX_CHUNK_SIZE\ndepends on MAXIMUM_ALIGNOF. I think that on 4-byte alignment systems\nit works out to 2000 and on 8-byte alignment systems it works out to\n1996, and the value being stored is 10000 bytes, hence the problem.\nThe place where the calculation goes different seems to be in\nMaximumBytesPerTuple(), where it uses MAXALIGN_DOWN() on a value that,\naccording to my calculations, will be 2038 on all platforms, but the\noutput of MAXALIGN_DOWN() will be 2032 or 2036 depending on the\nplatform. I think the solution to this is just to change the message\nto match \\d+ chunks instead of exactly 6. We should do that right away\nto avoid having the buildfarm barf.\n\nBut, I also notice a couple of other things I think could be improved here:\n\n1. amcheck is really reporting the complete absence of any TOAST rows\nhere due to a corrupted va_valueid. It could pick a better phrasing of\nthat message than \"final toast chunk number 0 differs from expected\nvalue XXX\". I mean, there is no chunk 0. There are no chunks at all.\n\n2. Using SSSSSSSSS as the perl unpack code for the varlena header is\nnot ideal, because it's really 2 1-byte fields followed by 4 4-byte\nfields. So I think you should be using CCllLL, for two unsigned bytes\nand then two signed 4-byte quantities and then two unsigned 4-byte\nquantities. 
I think if you did that you'd be overwriting the\nva_valueid with the *same* garbage value on every platform, which\nwould be better than different ones. Perhaps when we improve the\nmessage as suggested in (1) this will become a live issue, since we\nmight choose to say something like \"no TOAST entries for value %u\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Mar 2021 16:43:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 1:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> There are some interesting failures in the test cases on the\n> buildfarm.\n\nI wonder if Andrew Dunstan (now CC'd) could configure his crake\nbuildfarm member to run pg_amcheck with the most expensive and\nthorough options on the master branch (plus all new major version\nbranches going forward).\n\nThat would give us some degree of amcheck test coverage in the back\nbranches right away. It might even detect cross-version\ninconsistencies. Or even pg_upgrade bugs.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 12 Mar 2021 14:08:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 12, 2021, at 1:43 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Mar 12, 2021 at 2:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> I'll commit something shortly to address these.\n> \n> There are some interesting failures in the test cases on the\n> buildfarm. One of the tests ($offnum == 13) corrupts the TOAST pointer\n> with a garbage value, expecting to get the message \"final toast chunk\n> number 0 differs from expected value 6\". But on florican and maybe\n> other systems we instead get \"final toast chunk number 0 differs from\n> expected value 5\". That's because the value of TOAST_MAX_CHUNK_SIZE\n> depends on MAXIMUM_ALIGNOF. I think that on 4-byte alignment systems\n> it works out to 2000 and on 8-byte alignment systems it works out to\n> 1996, and the value being stored is 10000 bytes, hence the problem.\n> The place where the calculation goes different seems to be in\n> MaximumBytesPerTuple(), where it uses MAXALIGN_DOWN() on a value that,\n> according to my calculations, will be 2038 on all platforms, but the\n> output of MAXALIGN_DOWN() will be 2032 or 2036 depending on the\n> platform. I think the solution to this is just to change the message\n> to match \\d+ chunks instead of exactly 6. We should do that right away\n> to avoid having the buildfarm barf.\n> \n> But, I also notice a couple of other things I think could be improved here:\n> \n> 1. amcheck is really reporting the complete absence of any TOAST rows\n> here due to a corrupted va_valueid. It could pick a better phrasing of\n> that message than \"final toast chunk number 0 differs from expected\n> value XXX\". I mean, there is no chunk 0. There are no chunks at all.\n> \n> 2. Using SSSSSSSSS as the perl unpack code for the varlena header is\n> not ideal, because it's really 2 1-byte fields followed by 4 4-byte\n> fields. 
So I think you should be using CCllLL, for two unsigned bytes\n> and then two signed 4-byte quantities and then two unsigned 4-byte\n> quantities. I think if you did that you'd be overwriting the\n> va_valueid with the *same* garbage value on every platform, which\n> would be better than different ones. Perhaps when we improve the\n> message as suggested in (1) this will become a live issue, since we\n> might choose to say something like \"no TOAST entries for value %u\".\n> \n> -- \n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\nThis does nothing to change the verbiage from contrib/amcheck, but it should address the problems discussed here in pg_amcheck's regression tests.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Mar 2021 14:24:18 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 5:24 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> This does nothing to change the verbiage from contrib/amcheck, but it should address the problems discussed here in pg_amcheck's regression tests.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Mar 2021 17:55:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 12, 2021, at 2:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Mar 12, 2021 at 5:24 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> This does nothing to change the verbiage from contrib/amcheck, but it should address the problems discussed here in pg_amcheck's regression tests.\n> \n> Committed.\n\nThanks.\n\nThere are two more, attached here. The first deals with error message text which differs between farm animals, and the second deals with an apparent problem with IPC::Run shell expanding an asterisk on some platforms but not others. That second one, if true, seems like a problem with scope beyond the pg_amcheck project, as TestLib::command_checks_all uses IPC::Run, and it would be desirable to have consistent behavior across platforms.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Mar 2021 15:24:32 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 12, 2021, at 11:24 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Mar 12, 2021 at 2:05 PM <er@xs4all.nl> wrote:\n>> I think there is a formatting glitch in lines like:\n>> \n>> 2/4 relations (50%) 187977/187978 pages (99%), (testdb )\n>> \n>> I suppose that last part should show up trimmed as '(testdb)', right?\n> \n> Actually I think this is intentional. The idea is that as the line is\n> rewritten we don't want the close-paren to move around. It's the same\n> thing pg_basebackup does with its progress reporting.\n> \n> Now that is not to say that some other behavior might not be better,\n> just that Mark was copying something that already exists, probably\n> because he knows that I'm finnicky about consistency....\n\nI think Erik's test case only checked one database, which might be why it looked odd to him. But consider:\n\n pg_amcheck -d foo -d bar -d myreallylongdatabasename -d myshortername -d baz --progress\n\nThe tool will respect your database ordering, and check foo, then bar, etc. If you have --jobs greater than one, it will start checking some relations in bar before finishing all relations in foo, but with a fudge factor, pg_amcheck can report that the processing has moved on to database bar, etc.\n\nYou wouldn't want the parens to jump around when the long database names get processed. So it keeps the parens in the same location, space pads shorter database names, and truncates overlong database names.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 15:32:03 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 12, 2021, at 3:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> and the second deals with an apparent problem with IPC::Run shell expanding an asterisk on some platforms but not others. That second one, if true, seems like a problem with scope beyond the pg_amcheck project, as TestLib::command_checks_all uses IPC::Run, and it would be desirable to have consistent behavior across platforms.\n\nThe problem with IPC::Run appears to be real, though I might just need to wait longer for the farm animals to prove me wrong about that. But there is a similar symptom caused by an unrelated problem, one entirely my fault and spotted by Robert. Here is a patch:\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Mar 2021 17:04:09 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 8:04 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The problem with IPC::Run appears to be real, though I might just need to wait longer for the farm animals to prove me wrong about that. But there is a similar symptom caused by an unrelated problem, one entirely my fault and spotted by Robert. Here is a patch:\n\nOK, I committed this too, along with the one I hadn't committed yet\nfrom your previous email. Gah, tests are so annoying. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Mar 2021 20:16:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 12, 2021, at 5:16 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Gah, tests are so annoying. :-)\n\nThere is another problem of non-portable option ordering in the tests.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Mar 2021 20:41:50 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> There is another problem of non-portable option ordering in the tests.\n\nDon't almost all of the following tests have the same issue?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Mar 2021 23:53:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "... btw, prairiedog (which has a rather old Perl) has a\ndifferent complaint:\n\nInvalid type 'q' in unpack at t/004_verify_heapam.pl line 104.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Mar 2021 23:56:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "I wrote:\n> Don't almost all of the following tests have the same issue?\n\nAh, nevermind, I was looking at an older version of 003_check.pl.\nI concur that 24189277f missed only one here.\n\nPushed your fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 00:08:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 12, 2021, at 9:08 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> Don't almost all of the following tests have the same issue?\n> \n> Ah, nevermind, I was looking at an older version of 003_check.pl.\n> I concur that 24189277f missed only one here.\n> \n> Pushed your fix.\n> \n> \t\t\tregards, tom lane\n\nThanks! Was just responding to your other email, but now I don't have to send it.\n\nSorry for painting so many farm animals red this evening.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 21:09:21 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "I wrote:\n> ... btw, prairiedog (which has a rather old Perl) has a\n> different complaint:\n> Invalid type 'q' in unpack at t/004_verify_heapam.pl line 104.\n\nHmm ... \"man perlfunc\" on that system quoth\n\n q A signed quad (64-bit) value.\n Q An unsigned quad value.\n (Quads are available only if your system supports 64-bit\n integer values _and_ if Perl has been compiled to support those.\n Causes a fatal error otherwise.)\n\nIt does not seem unreasonable for us to rely on Perl having that\nin 2021, so I'll see about upgrading this perl installation.\n\n(I suppose gaur will need it too, sigh.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 00:27:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
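The quad-format availability Tom quotes from perlfunc can be probed directly from the shell, which is a quick way to tell whether a given perl binary would hit the prairiedog failure:

```shell
# Probe whether this perl supports 64-bit pack/unpack templates.
# A perl built without 64-bit integer support dies with
# "Invalid type 'q' in unpack" — the same error prairiedog reported.
if perl -e 'unpack "q", "\0" x 8' 2>/dev/null; then
    echo "perl supports 64-bit quads"
else
    echo "perl lacks 64-bit quads; unpack as two 32-bit words instead"
fi
```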
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Sorry for painting so many farm animals red this evening.\n\nNot to worry. We go through this sort of fire drill regularly\nwhen somebody pushes a batch of brand new test code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 00:30:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "I wrote:\n>> ... btw, prairiedog (which has a rather old Perl) has a\n>> different complaint:\n>> Invalid type 'q' in unpack at t/004_verify_heapam.pl line 104.\n\n> Hmm ... \"man perlfunc\" on that system quoth\n> q A signed quad (64-bit) value.\n> Q An unsigned quad value.\n> (Quads are available only if your system supports 64-bit\n> integer values _and_ if Perl has been compiled to support those.\n> Causes a fatal error otherwise.)\n> It does not seem unreasonable for us to rely on Perl having that\n> in 2021, so I'll see about upgrading this perl installation.\n\nHm, wait a minute: hoverfly is showing the same failure, even though\nit claims to be running a 64-bit Perl. Now I'm confused.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 01:07:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 01:07:15AM -0500, Tom Lane wrote:\n> I wrote:\n> >> ... btw, prairiedog (which has a rather old Perl) has a\n> >> different complaint:\n> >> Invalid type 'q' in unpack at t/004_verify_heapam.pl line 104.\n> \n> > Hmm ... \"man perlfunc\" on that system quoth\n> > q A signed quad (64-bit) value.\n> > Q An unsigned quad value.\n> > (Quads are available only if your system supports 64-bit\n> > integer values _and_ if Perl has been compiled to support those.\n> > Causes a fatal error otherwise.)\n> > It does not seem unreasonable for us to rely on Perl having that\n> > in 2021, so I'll see about upgrading this perl installation.\n> \n> Hm, wait a minute: hoverfly is showing the same failure, even though\n> it claims to be running a 64-bit Perl. Now I'm confused.\n\nOn that machine:\n\n[nm@power8-aix 7:0 2021-03-13T06:09:08 ~ 0]$ /usr/bin/perl64 -e 'unpack \"q\", \"\"'\n[nm@power8-aix 7:0 2021-03-13T06:09:10 ~ 0]$ /usr/bin/perl -e 'unpack \"q\", \"\"'\nInvalid type 'q' in unpack at -e line 1.\n\nhoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\nPerl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n\"prove64\".) This test should unpack the field as two 32-bit values, not a\n64-bit value, to avoid requiring more from the Perl installation.\n\n\n",
"msg_date": "Fri, 12 Mar 2021 22:16:55 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 12, 2021, at 10:16 PM, Noah Misch <noah@leadboat.com> wrote:\n> \n> On Sat, Mar 13, 2021 at 01:07:15AM -0500, Tom Lane wrote:\n>> I wrote:\n>>>> ... btw, prairiedog (which has a rather old Perl) has a\n>>>> different complaint:\n>>>> Invalid type 'q' in unpack at t/004_verify_heapam.pl line 104.\n>> \n>>> Hmm ... \"man perlfunc\" on that system quoth\n>>> q A signed quad (64-bit) value.\n>>> Q An unsigned quad value.\n>>> (Quads are available only if your system supports 64-bit\n>>> integer values _and_ if Perl has been compiled to support those.\n>>> Causes a fatal error otherwise.)\n>>> It does not seem unreasonable for us to rely on Perl having that\n>>> in 2021, so I'll see about upgrading this perl installation.\n>> \n>> Hm, wait a minute: hoverfly is showing the same failure, even though\n>> it claims to be running a 64-bit Perl. Now I'm confused.\n> \n> On that machine:\n> \n> [nm@power8-aix 7:0 2021-03-13T06:09:08 ~ 0]$ /usr/bin/perl64 -e 'unpack \"q\", \"\"'\n> [nm@power8-aix 7:0 2021-03-13T06:09:10 ~ 0]$ /usr/bin/perl -e 'unpack \"q\", \"\"'\n> Invalid type 'q' in unpack at -e line 1.\n> \n> hoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\n> Perl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n> \"prove64\".) This test should unpack the field as two 32-bit values, not a\n> 64-bit value, to avoid requiring more from the Perl installation.\n\nI will post a modified test in a bit that avoids using Q/q.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 22:18:32 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 12, 2021, at 10:16 PM, Noah Misch <noah@leadboat.com> wrote:\n>> hoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\n>> Perl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n>> \"prove64\".)\n\nOh, that's annoying.\n\n>> This test should unpack the field as two 32-bit values, not a\n>> 64-bit value, to avoid requiring more from the Perl installation.\n\n> I will post a modified test in a bit that avoids using Q/q.\n\nCoping with both endiannesses might be painful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 01:22:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 12, 2021, at 10:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 12, 2021, at 10:16 PM, Noah Misch <noah@leadboat.com> wrote:\n>>> hoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\n>>> Perl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n>>> \"prove64\".)\n> \n> Oh, that's annoying.\n> \n>>> This test should unpack the field as two 32-bit values, not a\n>>> 64-bit value, to avoid requiring more from the Perl installation.\n> \n>> I will post a modified test in a bit that avoids using Q/q.\n> \n> Coping with both endiannesses might be painful.\n\nNot too bad if the bigint value is zero, as both the low and high 32bits will be zero, regardless of endianness. The question is whether that gives up too much in terms of what the test is trying to do. I'm not sure that it does, but if you'd rather solve this by upgrading perl, that's ok by me. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 22:28:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-13 01:22:54 -0500, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> > On Mar 12, 2021, at 10:16 PM, Noah Misch <noah@leadboat.com> wrote:\n> >> hoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\n> >> Perl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n> >> \"prove64\".)\n> \n> Oh, that's annoying.\n\nI suspect we could solve that by changing our /usr/bin/prove\ninvocation to instead be PERL /usr/bin/prove? That might be a good thing\nindependent of this issue, because it's not unreasonable for a user to\nexpect that we'd actually use the perl installation they configured...\n\nAlthough I do not know how prove determines the perl installation it's\ngoing to use for the test scripts...\n\n- Andres\n\n\n",
"msg_date": "Fri, 12 Mar 2021 22:30:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 12, 2021, at 10:28 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Mar 12, 2021, at 10:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Mar 12, 2021, at 10:16 PM, Noah Misch <noah@leadboat.com> wrote:\n>>>> hoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\n>>>> Perl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n>>>> \"prove64\".)\n>> \n>> Oh, that's annoying.\n>> \n>>>> This test should unpack the field as two 32-bit values, not a\n>>>> 64-bit value, to avoid requiring more from the Perl installation.\n>> \n>>> I will post a modified test in a bit that avoids using Q/q.\n>> \n>> Coping with both endiannesses might be painful.\n> \n> Not too bad if the bigint value is zero, as both the low and high 32bits will be zero, regardless of endianness. The question is whether that gives up too much in terms of what the test is trying to do. I'm not sure that it does, but if you'd rather solve this by upgrading perl, that's ok by me. \n\n\nI'm not advocating that this be the solution, but if we don't fix up the perl end of it, then this test change might be used instead.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 12 Mar 2021 22:33:29 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 12, 2021, at 10:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Coping with both endiannesses might be painful.\n\n> Not too bad if the bigint value is zero, as both the low and high 32bits will be zero, regardless of endianness. The question is whether that gives up too much in terms of what the test is trying to do. I'm not sure that it does, but if you'd rather solve this by upgrading perl, that's ok by me. \n\nI don't mind updating the perl installations on prairiedog and gaur,\nbut Noah might have some difficulty with his AIX flotilla, as I believe\nhe's not sysadmin there.\n\nYou might think about using some symmetric-but-not-zero value,\n0x01010101 or the like.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 01:36:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 12, 2021, at 10:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 12, 2021, at 10:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Coping with both endiannesses might be painful.\n> \n>> Not too bad if the bigint value is zero, as both the low and high 32bits will be zero, regardless of endianness. The question is whether that gives up too much in terms of what the test is trying to do. I'm not sure that it does, but if you'd rather solve this by upgrading perl, that's ok by me. \n> \n> I don't mind updating the perl installations on prairiedog and gaur,\n> but Noah might have some difficulty with his AIX flotilla, as I believe\n> he's not sysadmin there.\n> \n> You might think about using some symmetric-but-not-zero value,\n> 0x01010101 or the like.\n\nI thought about that, but I'm not sure that it proves much more than just using zero. The test doesn't really do much of interest with this value, and it doesn't seem worth complicating the test. The idea originally was that perl's \"q\" pack code would make reading/writing a number such as 12345678 easy, but since it's not easy, this is easy.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 12 Mar 2021 22:55:11 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 12, 2021, at 10:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You might think about using some symmetric-but-not-zero value,\n>> 0x01010101 or the like.\n\n> I thought about that, but I'm not sure that it proves much more than just using zero.\n\nPerhaps not. I haven't really looked at any of this code, so I'll defer\nto Robert's judgment about whether this represents an interesting testing\nissue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 02:00:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 01:36:11AM -0500, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> > On Mar 12, 2021, at 10:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Coping with both endiannesses might be painful.\n> \n> > Not too bad if the bigint value is zero, as both the low and high 32bits will be zero, regardless of endianness. The question is whether that gives up too much in terms of what the test is trying to do. I'm not sure that it does, but if you'd rather solve this by upgrading perl, that's ok by me. \n> \n> I don't mind updating the perl installations on prairiedog and gaur,\n> but Noah might have some difficulty with his AIX flotilla, as I believe\n> he's not sysadmin there.\n\nThe AIX animals have Perl v5.28.1. hoverfly, in particular, got a big update\npackage less than a month ago. Hence, I doubt it's a question of applying\nroutine updates. The puzzle would be to either (a) compile a 32-bit Perl that\nhandles unpack('q') or (b) try a PostgreSQL configuration like \"./configure\n... PROVE='perl64 /usr/bin/prove --'\" to run the TAP suites under perl64.\n(For hoverfly, it's enough to run \"prove\" under $PERL. mandrill, however,\nneeds a 32-bit $PERL for plperl, regardless of what it needs for \"prove\".)\nFuture AIX packagers would face doing the same.\n\nWith v5-0001-pg_amcheck-continuing-to-fix-portability-problems.patch being so\nself-contained, something like it is the way to go.\n\n\n",
"msg_date": "Fri, 12 Mar 2021 23:19:13 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Mar 13, 2021 at 01:36:11AM -0500, Tom Lane wrote:\n>> I don't mind updating the perl installations on prairiedog and gaur,\n>> but Noah might have some difficulty with his AIX flotilla, as I believe\n>> he's not sysadmin there.\n\n> The AIX animals have Perl v5.28.1. hoverfly, in particular, got a big update\n> package less than a month ago. Hence, I doubt it's a question of applying\n> routine updates. The puzzle would be to either (a) compile a 32-bit Perl that\n> handles unpack('q') or (b) try a PostgreSQL configuration like \"./configure\n> ... PROVE='perl64 /usr/bin/prove --'\" to run the TAP suites under perl64.\n> (For hoverfly, it's enough to run \"prove\" under $PERL. mandrill, however,\n> needs a 32-bit $PERL for plperl, regardless of what it needs for \"prove\".)\n> Future AIX packagers would face doing the same.\n\nYeah. prairiedog and gaur are both frankenstein systems: some of the\nsoftware components are years newer than the underlying OS. So I don't\nmind changing them further; in the end they're both in the buildfarm\nfor reasons of hardware diversity, not because they represent platforms\nanyone would run production PG on. On the other hand, those AIX animals\nrepresent systems that are still considered production grade (no?), so\nwe have to be willing to adapt to them not vice versa.\n\n> With v5-0001-pg_amcheck-continuing-to-fix-portability-problems.patch being so\n> self-contained, something like it is the way to go.\n\nI'm only concerned about whether the all-zero value causes any\nsignificant degradation in test quality. Probably Peter G. would\nhave the most informed opinion about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 02:36:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 1:55 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I thought about that, but I'm not sure that it proves much more than just using zero. The test doesn't really do much of interest with this value, and it doesn't seem worth complicating the test. The idea originally was that perl's \"q\" pack code would make reading/writing a number such as 12345678 easy, but since it's not easy, this is easy.\n\nI think it would be good to use a non-zero value here. We're doing a\nlot of poking into raw bytes here, and if something goes wrong, a zero\nvalue is more likely to look like something normal than whatever else.\nI suggest picking a value where all 8 bytes are the same, but not\nzero, and ideally chosen so that they don't look much like any of the\nsurrounding bytes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Mar 2021 08:49:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think it would be good to use a non-zero value here. We're doing a\n> lot of poking into raw bytes here, and if something goes wrong, a zero\n> value is more likely to look like something normal than whatever else.\n> I suggest picking a value where all 8 bytes are the same, but not\n> zero, and ideally chosen so that they don't look much like any of the\n> surrounding bytes.\n\nActually, it seems like we can let pack/unpack deal with byte-swapping\nwithin 32-bit words; what we lose by not using 'q' format is just the\nability to correctly swap the two 32-bit words. Hence, any value in\nwhich upper and lower halves are the same should work, say\n0x1234567812345678.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Mar 2021 09:50:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 13, 2021, at 6:50 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I think it would be good to use a non-zero value here. We're doing a\n>> lot of poking into raw bytes here, and if something goes wrong, a zero\n>> value is more likely to look like something normal than whatever else.\n>> I suggest picking a value where all 8 bytes are the same, but not\n>> zero, and ideally chosen so that they don't look much like any of the\n>> surrounding bytes.\n> \n> Actually, it seems like we can let pack/unpack deal with byte-swapping\n> within 32-bit words; what we lose by not using 'q' format is just the\n> ability to correctly swap the two 32-bit words. Hence, any value in\n> which upper and lower halves are the same should work, say\n> 0x1234567812345678.\n> \n> \t\t\tregards, tom lane\n\nThe heap tuple in question looks as follows, with ???????? in the spot we're debating:\n\nTuple Layout (values in hex):\n\nt_xmin: 223\nt_xmax: 0\nt_field3: 0\nbi_hi: 0\nbi_lo: 0\nip_posid: 1\nt_infomask2: 3\nt_infomask: b06\nt_hoff: 18\nt_bits: 0\na_1: ????????\na_2: ????????\nb_header: 11 # little-endian, will be 88 on big endian\nb_body1: 61\nb_body2: 62\nb_body3: 63\nb_body4: 64\nb_body5: 65\nb_body6: 66\nb_body7: 67\nc_va_header: 1\nc_va_vartag: 12\nc_va_rawsize: 2714\nc_va_extsize: 2710\n\nvalueid and toastrelid are not shown, as they won't be stable. Relying on t_xmin to be stable makes the test brittle, but fortunately that is separated from a_1 and a_2 far enough that we should not need to worry about it.\n\nWe want to use values that don't look like any of the others. The complete set of nibbles in the values above is [012345678B], leaving the set [9ACDEF] unused. The attached patch uses the value DEADF9F9 as it seems a little easier to remember than other permutations of those nibbles.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 13 Mar 2021 07:20:00 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 10:20 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> We want to use values that don't look like any of the others. The complete set of nibbles in the values above is [012345678B], leaving the set [9ACDEF] unused. The attached patch uses the value DEADF9F9 as it seems a little easier to remember than other permutations of those nibbles.\n\nOK, committed. The nibbles seem less relevant than the bytes as a\nwhole, but that's fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 13 Mar 2021 10:59:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 05:04:09PM -0800, Mark Dilger wrote:\n> > On Mar 12, 2021, at 3:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > \n> > and the second deals with an apparent problem with IPC::Run shell expanding an asterisk on some platforms but not others. That second one, if true, seems like a problem with scope beyond the pg_amcheck project, as TestLib::command_checks_all uses IPC::Run, and it would be desirable to have consistent behavior across platforms.\n\n> > One of pg_amcheck's regression tests was passing an asterisk through\n> > TestLib's command_checks_all() command, which gets through to\n> > pg_amcheck without difficulty on most platforms, but appears to get\n> > shell expanded on Windows (jacana) and AIX (hoverfly).\n\nFor posterity, I can't reproduce this on hoverfly. The suite fails the same\nway at f371a4c and f371a4c^. More-recently (commit 58f5749), the suite passes\neven after reverting f371a4c. Self-contained IPC::Run usage also does not\ncorroborate the theory:\n\n[nm@power8-aix 8:0 2021-03-13T18:32:23 clean 0]$ perl -MIPC::Run -e 'IPC::Run::run \"printf\", \"%s\\n\", \"*\"'\n*\n[nm@power8-aix 8:0 2021-03-13T18:32:29 clean 0]$ perl -MIPC::Run -e 'IPC::Run::run \"sh\", \"-c\", \"printf %s\\\\\\\\n *\"'\nCOPYRIGHT\nGNUmakefile.in\nHISTORY\nMakefile\nREADME\nREADME.git\naclocal.m4\nconfig\nconfigure\nconfigure.ac\ncontrib\ndoc\nsrc\n\n> there is a similar symptom caused by an unrelated problem\n\n> Subject: [PATCH v3] Avoid use of non-portable option ordering in\n> command_checks_all().\n\nOn a glibc system, \"env POSIXLY_CORRECT=1 make check ...\" tests this.\n\n\n",
"msg_date": "Sat, 13 Mar 2021 10:46:12 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 13, 2021, at 10:46 AM, Noah Misch <noah@leadboat.com> wrote:\n> \n> On Fri, Mar 12, 2021 at 05:04:09PM -0800, Mark Dilger wrote:\n>>> On Mar 12, 2021, at 3:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> \n>>> and the second deals with an apparent problem with IPC::Run shell expanding an asterisk on some platforms but not others. That second one, if true, seems like a problem with scope beyond the pg_amcheck project, as TestLib::command_checks_all uses IPC::Run, and it would be desirable to have consistent behavior across platforms.\n> \n>>> One of pg_amcheck's regression tests was passing an asterisk through\n>>> TestLib's command_checks_all() command, which gets through to\n>>> pg_amcheck without difficulty on most platforms, but appears to get\n>>> shell expanded on Windows (jacana) and AIX (hoverfly).\n> \n> For posterity, I can't reproduce this on hoverfly. The suite fails the same\n> way at f371a4c and f371a4c^. More-recently (commit 58f5749), the suite passes\n> even after reverting f371a4c. Self-contained IPC::Run usage also does not\n> corroborate the theory:\n> \n> [nm@power8-aix 8:0 2021-03-13T18:32:23 clean 0]$ perl -MIPC::Run -e 'IPC::Run::run \"printf\", \"%s\\n\", \"*\"'\n> *\n> [nm@power8-aix 8:0 2021-03-13T18:32:29 clean 0]$ perl -MIPC::Run -e 'IPC::Run::run \"sh\", \"-c\", \"printf %s\\\\\\\\n *\"'\n> COPYRIGHT\n> GNUmakefile.in\n> HISTORY\n> Makefile\n> README\n> README.git\n> aclocal.m4\n> config\n> configure\n> configure.ac\n> contrib\n> doc\n> src\n> \n>> there is a similar symptom caused by an unrelated problem\n> \n>> Subject: [PATCH v3] Avoid use of non-portable option ordering in\n>> command_checks_all().\n> \n> On a glibc system, \"env POSIXLY_CORRECT=1 make check ...\" tests this.\n\nThanks for investigating!\n\nThe reason I suspected that passing the '*' through IPC::Run had to do with the error that pg_amcheck gave. 
It complained that too many arguments were being passed to it, and that the first such argument was \"pg_amcheck.c\". There's no reason pg_amcheck should know its source file name, nor that the regression test should know that, which suggested that the asterisk was being shell expanded within the src/bin/pg_amcheck/ directory and the file listing was being passed into pg_amcheck as arguments.\n\nThat theory may have been wrong, but it was the only theory I had at the time. I don't have access to any of the machines where that happened, so it is hard for me to investigate further.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 13 Mar 2021 10:51:27 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 10:51:27AM -0800, Mark Dilger wrote:\n> > On Mar 13, 2021, at 10:46 AM, Noah Misch <noah@leadboat.com> wrote:\n> > On Fri, Mar 12, 2021 at 05:04:09PM -0800, Mark Dilger wrote:\n> >>> On Mar 12, 2021, at 3:24 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >>> and the second deals with an apparent problem with IPC::Run shell expanding an asterisk on some platforms but not others. That second one, if true, seems like a problem with scope beyond the pg_amcheck project, as TestLib::command_checks_all uses IPC::Run, and it would be desirable to have consistent behavior across platforms.\n> > \n> >>> One of pg_amcheck's regression tests was passing an asterisk through\n> >>> TestLib's command_checks_all() command, which gets through to\n> >>> pg_amcheck without difficulty on most platforms, but appears to get\n> >>> shell expanded on Windows (jacana) and AIX (hoverfly).\n> > \n> > For posterity, I can't reproduce this on hoverfly. The suite fails the same\n> > way at f371a4c and f371a4c^. More-recently (commit 58f5749), the suite passes\n> > even after reverting f371a4c. Self-contained IPC::Run usage also does not\n> > corroborate the theory:\n> > \n> > [nm@power8-aix 8:0 2021-03-13T18:32:23 clean 0]$ perl -MIPC::Run -e 'IPC::Run::run \"printf\", \"%s\\n\", \"*\"'\n> > *\n> > [nm@power8-aix 8:0 2021-03-13T18:32:29 clean 0]$ perl -MIPC::Run -e 'IPC::Run::run \"sh\", \"-c\", \"printf %s\\\\\\\\n *\"'\n> > COPYRIGHT\n> > GNUmakefile.in\n> > HISTORY\n> > Makefile\n> > README\n> > README.git\n> > aclocal.m4\n> > config\n> > configure\n> > configure.ac\n> > contrib\n> > doc\n> > src\n\n> The reason I suspected that passing the '*' through IPC::Run had to do with the error that pg_amcheck gave. It complained that too many arguments where being passed to it, and that the first such argument was \"pg_amcheck.c\". 
There's no reason pg_amcheck should know its source file name, nor that the regression test should know that, which suggested that the asterisk was being shell expanded within the src/bin/pg_amcheck/ directory and the file listing was being passed into pg_amcheck as arguments.\n\nI agree. I can reproduce the problem on Windows. Commit f371a4c fixed\nhttp://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2021-03-12%2020%3A12%3A44\nand I see logs of that kind of failure only on fairywren and jacana.\n\n\n",
"msg_date": "Sat, 13 Mar 2021 11:18:41 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Looks like we're not quite out of the woods, as hornet and tern are\nstill unhappy:\n\n# Failed test 'pg_amcheck excluding all corrupt schemas status (got 2 vs expected 0)'\n# at t/003_check.pl line 498.\n\n# Failed test 'pg_amcheck excluding all corrupt schemas stdout /(?^:^$)/'\n# at t/003_check.pl line 498.\n# 'heap table \"db1\".\"pg_catalog\".\"pg_statistic\", block 2, offset 1, attribute 27:\n# final toast chunk number 0 differs from expected value 1\n# heap table \"db1\".\"pg_catalog\".\"pg_statistic\", block 2, offset 1, attribute 27:\n# toasted value for attribute 27 missing from toast table\n# '\n# doesn't match '(?^:^$)'\n# Looks like you failed 2 tests of 60.\n[12:18:06] t/003_check.pl ........... \nDubious, test returned 2 (wstat 512, 0x200)\nFailed 2/60 subtests \n\nThese animals have somewhat weird alignment properties: MAXALIGN is 8\nbut ALIGNOF_DOUBLE is only 4. I speculate that that is affecting their\nchoices about whether an out-of-line TOAST value is needed, breaking\nthis test case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Mar 2021 13:04:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 10:04 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Looks like we're not quite out of the woods, as hornet and tern are\n> still unhappy:\n> \n> # Failed test 'pg_amcheck excluding all corrupt schemas status (got 2 vs expected 0)'\n> # at t/003_check.pl line 498.\n> \n> # Failed test 'pg_amcheck excluding all corrupt schemas stdout /(?^:^$)/'\n> # at t/003_check.pl line 498.\n> # 'heap table \"db1\".\"pg_catalog\".\"pg_statistic\", block 2, offset 1, attribute 27:\n> # final toast chunk number 0 differs from expected value 1\n> # heap table \"db1\".\"pg_catalog\".\"pg_statistic\", block 2, offset 1, attribute 27:\n> # toasted value for attribute 27 missing from toast table\n> # '\n> # doesn't match '(?^:^$)'\n> # Looks like you failed 2 tests of 60.\n> [12:18:06] t/003_check.pl ........... \n> Dubious, test returned 2 (wstat 512, 0x200)\n> Failed 2/60 subtests \n> \n> These animals have somewhat weird alignment properties: MAXALIGN is 8\n> but ALIGNOF_DOUBLE is only 4. I speculate that that is affecting their\n> choices about whether an out-of-line TOAST value is needed, breaking\n> this test case.\n\nThe pg_amcheck test case is not corrupting any pg_catalog tables, but contrib/amcheck/verify_heapam is complaining about a corruption in pg_catalog.pg_statistic.\n\nThe logic in verify_heapam only looks for a value in the toast table if the tuple it gets from the main table (in this case, from pg_statistic) has an attribute that claims to be toasted. The error message we're seeing that you quoted above simply means that no entry exists in the toast table. The bit about \"final toast chunk number 0 differs from expected value 1\" is super unhelpful, as what it is really saying is that there were no chunks found. I should submit a patch to not print that message in cases where the attribute is missing from the toast table.\n\nIs it possible that pg_statistic really is corrupt here, and that this is not a bug in pg_amcheck? 
It's not like we've been checking for corruption in the build farm up till now. I notice that this test, as well as test 005_opclass_damage.pl, neglects to turn off autovacuum for the test node. So maybe the corruptions are getting propagated during autovacuum? This is just a guess, but I will submit a patch that turns off autovacuum for the test node shortly.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 11:11:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 15, 2021, at 11:11 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I will submit a patch that turns off autovacuum for the test node shortly.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 15 Mar 2021 11:38:35 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 15, 2021, at 10:04 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> These animals have somewhat weird alignment properties: MAXALIGN is 8\n>> but ALIGNOF_DOUBLE is only 4. I speculate that that is affecting their\n>> choices about whether an out-of-line TOAST value is needed, breaking\n>> this test case.\n\n> The logic in verify_heapam only looks for a value in the toast table if\n> the tuple it gets from the main table (in this case, from pg_statistic)\n> has an attribute that claims to be toasted. The error message we're\n> seeing that you quoted above simply means that no entry exists in the\n> toast table.\n\nYeah, that could be phrased better. Do we have a strong enough lock on\nthe table under examination to be sure that autovacuum couldn't remove\na dead toast entry before we reach it? But this would only be an\nissue if we are trying to check validity of toasted fields within\nknown-dead tuples, which I would argue we shouldn't, since lock or\nno lock there's no guarantee the toast entry is still there.\n\nNot sure that I believe the theory that this is from bad luck of\nconcurrent autovacuum timing, though. The fact that we're seeing\nthis on just those two animals suggests strongly to me that it's\narchitecture-correlated, instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Mar 2021 14:57:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Do we have a strong enough lock on\n> the table under examination to be sure that autovacuum couldn't remove\n> a dead toast entry before we reach it?\n\nThe main table and the toast table are only locked with AccessShareLock. Each page in the main table is locked with BUFFER_LOCK_SHARE. Toast is not checked unless the tuple passes visibility checks verifying the tuple is not dead.\n\n> But this would only be an\n> issue if we are trying to check validity of toasted fields within\n> known-dead tuples, which I would argue we shouldn't, since lock or\n> no lock there's no guarantee the toast entry is still there.\n\nIt does not intentionally check toasted fields within dead tuples. If that is happening, it's a bug, possibly in the visibility function. But I'm not seeing a specific reason to assume that is the issue. If we still see the complaint on tern or hornet after committing the patch to turn off autovacuum, we will be able to rule out the theory that the toast was removed by autovacuum.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 15 Mar 2021 12:20:01 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 15, 2021, at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Yeah, that could be phrased better.\n\nAttaching the 0001 patch submitted earlier, plus 0002 which fixes the confusing corruption message.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 15 Mar 2021 12:35:52 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 15, 2021, at 11:57 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Not sure that I believe the theory that this is from bad luck of\n> concurrent autovacuum timing, though. The fact that we're seeing\n> this on just those two animals suggests strongly to me that it's\n> architecture-correlated, instead.\n\nI find it a little hard to see how amcheck is tripping over a toasted value just in this one table, pg_statistic, and not in any of the others. The error message says the problem is in attribute 27, which I believe means it is stavalues2. The comment in the header for this catalog is intriguing:\n\n /*\n * Values in these arrays are values of the column's data type, or of some\n * related type such as an array element type. We presently have to cheat\n * quite a bit to allow polymorphic arrays of this kind, but perhaps\n * someday it'll be a less bogus facility.\n */\n anyarray stavalues1;\n anyarray stavalues2;\n anyarray stavalues3;\n anyarray stavalues4;\n anyarray stavalues5;\n\nThis is hard to duplicate in a test, because you can't normally create tables with pseudo-type columns. However, if amcheck is walking the tuple and does not correctly update the offset with the length of attribute 26, it may try to read attribute 27 at the wrong offset, unsurprisingly leading to garbage, perhaps a garbage toast pointer. The attached patch v7-0004 adds a check to verify_heapam to see if the va_toastrelid matches the expected toast table oid for the table we're reading. That check almost certainly should have been included in the initial version of verify_heapam, so even if it does nothing to help us in this issue, it's good that it be committed, I think.\n\nIt is unfortunate that the failing test only runs pg_amcheck after creating numerous corruptions, as we can't know if pg_amcheck would have complained about pg_statistic before the corruptions were created in other tables, or if it only does so after. 
The attached patch v7-0003 adds a call to pg_amcheck after all tables are created and populated, but before any corruptions are caused. This should help narrow down what is happening, and doesn't hurt to leave in place long-term.\n\nI don't immediately see anything wrong with how pg_statistic uses a pseudo-type, but it leads me to want to poke a bit more at pg_statistic on hornet and tern, though I don't have any regression tests specifically for doing so.\n\nTests v7-0001 and v7-0002 are just repeats of the tests posted previously.\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 15 Mar 2021 19:10:37 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 02:57:20PM -0400, Tom Lane wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> > On Mar 15, 2021, at 10:04 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> These animals have somewhat weird alignment properties: MAXALIGN is 8\n> >> but ALIGNOF_DOUBLE is only 4. I speculate that that is affecting their\n> >> choices about whether an out-of-line TOAST value is needed, breaking\n> >> this test case.\n\nThat machine also has awful performance for filesystem metadata operations,\nlike open(O_CREAT). Its CPU and read()/write() performance are normal.\n\n> > The logic in verify_heapam only looks for a value in the toast table if\n> > the tuple it gets from the main table (in this case, from pg_statistic)\n> > has an attribute that claims to be toasted. The error message we're\n> > seeing that you quoted above simply means that no entry exists in the\n> > toast table.\n> \n> Yeah, that could be phrased better. Do we have a strong enough lock on\n> the table under examination to be sure that autovacuum couldn't remove\n> a dead toast entry before we reach it? But this would only be an\n> issue if we are trying to check validity of toasted fields within\n> known-dead tuples, which I would argue we shouldn't, since lock or\n> no lock there's no guarantee the toast entry is still there.\n> \n> Not sure that I believe the theory that this is from bad luck of\n> concurrent autovacuum timing, though.\n\nWith autovacuum_naptime=1s, on hornet, the failure reproduced twice in twelve\nruns. With v6-0001-Turning-off-autovacuum-during-corruption-tests.patch\napplied, 196 runs all succeeded.\n\n> The fact that we're seeing\n> this on just those two animals suggests strongly to me that it's\n> architecture-correlated, instead.\n\nThat is possible.\n\n\n",
"msg_date": "Mon, 15 Mar 2021 23:09:13 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 15, 2021, at 11:09 PM, Noah Misch <noah@leadboat.com> wrote:\n> \n>> Not sure that I believe the theory that this is from bad luck of\n>> concurrent autovacuum timing, though.\n> \n> With autovacuum_naptime=1s, on hornet, the failure reproduced twice in twelve\n> runs. With v6-0001-Turning-off-autovacuum-during-corruption-tests.patch\n> applied, 196 runs all succeeded.\n> \n>> The fact that we're seeing\n>> this on just those two animals suggests strongly to me that it's\n>> architecture-correlated, instead.\n> \n> That is possible.\n\nI think autovacuum simply triggers the bug, and is not the cause of the bug. If I turn autovacuum off and instead do an ANALYZE in each test database rather than performing the corruptions, I get reports about problems in pg_statistic. This is on my Mac laptop. This rules out the theory that autovacuum is propagating corruptions into pg_statistic, and also the theory that it is architecture dependent.\n\nI'll investigate further.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 16 Mar 2021 08:45:28 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> I think autovacuum simply triggers the bug, and is not the cause of the bug. If I turn autovacuum off and instead do an ANALYZE in each test database rather than performing the corruptions, I get reports about problems in pg_statistic. This is on my mac laptop. This rules out the theory that autovacuum is propogating corruptions into pg_statistic, and also the theory that it is architecture dependent.\n\nI wonder whether amcheck is confused by the declaration of those columns\nas \"anyarray\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Mar 2021 12:07:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 16, 2021, at 9:07 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> I think autovacuum simply triggers the bug, and is not the cause of the bug. If I turn autovacuum off and instead do an ANALYZE in each test database rather than performing the corruptions, I get reports about problems in pg_statistic. This is on my mac laptop. This rules out the theory that autovacuum is propogating corruptions into pg_statistic, and also the theory that it is architecture dependent.\n> \n> I wonder whether amcheck is confused by the declaration of those columns\n> as \"anyarray\".\n\nIt uses attlen and attalign for the attribute, so that idea does make sense. It gets that via TupleDescAttr(RelationGetDescr(rel), attnum).\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 16 Mar 2021 09:30:00 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 16, 2021, at 9:30 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Mar 16, 2021, at 9:07 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> I think autovacuum simply triggers the bug, and is not the cause of the bug. If I turn autovacuum off and instead do an ANALYZE in each test database rather than performing the corruptions, I get reports about problems in pg_statistic. This is on my mac laptop. This rules out the theory that autovacuum is propogating corruptions into pg_statistic, and also the theory that it is architecture dependent.\n>> \n>> I wonder whether amcheck is confused by the declaration of those columns\n>> as \"anyarray\".\n> \n> It uses attlen and attalign for the attribute, so that idea does make sense. It gets that via TupleDescAttr(RelationGetDescr(rel), attnum).\n\nYeah, that looks related:\n\nregression=# select attname, attlen, attnum, attalign from pg_attribute where attrelid = 2619 and attname like 'stavalue%';\n  attname   | attlen | attnum | attalign \n------------+--------+--------+----------\n stavalues1 |     -1 |     27 | d\n stavalues2 |     -1 |     28 | d\n stavalues3 |     -1 |     29 | d\n stavalues4 |     -1 |     30 | d\n stavalues5 |     -1 |     31 | d\n(5 rows)\n\nIt shows them all as having attalign = 'd', but for some array types the alignment will be 'i', not 'd'. So it's lying a bit about the contents. But I'm now confused about why this caused problems on the two hosts where integer and double have the same alignment. It seems like that would be the one place where the bug would not happen, not the one place where it does.\n\nI'm attaching a test that reliably reproduces this for me:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 16 Mar 2021 09:51:04 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 12:51 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Yeah, that looks related:\n>\n> regression=# select attname, attlen, attnum, attalign from pg_attribute where attrelid = 2619 and attname like 'stavalue%';\n> attname | attlen | attnum | attalign\n> ------------+--------+--------+----------\n> stavalues1 | -1 | 27 | d\n> stavalues2 | -1 | 28 | d\n> stavalues3 | -1 | 29 | d\n> stavalues4 | -1 | 30 | d\n> stavalues5 | -1 | 31 | d\n> (5 rows)\n>\n> It shows them all has having attalign = 'd', but for some array types the alignment will be 'i', not 'd'. So it's lying a bit about the contents. But I'm now confused why this caused problems on the two hosts where integer and double have the same alignment? It seems like that would be the one place where the bug would not happen, not the one place where it does.\n\nWait, so the value in the attalign column isn't the alignment we're\nactually using? I can understand how we might generate tuples like\nthat, if we pass the actual type to construct_array(), but shouldn't\nwe then get garbage when we deform the tuple? I mean,\nheap_deform_tuple() is going to get the alignment from the tuple\ndescriptor, which is a table property. It doesn't (and can't) know\nwhat type of value is stored inside any of these fields for real,\nright?\n\nIn other words, isn't this actually corruption, caused by a bug in our\ncode, and how have we not noticed it before now?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Mar 2021 12:56:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 16, 2021 at 12:51 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> It shows them all has having attalign = 'd', but for some array types the alignment will be 'i', not 'd'. So it's lying a bit about the contents. But I'm now confused why this caused problems on the two hosts where integer and double have the same alignment? It seems like that would be the one place where the bug would not happen, not the one place where it does.\n\n> Wait, so the value in the attalign column isn't the alignment we're\n> actually using? I can understand how we might generate tuples like\n> that, if we pass the actual type to construct_array(), but shouldn't\n> we then get garbage when we deform the tuple?\n\nNo. What should be happening there is that some arrays in the column\nget larger alignment than they actually need, but that shouldn't cause\na problem (and has not done so, AFAIK, in the decades that it's been\nlike this). As you say, deforming the tuple is going to rely on the\ntable's tupdesc for alignment; it can't know what is in the data.\n\nI'm not entirely sure what's going on, but I think coming at this\nwith the mindset that \"amcheck has detected some corruption\" is\njust going to lead you astray. Almost certainly, it's \"amcheck\nis incorrectly claiming corruption\". That might be from mis-decoding\na TOAST-referencing datum. (Too bad the message doesn't report the\nTOAST OID it probed for, so we can see if that's sane or not.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Mar 2021 13:48:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "... btw, I now see that tern and hornet are passing this test\nat least as much as they're failing, proving that there's some\ntiming or random chance involved. That doesn't completely\neliminate the idea that there may be an architecture component\nto the issue, but it sure reduces its credibility. I now\nbelieve the theory that the triggering condition is an auto-analyze\nhappening at the right time, and populating pg_statistic with\nsome data that other runs don't see.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:01:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 16, 2021, at 10:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Tue, Mar 16, 2021 at 12:51 PM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> It shows them all has having attalign = 'd', but for some array types the alignment will be 'i', not 'd'. So it's lying a bit about the contents. But I'm now confused why this caused problems on the two hosts where integer and double have the same alignment? It seems like that would be the one place where the bug would not happen, not the one place where it does.\n> \n>> Wait, so the value in the attalign column isn't the alignment we're\n>> actually using? I can understand how we might generate tuples like\n>> that, if we pass the actual type to construct_array(), but shouldn't\n>> we then get garbage when we deform the tuple?\n> \n> No. What should be happening there is that some arrays in the column\n> get larger alignment than they actually need, but that shouldn't cause\n> a problem (and has not done so, AFAIK, in the decades that it's been\n> like this). As you say, deforming the tuple is going to rely on the\n> table's tupdesc for alignment; it can't know what is in the data.\n> \n> I'm not entirely sure what's going on, but I think coming at this\n> with the mindset that \"amcheck has detected some corruption\" is\n> just going to lead you astray. Almost certainly, it's \"amcheck\n> is incorrectly claiming corruption\". That might be from mis-decoding\n> a TOAST-referencing datum. 
(Too bad the message doesn't report the\n> TOAST OID it probed for, so we can see if that's sane or not.)\n\nI've added that and now get the toast pointer's va_valueid in the message:\n\nmark.dilger@laptop280-ma-us amcheck % pg_amcheck -t \"pg_catalog.pg_statistic\" postgres\nheap table \"postgres\".\"pg_catalog\".\"pg_statistic\", block 4, offset 2, attribute 29:\n    toasted value id 13227 for attribute 29 missing from toast table\nheap table \"postgres\".\"pg_catalog\".\"pg_statistic\", block 4, offset 5, attribute 27:\n    toasted value id 13228 for attribute 27 missing from toast table\n\ndiff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c\nindex 5ccae46375..a0be71bb7f 100644\n--- a/contrib/amcheck/verify_heapam.c\n+++ b/contrib/amcheck/verify_heapam.c\n@@ -1111,8 +1111,8 @@ check_tuple_attribute(HeapCheckContext *ctx)\n     }\n     if (!found_toasttup)\n         report_corruption(ctx,\n-                          psprintf(\"toasted value for attribute %u missing from toast table\",\n-                                   ctx->attnum));\n+                          psprintf(\"toasted value id %u for attribute %u missing from toast table\",\n+                                   toast_pointer.va_valueid, ctx->attnum));\n     else if (ctx->chunkno != (ctx->endchunk + 1))\n         report_corruption(ctx,\n                           psprintf(\"final toast chunk number %u differs from expected value %u\",\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 16 Mar 2021 11:07:16 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> On Mar 16, 2021, at 10:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (Too bad the message doesn't report the\n>> TOAST OID it probed for, so we can see if that's sane or not.)\n\n> I've added that and now get the toast pointer's va_valueid in the message:\n\n> heap table \"postgres\".\"pg_catalog\".\"pg_statistic\", block 4, offset 2, attribute 29:\n> toasted value id 13227 for attribute 29 missing from toast table\n> heap table \"postgres\".\"pg_catalog\".\"pg_statistic\", block 4, offset 5, attribute 27:\n> toasted value id 13228 for attribute 27 missing from toast table\n\nThat's awfully interesting, because OIDs less than 16384 should\nonly be generated during initdb. So what we seem to be looking at\nhere is pg_statistic entries that were made during the ANALYZE\ndone by initdb (cf. vacuum_db()), and then were deleted during\na later auto-analyze (in the buildfarm) or deliberate analyze\n(in your reproducer). But if they're deleted, why is amcheck\nlooking for them?\n\nI'm circling back around to the idea that amcheck is trying to\nvalidate TOAST references that are already dead, and it's getting\nburnt because something-or-other has already removed the toast\nrows, though not the referencing datums. That's legal behavior\nonce the rows are marked dead. Upthread it was claimed that\namcheck isn't doing that, but this looks like a smoking gun to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:22:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 1:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No. What should be happening there is that some arrays in the column\n> get larger alignment than they actually need, but that shouldn't cause\n> a problem (and has not done so, AFAIK, in the decades that it's been\n> like this). As you say, deforming the tuple is going to rely on the\n> table's tupdesc for alignment; it can't know what is in the data.\n\nOK, I don't understand this. attalign is 'd', which is already the\nmaximum possible, and even if it weren't, individual rows can't decide\nto use a larger OR smaller alignment than expected without breaking\nstuff.\n\nIn what context is it OK to just add extra alignment padding?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:24:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 16, 2021, at 10:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I'm not entirely sure what's going on, but I think coming at this\n> with the mindset that \"amcheck has detected some corruption\" is\n> just going to lead you astray. Almost certainly, it's \"amcheck\n> is incorrectly claiming corruption\". That might be from mis-decoding\n> a TOAST-referencing datum. (Too bad the message doesn't report the\n> TOAST OID it probed for, so we can see if that's sane or not.)\n\nCopyStatistics seems to just copy Form_pg_statistic without regard for who owns the toast. Is this safe? Looking at RemoveStatistics, I'm not sure that it is. Anybody more familiar with this code care to give an opinion?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 16 Mar 2021 11:24:33 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I'm circling back around to the idea that amcheck is trying to\n> validate TOAST references that are already dead, and it's getting\n> burnt because something-or-other has already removed the toast\n> rows, though not the referencing datums. That's legal behavior\n> once the rows are marked dead. Upthread it was claimed that\n> amcheck isn't doing that, but this looks like a smoking gun to me.\n\nI think this theory has some legs. From check_tuple_header_and_visibilty():\n\n        else if (!(infomask & HEAP_XMAX_COMMITTED))\n            return true;        /* HEAPTUPLE_DELETE_IN_PROGRESS or\n                                 * HEAPTUPLE_LIVE */\n        else\n            return false;       /* HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD */\n    }\n    return true;                /* not dead */\n}\n\nThat first case looks wrong to me. Don't we need to call\nget_xid_status() here, Mark? As coded, it seems that if the xmin is ok\nand the xmax is marked committed, we consider the tuple checkable. The\ncomment says it must be HEAPTUPLE_DELETE_IN_PROGRESS or\nHEAPTUPLE_LIVE, but it seems to me that the actual answer is either\nHEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD depending on whether the\nxmax is all-visible. And in the second case the comment says it's\neither HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD, but I think in that\ncase it's either HEAPTUPLE_DELETE_IN_PROGRESS or\nHEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD, depending on the XID\nstatus.\n\nAnother thought here is that it's probably not wicked smart to be\nrelying on the hint bits to match the actual status of the tuple in\ncases of corruption. Maybe we should be warning about tuples that\nhave xmin or xmax flagged as committed or invalid when in fact clog\ndisagrees. That's not a particularly uncommon case, and it's hard to\ncheck.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:40:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Mar 16, 2021 at 1:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> No. What should be happening there is that some arrays in the column\n>> get larger alignment than they actually need, but that shouldn't cause\n>> a problem (and has not done so, AFAIK, in the decades that it's been\n>> like this). As you say, deforming the tuple is going to rely on the\n>> table's tupdesc for alignment; it can't know what is in the data.\n\n> OK, I don't understand this. attalign is 'd', which is already the\n> maximum possible, and even if it weren't, individual rows can't decide\n> to use a larger OR smaller alignment than expected without breaking\n> stuff.\n\n> In what context is it OK to just add extra alignment padding?\n\nIt's *not* extra, according to pg_statistic's tuple descriptor.\nBoth forming and deforming of pg_statistic tuples should honor\nthat and locate stavaluesX values on d-aligned boundaries.\n\nIt could be that a particular entry is of an array type that\nonly requires i-alignment. But that doesn't break anything,\nit just means we inserted more padding than an omniscient\nimplementation would do.\n\n(I suppose in some parallel universe there could be a machine\nwhere i-alignment is stricter than d-alignment, and then we'd\nhave trouble.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:45:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> CopyStatistics seems to just copy Form_pg_statistic without regard for\n> who owns the toast. Is this safe?\n\nNo less so than a ton of other places that insert values that might've\ncome from on-disk storage. heap_toast_insert_or_update() is responsible\nfor dealing with the problem. These days it looks like it's actually\ntoast_tuple_init() that takes care of dereferencing previously-toasted\nvalues during an INSERT.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Mar 2021 14:52:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 2:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > In what context is it OK to just add extra alignment padding?\n>\n> It's *not* extra, according to pg_statistic's tuple descriptor.\n> Both forming and deforming of pg_statistic tuples should honor\n> that and locate stavaluesX values on d-aligned boundaries.\n>\n> It could be that a particular entry is of an array type that\n> only requires i-alignment. But that doesn't break anything,\n> it just means we inserted more padding than an omniscient\n> implementation would do.\n\nOK, yeah, I just misunderstood what you were saying.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Mar 2021 15:09:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\nOn 3/13/21 1:30 AM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-03-13 01:22:54 -0500, Tom Lane wrote:\n>> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>>> On Mar 12, 2021, at 10:16 PM, Noah Misch <noah@leadboat.com> wrote:\n>>>> hoverfly does configure with PERL=perl64. /usr/bin/prove is from the 32-bit\n>>>> Perl, so I suspect the TAP suites get 32-bit Perl that way. (There's no\n>>>> \"prove64\".)\n>> Oh, that's annoying.\n> I suspect we could solve that by invoking changing our /usr/bin/prove\n> invocation to instead be PERL /usr/bin/prove? That might be a good thing\n> independent of this issue, because it's not unreasonable for a user to\n> expect that we'd actually use the perl installation they configured...\n>\n> Although I do not know how prove determines the perl installation it's\n> going to use for the test scripts...\n>\n\n\nThere's a very good chance this would break msys builds, which are\nconfigured to build against a pure native (i.e. non-msys) perl sucj as\nAS or Strawberry, but need to run msys perl for TAP tests, so it gets\nthe paths right.\n\n(Don't get me started on the madness that can result from managing this.\nI've lost weeks of my life to it ... If you add cygwin into the mix and\nyou're trying to coordinate builds among three buildfarm animals it's a\nmajor pain.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 16 Mar 2021 15:21:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 16, 2021, at 11:40 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Mar 16, 2021 at 2:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm circling back around to the idea that amcheck is trying to\n>> validate TOAST references that are already dead, and it's getting\n>> burnt because something-or-other has already removed the toast\n>> rows, though not the referencing datums. That's legal behavior\n>> once the rows are marked dead. Upthread it was claimed that\n>> amcheck isn't doing that, but this looks like a smoking gun to me.\n> \n> I think this theory has some legs. From check_tuple_header_and_visibilty():\n> \n> else if (!(infomask & HEAP_XMAX_COMMITTED))\n> return true; /*\n> HEAPTUPLE_DELETE_IN_PROGRESS or\n> *\n> HEAPTUPLE_LIVE */\n> else\n> return false; /*\n> HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD */\n> }\n> return true; /* not dead */\n> }\n> \n> That first case looks wrong to me. Don't we need to call\n> get_xid_status() here, Mark? As coded, it seems that if the xmin is ok\n> and the xmax is marked committed, we consider the tuple checkable. The\n> comment says it must be HEAPTUPLE_DELETE_IN_PROGRESS or\n> HEAPTUPLE_LIVE, but it seems to me that if the actual answer is either\n> HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD depending on whether the\n> xmax is all-visible. And in the second case the comment says it's\n> either HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD, but I think in that\n> case it's either HEAPTUPLE_DELETE_IN_PROGRESS or\n> HEAPTUPLE_RECENTLY_DEAD or HEAPTUPLE_DEAD, depending on the XID\n> status.\n> \n> Another thought here is that it's probably not wicked smart to be\n> relying on the hint bits to match the actual status of the tuple in\n> cases of corruption. Maybe we should be warning about tuples that are\n> have xmin or xmax flagged as committed or invalid when in fact clog\n> disagrees. 
That's not a particularly uncommon case, and it's hard to\n> check.\n\nThis code was not committed as part of the recent pg_amcheck work, but longer ago, and I'm having trouble reconstructing exactly why it was written that way.\n\nChanging check_tuple_header_and_visibilty() fixes the regression test and also manual tests against the \"regression\" database that I've been using. I'd like to ponder the changes a while longer before I post, but the fact that these changes fix the tests seems to add credibility to this theory.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 16 Mar 2021 12:31:42 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 10:10 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> It is unfortunate that the failing test only runs pg_amcheck after creating numerous corruptions, as we can't know if pg_amcheck would have complained about pg_statistic before the corruptions were created in other tables, or if it only does so after. The attached patch v7-0003 adds a call to pg_amcheck after all tables are created and populated, but before any corruptions are caused. This should help narrow down what is happening, and doesn't hurt to leave in place long-term.\n>\n> I don't immediately see anything wrong with how pg_statistic uses a pseudo-type, but it leads me to want to poke a bit more at pg_statistic on hornet and tern, though I don't have any regression tests specifically for doing so.\n>\n> Tests v7-0001 and v7-0002 are just repeats of the tests posted previously.\n\nSince we now know that shutting autovacuum off makes the problem go\naway, I don't see a reason to commit 0001. We should fix pg_amcheck\ninstead, if, as presently seems to be the case, that's where the\nproblem is.\n\nI just committed 0002.\n\nI think 0003 is probably a good idea, but I haven't committed it yet.\n\nAs for 0004, it seems to me that we might want to do a little more\nrewording of these messages and perhaps we should try to do it all at\nonce. Like, for example, your other change to print out the toast\nvalue ID seems like a good idea, and could apply to any new messages\nas well as some existing ones. Maybe there are also more fields in the\nTOAST pointer for which we could add checks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Mar 2021 15:52:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 16, 2021, at 12:52 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Mar 15, 2021 at 10:10 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> It is unfortunate that the failing test only runs pg_amcheck after creating numerous corruptions, as we can't know if pg_amcheck would have complained about pg_statistic before the corruptions were created in other tables, or if it only does so after. The attached patch v7-0003 adds a call to pg_amcheck after all tables are created and populated, but before any corruptions are caused. This should help narrow down what is happening, and doesn't hurt to leave in place long-term.\n>> \n>> I don't immediately see anything wrong with how pg_statistic uses a pseudo-type, but it leads me to want to poke a bit more at pg_statistic on hornet and tern, though I don't have any regression tests specifically for doing so.\n>> \n>> Tests v7-0001 and v7-0002 are just repeats of the tests posted previously.\n> \n> Since we now know that shutting autovacuum off makes the problem go\n> away, I don't see a reason to commit 0001. We should fix pg_amcheck\n> instead, if, as presently seems to be the case, that's where the\n> problem is.\n\nIf you get unlucky, autovacuum will process one of the tables that the test intentionally corrupted, with bad consequences, ultimately causing build farm intermittent test failures. We could wait to see if this ever happens before fixing it, if you prefer. I'm not sure what that buys us, though.\n\nThe right approach, I think, is to extend the contrib/amcheck tests to include regressing this particular case to see if it fails. 
I've written that test and verified that it fails against the old code and passes against the new.\n\n> I just committed 0002.\n\nThanks!\n\n> I think 0003 is probably a good idea, but I haven't committed it yet.\n\nIt won't do anything for us in this particular case, but it might make debugging failures quicker in the future.\n\n> As for 0004, it seems to me that we might want to do a little more\n> rewording of these messages and perhaps we should try to do it all at\n> once. Like, for example, your other change to print out the toast\n> value ID seems like a good idea, and could apply to any new messages\n> as well as some existing ones. Maybe there are also more fields in the\n> TOAST pointer for which we could add checks.\n\nOf the toast pointer fields:\n\n int32 va_rawsize; /* Original data size (includes header) */\n int32 va_extsize; /* External saved size (doesn't) */\n Oid va_valueid; /* Unique ID of value within TOAST table */\n Oid va_toastrelid; /* RelID of TOAST table containing it */\n\nall seem worth getting as part of any toast error message, even if these fields themselves are not corrupt. It just makes it easier to understand the context of the error you're looking at. At first I tried putting these into each message, but it is very wordy to say things like \"toast pointer with rawsize %u and extsize %u pointing at relation with oid %u\" and such. It made more sense to just add these four fields to the verify_heapam tuple format. That saves putting them in the message text itself, and has the benefit that you could filter the rows coming from verify_heapam() for ones where valueid is or is not null, for example. This changes the external interface of verify_heapam, but I didn't bother with an amcheck--1.3--1.4.sql because amcheck--1.2--1.3.sql was added as part of the v14 development work and has not yet been released. 
My assumption is that I can just change it, rather than making a new upgrade file.\n\nThese patches fix the visibility rules and add extra toast checking. Some of the previous patch material is not included, since it is not clear to me if you wanted to commit any of it.\n\n\n\n\n\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 17 Mar 2021 21:00:39 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 16, 2021, at 12:52 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> Since we now know that shutting autovacuum off makes the problem go\n>> away, I don't see a reason to commit 0001. We should fix pg_amcheck\n>> instead, if, as presently seems to be the case, that's where the\n>> problem is.\n\n> If you get unlucky, autovacuum will process one of the tables that the test intentionally corrupted, with bad consequences, ultimately causing build farm intermittent test failures.\n\nUm, yeah, the test had better shut off autovacuum on any table that\nit intentionally corrupts.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Mar 2021 00:12:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 12:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> >> On Mar 16, 2021, at 12:52 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >> Since we now know that shutting autovacuum off makes the problem go\n> >> away, I don't see a reason to commit 0001. We should fix pg_amcheck\n> >> instead, if, as presently seems to be the case, that's where the\n> >> problem is.\n>\n> > If you get unlucky, autovacuum will process one of the tables that the test intentionally corrupted, with bad consequences, ultimately causing build farm intermittent test failures.\n>\n> Um, yeah, the test had better shut off autovacuum on any table that\n> it intentionally corrupts.\n\nRight, good point. But... does that really apply to\n005_opclass_damage.pl? I feel like with the kind of physical damage\nwe're doing in 003_check.pl, it makes a lot of sense to stop vacuum\nfrom crashing headlong into that table. But, 005 is doing \"logical\"\ndamage rather than \"physical\" damage, and I don't see why autovacuum\nshould misbehave in that kind of case. In fact, the fact that\nautovacuum can handle such cases is one of the selling points for the\nwhole design of vacuum, as opposed to, for example, retail index\nlookups.\n\nPending resolution of that question, I've committed the change to\ndisable autovacuum in 003, and also Mark's changes to have it also run\npg_amcheck BEFORE corrupting anything, so the post-corruption tests\nfail - say by finding the wrong kind of corruption - we can see\nwhether it was also failing before the corruption was even introduced.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Mar 2021 15:05:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 23, 2021, at 12:05 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> 005 is doing \"logical\"\n> damage rather than \"physical\" damage, and I don't see why autovacuum\n> should misbehave in that kind of case. In fact, the fact that\n> autovacuum can handle such cases is one of the selling points for the\n> whole design of vacuum, as opposed to, for example, retail index\n> lookups.\n\nThat is a good point. Checking that autovacuum behaves sensibly despite sort order breakage sounds reasonable, but test 005 doesn't do that reliably, because it does nothing to make sure that autovacuum runs against the affected table during the short window when the affected table exists. All the same, I don't see that turning autovacuum off is required. If autovacuum is broken in this regard, we may get occasional, hard to reproduce build farm failures, but that would be more informative than no failures at all.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 12:20:35 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 12:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Right, good point. But... does that really apply to\n> 005_opclass_damage.pl? I feel like with the kind of physical damage\n> we're doing in 003_check.pl, it makes a lot of sense to stop vacuum\n> from crashing headlong into that table. But, 005 is doing \"logical\"\n> damage rather than \"physical\" damage, and I don't see why autovacuum\n> should misbehave in that kind of case. In fact, the fact that\n> autovacuum can handle such cases is one of the selling points for the\n> whole design of vacuum, as opposed to, for example, retail index\n> lookups.\n\nFWIW that is only 99.9% true (contrary to what README.HOT says). This\nis the case because nbtree page deletion will in fact search the tree\nto find a downlink to the target page, which must be removed at the\nsame time -- see the call to _bt_search() made within nbtpage.c.\n\nThis is much less of a problem than you'd think, though, even with an\nopclass that gives wrong answers all the time. Because it's also true\nthat _bt_getstackbuf() is remarkably tolerant when it actually goes to\nlocate the downlink -- because that happens via a linear search that\nmatches on downlink block number (it doesn't use the opclass for that\npart). This means that we'll accidentally fail to fail if the page is\n*somewhere* to the right in the \"true\" key space. Which probably means\nthat it has a greater than 50% chance of not failing with a 100%\nbroken opclass. Which probably makes our odds better with more\nplausible levels of misbehavior (e.g. collation changes).\n\nThat being said, I should make _bt_lock_subtree_parent() return false\nand back out of page deletion without raising an error in the case\nwhere we really cannot locate a valid downlink. We really ought to\nsoldier on when that happens, since we'll do that for a bunch of other\nreasons already. 
I believe that the only reason we throw an error\ntoday is for parity with the page split case (the main\n_bt_getstackbuf() call). But this isn't the same situation at all --\nthis is VACUUM.\n\nI will make this change to HEAD soon, barring objections.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 23 Mar 2021 12:41:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> That being said, I should make _bt_lock_subtree_parent() return false\n> and back out of page deletion without raising an error in the case\n> where we really cannot locate a valid downlink. We really ought to\n> soldier on when that happens, since we'll do that for a bunch of other\n> reasons already. I believe that the only reason we throw an error\n> today is for parity with the page split case (the main\n> _bt_getstackbuf() call). But this isn't the same situation at all --\n> this is VACUUM.\n\n> I will make this change to HEAD soon, barring objections.\n\n+1. Not deleting the upper page seems better than the alternatives.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Mar 2021 15:44:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 12:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I will make this change to HEAD soon, barring objections.\n>\n> +1. Not deleting the upper page seems better than the alternatives.\n\nFWIW it might also work that way as a holdover from the old page\ndeletion algorithm. These days we decide exactly which pages (leaf\npage plus possible internal pages) are to be deleted as a whole up\nfront (these are a subtree, though usually just a degenerate\nsingle-leaf-page subtree -- internal page deletions are rare).\n\nOne of the advantages of this design is that we verify practically all\nof the work involved in deleting an entire subtree up-front, inside\n_bt_lock_subtree_parent(). It's clearly safe to back out of it if it\nlooks dicey.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 23 Mar 2021 12:53:35 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 12:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> One of the advantages of this design is that we verify practically all\n> of the work involved in deleting an entire subtree up-front, inside\n> _bt_lock_subtree_parent(). It's clearly safe to back out of it if it\n> looks dicey.\n\nThat's taken care of. I just pushed a commit that teaches\n_bt_lock_subtree_parent() to press on.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 23 Mar 2021 16:13:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 17, 2021, at 9:00 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Of the toast pointer fields:\n> \n> int32 va_rawsize; /* Original data size (includes header) */\n> int32 va_extsize; /* External saved size (doesn't) */\n> Oid va_valueid; /* Unique ID of value within TOAST table */\n> Oid va_toastrelid; /* RelID of TOAST table containing it */\n> \n> all seem worth getting as part of any toast error message, even if these fields themselves are not corrupt. It just makes it easier to understand the context of the error you're looking at. At first I tried putting these into each message, but it is very wordy to say things like \"toast pointer with rawsize %u and extsize %u pointing at relation with oid %u\" and such. It made more sense to just add these four fields to the verify_heapam tuple format. That saves putting them in the message text itself, and has the benefit that you could filter the rows coming from verify_heapam() for ones where valueid is or is not null, for example. This changes the external interface of verify_heapam, but I didn't bother with a amcheck--1.3--1.4.sql because amcheck--1.2--1.3. sql was added as part of the v14 development work and has not yet been released. My assumption is that I can just change it, rather than making a new upgrade file.\n> \n> These patches fix the visibility rules and add extra toast checking. \n\nThese new patches address the same issues as v9 (which was never committed), and v10 (which was never even posted to this list), with some changes.\n\nRather than print out all four toast pointer fields for each toast failure, va_rawsize, va_extsize, and va_toastrelid are only mentioned in the corruption message if they are related to the specific corruption. 
Otherwise, just the va_valueid is mentioned in the corruption message.\n\nThe visibility rules fix is different in v11, relying on a visibility check which more closely follows the implementation of HeapTupleSatisfiesVacuumHorizon.\n\n\n\n\n \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 23 Mar 2021 23:13:07 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 2:13 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The visibility rules fix is different in v11, relying on a visibility check which more closely follows the implementation of HeapTupleSatisfiesVacuumHorizon.\n\nHmm. The header comment you wrote says \"If a tuple might not be\nvisible to any running transaction, then we must not check it.\" But, I\ndon't find that statement very clear: does it mean \"if there could be\neven one transaction to which this tuple is not visible, we must not\ncheck it\"? Or does it mean \"if the number of transactions that can see\nthis tuple could potentially be zero, then we must not check it\"? I\ndon't think either of those is actually what we care about. I think\nwhat we should be saying is \"if the tuple could have been inserted by\na transaction that also added a column to the table, but which\nultimately did not commit, then the table's current TupleDesc might\ndiffer from the one used to construct this tuple, so we must not check\nit.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Mar 2021 09:12:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 9:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Mar 24, 2021 at 2:13 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > The visibility rules fix is different in v11, relying on a visibility check which more closely follows the implementation of HeapTupleSatisfiesVacuumHorizon.\n>\n> Hmm. The header comment you wrote says \"If a tuple might not be\n> visible to any running transaction, then we must not check it.\" But, I\n> don't find that statement very clear: does it mean \"if there could be\n> even one transaction to which this tuple is not visible, we must not\n> check it\"? Or does it mean \"if the number of transactions that can see\n> this tuple could potentially be zero, then we must not check it\"? I\n> don't think either of those is actually what we care about. I think\n> what we should be saying is \"if the tuple could have been inserted by\n> a transaction that also added a column to the table, but which\n> ultimately did not commit, then the table's current TupleDesc might\n> differ from the one used to construct this tuple, so we must not check\n> it.\"\n\nHit send too soon. And I was wrong, too. Wahoo. Thinking about the\nbuildfarm failure, I realized that there's a second danger here,\nunrelated to the possibility of different TupleDescs, which we talked\nabout before: if the tuple is dead, we can't safely follow any TOAST\npointers, because the TOAST chunks might disappear at any time. So\ntechnically we could split the return value up into something\nthree-way: if the inserted is known to have committed, we can check\nthe tuple itself, because the TupleDesc has to be reasonable. And, if\nthe tuple is known not to be dead already, and known not to be in a\nstate where it could become dead while we're doing stuff, we can\nfollow the TOAST pointer. 
I'm not sure whether it's worth trying to be\nthat fancy or not.\n\nIf we were only concerned about the mismatched-TupleDesc problem, this\nfunction could return true in a lot more cases. Once we get to the\ncomment that says \"Okay, the inserter committed...\" we could just\nreturn true. Similarly, the HEAP_MOVED_IN and HEAP_MOVED_OFF cases\ncould just skip all the interior test and return true, because if the\ntuple is being moved, the original inserter has to have committed.\nConversely, however, the !HeapTupleHeaderXminCommitted ->\nTransactionIdIsCurrentTransactionId case probably ought to always\nreturn false. One could argue otherwise: if we're the inserter, then\nthe only in-progress transaction that might have changed the TupleDesc\nis us, so we could just consider this case to be a true return value\nalso, regardless of what's going on with xmax. In that case, we're not\nasking \"did the inserter definitely commit?\" but \"are the inserter's\npossible DDL changes definitely visible to us?\" which might be an OK\ndefinition too.\n\nHowever, the could-the-TOAST-data-disappear problem is another story.\nI don't see how we can answer that question correctly with the logic\nyou've got here, because you have no XID threshold. Consider the case\nwhere we reach this code:\n\n+ if (!(tuphdr->t_infomask & HEAP_XMAX_COMMITTED))\n+ {\n+ if\n(TransactionIdIsInProgress(HeapTupleHeaderGetRawXmax(tuphdr)))\n+ return true; /*\nHEAPTUPLE_DELETE_IN_PROGRESS */\n+ else if\n(!TransactionIdDidCommit(HeapTupleHeaderGetRawXmax(tuphdr)))\n+\n+ /*\n+ * Not in Progress, Not Committed, so either\nAborted or crashed\n+ */\n+ return true; /* HEAPTUPLE_LIVE */\n+\n+ /* At this point the xmax is known committed */\n+ }\n\nIf we reach the case where the code comment says\nHEAPTUPLE_DELETE_IN_PROGRESS, we know that the tuple isn't dead right\nnow, and so the TOAST tuples aren't dead either. 
But, by the time we\ngo try to look at the TOAST tuples, they might have become dead and\nbeen pruned away, because the deleting transaction can commit at any\ntime, and after that pruning can happen at any time. Our only\nguarantee that that won't happen is if the deleting XID is new enough\nthat it's invisible to some snapshot that our backend has registered.\nThat's approximately why HeapTupleSatisfiesVacuumHorizon needs to set\n*dead_after in this case and one other, and I think you have the same\nrequirement.\n\nI just noticed that this whole thing has another, related problem:\ncheck_tuple_header_and_visibilty() and check_tuple_attribute() are\ncalled from within check_tuple(), which is called while we hold a\nbuffer lock on the heap page. We should not be going and doing complex\noperations that might take their own buffer locks - like TOAST index\nchecks - while we're holding an lwlock. That's going to have to be\nchanged so that the TOAST pointer checking happens after\nUnlockReleaseBuffer(); in other words, we'll need to remember the\nTOAST pointers to go look up and actually look them up after\nUnlockReleaseBuffer(). But, when we do that, then the HEAPTUPLE_LIVE\ncase above has the same race condition that is already present in the\nHEAPTUPLE_DELETE_IN_PROGRESS case: after we release the buffer pin,\nsome other transaction might delete the tuple and the associated TOAST\ntuples, and they might then commit, and the tuple might become dead\nand get pruned away before we check the TOAST table.\n\nOn a related note, I notice that your latest patch removes all the\nlogic that complains about XIDs being out of bounds. I don't think\nthat's good. Those seem like important checks. They're important for\nfinding problems with the relation, and I think we probably also need\nthem because of the XID-horizon issue mentioned above. 
One possible\nway of looking at it is to say that the XID_BOUNDS_OK case has two\nsub-cases: either the XID is within bounds and is one that cannot\nbecome all-visible concurrently because it's not visible to all of our\nbackend's registered snapshots, or it's within bounds but does have\nthe possibility of becoming all-visible. In the former case, if it\nappears as XMAX we can safely follow TOAST pointers found within the\ntuple; in the latter case, we can't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Mar 2021 10:43:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "Mark,\n\nHere's a quick and very dirty sketch of what I think perhaps this\nlogic could look like. This is pretty much untested and it might be\nbuggy, but at least you can see whether we're thinking at all in the\nsame direction.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Mar 2021 16:46:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 24, 2021, at 1:46 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Mark,\n> \n> Here's a quick and very dirty sketch of what I think perhaps this\n> logic could look like. This is pretty much untested and it might be\n> buggy, but at least you can see whether we're thinking at all in the\n> same direction.\n\nThanks! The attached patch addresses your comments here and in your prior email. In particular, this patch changes the tuple visibility logic to not check tuples for which the inserting transaction aborted or is still in progress, and to not check toast for tuples deleted in transactions older than our transaction snapshot's xmin. A list of toasted attributes which are safe to check is compiled per main table page during the scan of the page, then checked after the buffer lock on the main page is released.\n\nIn the perhaps unusual case where verify_heapam() is called in a transaction which has also added tuples to the table being checked, this patch's visibility logic chooses not to check such tuples. I'm on the fence about this choice, and am mostly following your lead. I like that this decision maintains the invariant that we never check tuples which have not yet been committed.\n\nThe patch includes a bit of refactoring. In the old code, heap_check() performed clog bounds checking on xmin and xmax prior to calling check_tuple_header_and_visibilty(), but I think that's not such a great choice. If the tuple header is garbled to have random bytes in the xmin and xmax fields, and we can detect that situation because other tuple header fields are garbled in detectable ways, I'd rather get a report about the header being garbled than a report about the xmin or xmax being out of bounds. In the new code, the tuple header is checked first, then the visibility is checked, then the tuple is checked against the current relation description, then the tuple attributes are checked. 
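In outline, the reordered sequence looks something like the following toy sketch (hypothetical names only — the real checks take a HeapCheckContext and report corruption messages rather than operating on a toy struct like this):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real per-tuple checks. */
typedef struct ToyTuple
{
    bool header_ok;       /* tuple header fields are sane */
    bool visibility_ok;   /* xmin/xmax make the tuple checkable */
    bool matches_reldesc; /* tuple agrees with the relation descriptor */
} ToyTuple;

/*
 * Header first, then visibility, then the relation description, then
 * (not shown) the individual attributes.  Each later stage is skipped
 * once an earlier one fails, so a garbled header is reported as a
 * garbled header instead of as a bogus xmin or xmax.
 */
bool
check_tuple_toy(const ToyTuple *t)
{
    if (!t->header_ok)
        return false;
    if (!t->visibility_ok)
        return false;
    if (!t->matches_reldesc)
        return false;
    /* attribute checks would run here */
    return true;
}
```
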
I think the layout is easier to follow, too.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 29 Mar 2021 10:45:04 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 1:45 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Thanks! The attached patch addresses your comments here and in your prior email. In particular, this patch changes the tuple visibility logic to not check tuples for which the inserting transaction aborted or is still in progress, and to not check toast for tuples deleted in transactions older than our transaction snapshot's xmin. A list of toasted attributes which are safe to check is compiled per main table page during the scan of the page, then checked after the buffer lock on the main page is released.\n>\n> In the perhaps unusual case where verify_heapam() is called in a transaction which has also added tuples to the table being checked, this patch's visibility logic chooses not to check such tuples. I'm on the fence about this choice, and am mostly following your lead. I like that this decision maintains the invariant that we never check tuples which have not yet been committed.\n>\n> The patch includes a bit of refactoring. In the old code, heap_check() performed clog bounds checking on xmin and xmax prior to calling check_tuple_header_and_visibilty(), but I think that's not such a great choice. If the tuple header is garbled to have random bytes in the xmin and xmax fields, and we can detect that situation because other tuple header fields are garbled in detectable ways, I'd rather get a report about the header being garbled than a report about the xmin or xmax being out of bounds. In the new code, the tuple header is checked first, then the visibility is checked, then the tuple is checked against the current relation description, then the tuple attributes are checked. I think the layout is easier to follow, too.\n\nHmm, so this got ~10x bigger from my version. Could you perhaps\nseparate it out into a series of patches for easier review? 
Say, one\nthat just fixes the visibility logic, and then a second to avoid doing\nthe TOAST check with a buffer lock held, and then more than that if\nthere are other pieces that make sense to separate out?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Mar 2021 16:06:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 29, 2021, at 1:06 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Hmm, so this got ~10x bigger from my version. Could you perhaps\n> separate it out into a series of patches for easier review? Say, one\n> that just fixes the visibility logic, and then a second to avoid doing\n> the TOAST check with a buffer lock held, and then more than that if\n> there are other pieces that make sense to separate out?\n\nSure, here are four patches which do the same as the single v12 patch did.\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 29 Mar 2021 16:16:44 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 7:16 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Sure, here are four patches which do the same as the single v12 patch did.\n\nThanks. Here are some comments on 0003 and 0004:\n\nWhen you posted v11, you said that \"Rather than print out all four\ntoast pointer fields for each toast failure, va_rawsize, va_extsize,\nand va_toastrelid are only mentioned in the corruption message if they\nare related to the specific corruption. Otherwise, just the\nva_valueid is mentioned in the corruption message.\" I like that\nprinciple; in fact, as you know, I suggested it. But, with the v13\npatches applied, exactly zero of the callers to\nreport_toast_corruption() appear to be following it, because none of\nthem include the value ID. I think you need to revise the messages,\ne.g. \"toasted value for attribute %u missing from toast table\" ->\n\"toast value %u not found in toast table\"; \"final toast chunk number\n%u differs from expected value %u\" -> \"toast value %u was expected to\nend at chunk %u, but ended at chunk %u\"; \"toast chunk sequence number\nis null\" -> \"toast value %u has toast chunk with null sequence\nnumber\". In the first of those example cases, I think you need not\nmention the attribute number because it's already there in its own\ncolumn.\n\nOn a related note, it doesn't look like you are actually checking\nva_toastrelid here. Doing so seems like it would be a good idea. It\nalso seems like it would be good to check that the compressed size is\nless than or equal to the uncompressed size.\n\nI do not like the name tuple_is_volatile, because volatile has a\ncouple of meanings already, and this isn't one of them. A\nSQL-callable function is volatile if it might return different outputs\ngiven the same inputs, even within the same SQL statement. A C\nvariable is volatile if it might be magically modified in ways not\nknown to the compiler. 
I had suggested tuple_cannot_die_now, which is\ncloser to the mark. If you want to be even more precise, you could\ntalk about whether the tuple is potentially prunable (e.g.\ntuple_can_be_pruned, which inverts the sense). That's really what\nwe're worried about: whether MVCC rules would permit the tuple to be\npruned after we release the buffer lock and before we check TOAST.\n\nI would ideally prefer not to rename report_corruption(). The old name\nis clearer, and changing it produces a bunch of churn that I'd rather\navoid. Perhaps the common helper function could be called\nreport_corruption_internal(), and the callers could be\nreport_corruption() and report_toast_corruption().\n\nRegarding 0001 and 0002, I think the logic in 0002 looks a lot closer\nto correct now, but I want to go through it in more detail. I think,\nthough, that you've made some of my comments worse. For example, I\nwrote: \"It should be impossible for xvac to still be running, since\nwe've removed all that code, but even if it were, it ought to be safe\nto read the tuple, since the original inserter must have committed.\nBut, if the xvac transaction committed, this tuple (and its associated\nTOAST tuples) could be pruned at any time.\" You changed that to read\n\"We don't bother comparing against safe_xmin because the VACUUM FULL\nmust have committed prior to an upgrade and can't still be running.\"\nYour comment is shorter, which is a point in its favor, but what I was\ntrying to emphasize is that the logic would be correct EVEN IF we\nagain started to use HEAP_MOVED_OFF and HEAP_MOVED_IN again. Your\nversion makes it sound like the code would need to be revised in that\ncase. 
If that's true, then my comment was wrong, but I didn't think it\nwas true, or I wouldn't have written the comment in that way.\n\nAlso, and maybe this is a job for a separate patch, but then again\nmaybe not, I wonder if it's really a good idea for get_xid_status to\nreturn both a XidBoundsViolation and an XidCommitStatus. It seems to\nme (without checking all that carefully) that it might be better to\njust flatten all of that into a single enum, because right now it\nseems like you often end up with two consecutive switch statements\nwhere, perhaps, just one would suffice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Mar 2021 15:45:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Mar 30, 2021, at 12:45 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Mar 29, 2021 at 7:16 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Sure, here are four patches which do the same as the single v12 patch did.\n> \n> Thanks. Here are some comments on 0003 and 0004:\n> \n> When you posted v11, you said that \"Rather than print out all four\n> toast pointer fields for each toast failure, va_rawsize, va_extsize,\n> and va_toastrelid are only mentioned in the corruption message if they\n> are related to the specific corruption. Otherwise, just the\n> va_valueid is mentioned in the corruption message.\" I like that\n> principal; in fact, as you know, I suggested it. But, with the v13\n> patches applied, exactly zero of the callers to\n> report_toast_corruption() appear to be following it, because none of\n> them include the value ID. I think you need to revise the messages,\n> e.g.\n\nThese changes got lost between v11 and v12. I've put them back, as well as updating to use your language.\n\n> \"toasted value for attribute %u missing from toast table\" ->\n> \"toast value %u not found in toast table\";\n\nChanged.\n\n> \"final toast chunk number\n> %u differs from expected value %u\" -> \"toast value %u was expected to\n> end at chunk %u, but ended at chunk %u\";\n\nChanged.\n\n> \"toast chunk sequence number\n> is null\" -> \"toast value %u has toast chunk with null sequence\n> number\".\n\nChanged.\n\n> In the first of those example cases, I think you need not\n> mention the attribute number because it's already there in its own\n> column.\n\nCorrect. I'd removed that but lost that work in v12.\n\n> On a related note, it doesn't look like you are actually checking\n> va_toastrelid here. Doing so seems like it would be a good idea. 
It\n> also seems like it would be good to check that the compressed size is\n> less than or equal to the uncompressed size.\n\nYeah, those checks were in v11 but got lost when I changed things for v12. They are back in v14.\n\n> I do not like the name tuple_is_volatile, because volatile has a\n> couple of meanings already, and this isn't one of them. A\n> SQL-callable function is volatile if it might return different outputs\n> given the same inputs, even within the same SQL statement. A C\n> variable is volatile if it might be magically modified in ways not\n> known to the compiler. I had suggested tuple_cannot_die_now, which is\n> closer to the mark. If you want to be even more precise, you could\n> talk about whether the tuple is potentially prunable (e.g.\n> tuple_can_be_pruned, which inverts the sense). That's really what\n> we're worried about: whether MVCC rules would permit the tuple to be\n> pruned after we release the buffer lock and before we check TOAST.\n\nI used \"tuple_can_be_pruned\". I didn't like \"tuple_cannot_die_now\", and still don't like that name, as it has several wrong interpretations. One meaning of \"cannot die now\" is that it has become immortal. Another is \"cannot be deleted from the table\". \n\n> I would ideally prefer not to rename report_corruption(). The old name\n> is clearer, and changing it produces a bunch of churn that I'd rather\n> avoid. Perhaps the common helper function could be called\n> report_corruption_internal(), and the callers could be\n> report_corruption() and report_toast_corruption().\n\nYes, hence the commit message in the previous patch set, \"This patch can probably be left out if the committer believes it creates more git churn than it is worth.\" I've removed this patch from this next patch set, and used the function names you suggest.\n\n> Regarding 0001 and 0002, I think the logic in 0002 looks a lot closer\n> to correct now, but I want to go through it in more detail. 
I think,\n> though, that you've made some of my comments worse. For example, I\n> wrote: \"It should be impossible for xvac to still be running, since\n> we've removed all that code, but even if it were, it ought to be safe\n> to read the tuple, since the original inserter must have committed.\n> But, if the xvac transaction committed, this tuple (and its associated\n> TOAST tuples) could be pruned at any time.\" You changed that to read\n> \"We don't bother comparing against safe_xmin because the VACUUM FULL\n> must have committed prior to an upgrade and can't still be running.\"\n> Your comment is shorter, which is a point in its favor, but what I was\n> trying to emphasize is that the logic would be correct EVEN IF we\n> again started to use HEAP_MOVED_OFF and HEAP_MOVED_IN again. Your\n> version makes it sound like the code would need to be revised in that\n> case. If that's true, then my comment was wrong, but I didn't think it\n> was true, or I wouldn't have written the comment in that way.\n\nI think the logic would have to change if we brought back the old VACUUM FULL behavior.\n\nI'm not looking at the old VACUUM FULL code, but my assumption is that if the xvac code were resurrected, then when a tuple is moved off by a VACUUM FULL, the old tuple and associated toast cannot be pruned until concurrent transactions end. So, if amcheck is running more-or-less concurrently with the VACUUM FULL and has a snapshot xmin no newer than the xid of the VACUUM FULL's xid, it can check the toast associated with the moved off tuple after the VACUUM FULL commits. If instead the VACUUM FULL xid was older than amcheck's xmin, then the toast is in danger of being vacuumed away. So the logic in verify_heapam would need to change to think about this distinction. 
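In toy form, that distinction reduces to comparing the deleting (or moving) XID against the snapshot's xmin — sketched here with a plain integer standing in for TransactionId and a stub comparison that ignores the wraparound arithmetic the real TransactionIdPrecedes() handles:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t ToyXid;

/* Stub for TransactionIdPrecedes(); real XIDs compare modulo wraparound. */
bool
toy_xid_precedes(ToyXid a, ToyXid b)
{
    return a < b;
}

/*
 * TOAST for a deleted (or moved-off) tuple is only safe to follow if
 * the deleting XID does not precede our snapshot's xmin: in that case
 * some registered snapshot can still see the tuple, so the TOAST rows
 * cannot be vacuumed away underneath us.
 */
bool
toast_safe_to_check(ToyXid deleter_xid, ToyXid snapshot_xmin)
{
    return !toy_xid_precedes(deleter_xid, snapshot_xmin);
}
```
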
We don't have to concern ourselves about that, because VACUUM FULL cannot be running, and so the xid for it must be older than our xmin, and hence the toast is unconditionally not safe to check.\n\nI'm changing the comments back to how you had them, but I'd like to know why my reasoning is wrong.\n\n> Also, and maybe this is a job for a separate patch, but then again\n> maybe not, I wonder if it's really a good idea for get_xid_status to\n> return both a XidBoundsViolation and an XidCommitStatus. It seems to\n> me (without checking all that carefully) that it might be better to\n> just flatten all of that into a single enum, because right now it\n> seems like you often end up with two consecutive switch statements\n> where, perhaps, just one would suffice.\n\nget_xid_status was written to return XidBoundsViolation separately from returning by reference an XidCommitStatus because, if you pass null for the XidCommitStatus parameter, the function can return earlier without taking the XactTruncationLock and checking clog. I think that design made a lot of sense at the time get_xid_status was written, but there are no longer any callers passing null, so the function never returns early.\n\nI am hesitant to refactor get_xid_status as you suggest until we're sure no such callers who pass null are needed. So perhaps your idea of having that change as a separate patch for after this patch series is done and committed is the right strategy.\n\nAlso, even now, there are some places where the returned XidBoundsViolation is used right away, but some other processing happens before the XidCommitStatus is finally used. If they were one value in a merged enum, there would still be two switches at least in the location I'm thinking of.\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 30 Mar 2021 21:34:01 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 12:34 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I'm not looking at the old VACUUM FULL code, but my assumption is that if the xvac code were resurrected, then when a tuple is moved off by a VACUUM FULL, the old tuple and associated toast cannot be pruned until concurrent transactions end. So, if amcheck is running more-or-less concurrently with the VACUUM FULL and has a snapshot xmin no newer than the xid of the VACUUM FULL's xid, it can check the toast associated with the moved off tuple after the VACUUM FULL commits. If instead the VACUUM FULL xid was older than amcheck's xmin, then the toast is in danger of being vacuumed away. So the logic in verify_heapam would need to change to think about this distinction. We don't have to concern ourselves about that, because VACUUM FULL cannot be running, and so the xid for it must be older than our xmin, and hence the toast is unconditionally not safe to check.\n>\n> I'm changing the comments back to how you had them, but I'd like to know why my reasoning is wrong.\n\nLet's start by figuring out *whether* your reasoning is wrong. My\nassumption was that old-style VACUUM FULL would move tuples without\nretoasting. That is, if we decided to move a tuple from page 2 of the\nmain table to page 1, we would just write the tuple into page 1,\nmarking it moved-in, and on page 2 we would mark it moved-off. And\nthat we would not examine or follow any TOAST pointers at all, so\nwhatever TOAST entries existed would be shared between the two tuples.\nOne tuple or the other would eventually die, depending on whether xvac\nwent on to commit or abort, but either way the TOAST doesn't need\nupdating because there's always exactly 1 remaining tuple using\npointers to those TOAST values.\n\nYour assumption seems to be the opposite, that the TOASTed values\nwould be retoasted as part of VF. If that is true, then your idea is\nright.\n\nDo you agree with this analysis? 
If so, we can check the code and find\nout which way it actually worked.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Mar 2021 13:11:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 31, 2021, at 10:11 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Mar 31, 2021 at 12:34 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I'm not looking at the old VACUUM FULL code, but my assumption is that if the xvac code were resurrected, then when a tuple is moved off by a VACUUM FULL, the old tuple and associated toast cannot be pruned until concurrent transactions end. So, if amcheck is running more-or-less concurrently with the VACUUM FULL and has a snapshot xmin no newer than the xid of the VACUUM FULL's xid, it can check the toast associated with the moved off tuple after the VACUUM FULL commits. If instead the VACUUM FULL xid was older than amcheck's xmin, then the toast is in danger of being vacuumed away. So the logic in verify_heapam would need to change to think about this distinction. We don't have to concern ourselves about that, because VACUUM FULL cannot be running, and so the xid for it must be older than our xmin, and hence the toast is unconditionally not safe to check.\n>> \n>> I'm changing the comments back to how you had them, but I'd like to know why my reasoning is wrong.\n> \n> Let's start by figuring out *whether* your reasoning is wrong. My\n> assumption was that old-style VACUUM FULL would move tuples without\n> retoasting. That is, if we decided to move a tuple from page 2 of the\n> main table to page 1, we would just write the tuple into page 1,\n> marking it moved-in, and on page 2 we would mark it moved-off. 
And\n> that we would not examine or follow any TOAST pointers at all, so\n> whatever TOAST entries existed would be shared between the two tuples.\n> One tuple or the other would eventually die, depending on whether xvac\n> went on to commit or abort, but either way the TOAST doesn't need\n> updating because there's always exactly 1 remaining tuple using\n> pointers to those TOAST values.\n> \n> Your assumption seems to be the opposite, that the TOASTed values\n> would be retoasted as part of VF. If that is true, then your idea is\n> right.\n> \n> Do you agree with this analysis? If so, we can check the code and find\n> out which way it actually worked.\n\nActually, that makes a lot of sense without even looking at the old code. I was implicitly assuming that the toast table was undergoing a VF also, and that the toast pointers in the main table tuples would be updated to point to the new location, so we'd be unable to follow the pointers to the old location without danger of the old location entries being vacuumed away. But if the main table tuples get moved while keeping their toast pointers unaltered, then you don't have to worry about that, although you do have to worry that a VF of the main table doesn't help so much with toast table bloat.\n\nWe're only discussing this in order to craft the right comment for a bit of code with respect to a hypothetical situation in which VF gets resurrected, so I'm not sure this should be top priority, but I'm curious enough now to go read the old code....\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 31 Mar 2021 10:31:50 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 1:31 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Actually, that makes a lot of sense without even looking at the old code. I was implicitly assuming that the toast table was undergoing a VF also, and that the toast pointers in the main table tuples would be updated to point to the new location, so we'd be unable to follow the pointers to the old location without danger of the old location entries being vacuumed away. But if the main table tuples get moved while keeping their toast pointers unaltered, then you don't have to worry about that, although you do have to worry that a VF of the main table doesn't help so much with toast table bloat.\n>\n> We're only discussing this in order to craft the right comment for a bit of code with respect to a hypothetical situation in which VF gets resurrected, so I'm not sure this should be top priority, but I'm curious enough now to go read the old code....\n\nRight, well, we wouldn't be PostgreSQL hackers if we didn't spend lots\nof time worrying about obscure details. Whether that's good software\nengineering or mere pedantry is sometimes debatable.\n\nI took a look at commit 0a469c87692d15a22eaa69d4b3a43dd8e278dd64,\nwhich removed old-style VACUUM FULL, and AFAICS, it doesn't contain\nany references to tuple deforming, varlena, HeapTupleHasExternal, or\nanything else that would make me think it has the foggiest idea\nwhether the tuples it's moving around contain TOAST pointers, so I\nthink I had the right idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Mar 2021 13:41:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Mar 31, 2021, at 10:31 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Mar 31, 2021, at 10:11 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> On Wed, Mar 31, 2021 at 12:34 AM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> I'm not looking at the old VACUUM FULL code, but my assumption is that if the xvac code were resurrected, then when a tuple is moved off by a VACUUM FULL, the old tuple and associated toast cannot be pruned until concurrent transactions end. So, if amcheck is running more-or-less concurrently with the VACUUM FULL and has a snapshot xmin no newer than the xid of the VACUUM FULL's xid, it can check the toast associated with the moved off tuple after the VACUUM FULL commits. If instead the VACUUM FULL xid was older than amcheck's xmin, then the toast is in danger of being vacuumed away. So the logic in verify_heapam would need to change to think about this distinction. We don't have to concern ourselves about that, because VACUUM FULL cannot be running, and so the xid for it must be older than our xmin, and hence the toast is unconditionally not safe to check.\n>>> \n>>> I'm changing the comments back to how you had them, but I'd like to know why my reasoning is wrong.\n>> \n>> Let's start by figuring out *whether* your reasoning is wrong. My\n>> assumption was that old-style VACUUM FULL would move tuples without\n>> retoasting. That is, if we decided to move a tuple from page 2 of the\n>> main table to page 1, we would just write the tuple into page 1,\n>> marking it moved-in, and on page 2 we would mark it moved-off. 
And\n>> that we would not examine or follow any TOAST pointers at all, so\n>> whatever TOAST entries existed would be shared between the two tuples.\n>> One tuple or the other would eventually die, depending on whether xvac\n>> went on to commit or abort, but either way the TOAST doesn't need\n>> updating because there's always exactly 1 remaining tuple using\n>> pointers to those TOAST values.\n>> \n>> Your assumption seems to be the opposite, that the TOASTed values\n>> would be retoasted as part of VF. If that is true, then your idea is\n>> right.\n>> \n>> Do you agree with this analysis? If so, we can check the code and find\n>> out which way it actually worked.\n> \n> Actually, that makes a lot of sense without even looking at the old code. I was implicitly assuming that the toast table was undergoing a VF also, and that the toast pointers in the main table tuples would be updated to point to the new location, so we'd be unable to follow the pointers to the old location without danger of the old location entries being vacuumed away. But if the main table tuples get moved while keeping their toast pointers unaltered, then you don't have to worry about that, although you do have to worry that a VF of the main table doesn't help so much with toast table bloat.\n> \n> We're only discussing this in order to craft the right comment for a bit of code with respect to a hypothetical situation in which VF gets resurrected, so I'm not sure this should be top priority, but I'm curious enough now to go read the old code....\n\nWell, that's annoying. The documentation of postgres 8.2 for vacuum full [1] says,\n\n Selects \"full\" vacuum, which may reclaim more space, but takes much longer and exclusively locks the table.\n\nI read \"exclusively locks\" as meaning it takes an ExclusiveLock, but the code shows that it takes an AccessExclusiveLock. 
I think the docs are pretty misleading here, though I understand that grammatically it is hard to say \"accessively exclusively locks\" or such. But a part of my analysis was based on the reasoning that if VF only takes an ExclusiveLock, then there must be concurrent readers possible. VF went away long enough ago that I had forgotten exactly how inconvenient it was.\n\n[1] https://www.postgresql.org/docs/8.2/sql-vacuum.html\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 31 Mar 2021 10:44:55 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 1:44 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I read \"exclusively locks\" as meaning it takes an ExclusiveLock, but the code shows that it takes an AccessExclusiveLock. I think the docs are pretty misleading here, though I understand that grammatically it is hard to say \"accessively exclusively locks\" or such. But a part of my analysis was based on the reasoning that if VF only takes an ExclusiveLock, then there must be concurrent readers possible. VF went away long enough ago that I had forgotten exactly how inconvenient it was.\n\nIt kinda depends on what you mean by concurrent readers, because a\ntransaction could start on Monday and acquire an XID, and then on\nTuesday you could run VACUUM FULL on relation \"foo\", and then on\nWednesday the transaction from before could get around to reading some\ndata from \"foo\". The two transactions are concurrent, in the sense\nthat the 3-day transaction was running before the VACUUM FULL, was\nstill running after VACUUM FULL, read the same pages that the VACUUM\nFULL modified, and cares whether the XID from the VACUUM FULL\ncommitted or aborted. But, it's not concurrent in the sense that you\nnever have a situation where the VACUUM FULL does some of its\nmodifications, then an overlapping transaction sees them, and then it\ndoes the rest of them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 31 Mar 2021 13:51:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 12:34 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> These changes got lost between v11 and v12. I've put them back, as well as updating to use your language.\n\nHere's an updated patch that includes your 0001 and 0002 plus a bunch\nof changes by me. I intend to commit this version unless somebody\nspots a problem with it.\n\nHere are the things I changed:\n\n- I renamed tuple_can_be_pruned to tuple_could_be_pruned because I\nthink it does a better job suggesting that we're uncertain about what\nwill happen.\n\n- I got rid of bool header_garbled and changed it to bool result,\ninverting the sense, because it didn't seem useful to have a function\nthat ended with if (some_boolean) return false; return true when I\ncould end the function with return some_other_boolean.\n\n- I removed all the one-word comments that said /* corrupt */ or /*\ncheckable */ because they seemed redundant.\n\n- In the xmin_status section of check_tuple_visibility(), I got rid of\nthe xmin_status == XID_IS_CURRENT_XID and xmin_status ==\nXID_IN_PROGRESS cases because they were redundant with the xmin_status\n!= XID_COMMITTED case.\n\n- If xmax is a multi but seems to be garbled, I changed it to return\ntrue rather than false. The inserter is known to have committed by\nthat point, so I think it's OK to try to deform the tuple. We just\nshouldn't try to check TOAST.\n\n- I changed both switches over xmax_status to break in each case and\nthen return true after instead of returning true for each case. I\nthink this is clearer.\n\n- I changed get_xid_status() to perform the TransactionIdIs... checks\nin the same order that HeapTupleSatisfies...() does them. I believe\nthat it's incorrect to conclude that the transaction must be in\nprogress because it neither IsCurrentTransaction nor DidCommit nor\nDidAbort, because all of those things will be false for a transaction\nthat is running at the time of a system crash. 
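In toy form (a hypothetical standalone sketch, not the patch's actual get_xid_status, with stub booleans standing in for the real TransactionId predicates), the order of tests is:

```c
#include <stdbool.h>

typedef enum
{
    TOY_XID_CURRENT,
    TOY_XID_IN_PROGRESS,
    TOY_XID_COMMITTED,
    TOY_XID_ABORTED
} ToyXidStatus;

/* Stubs for TransactionIdIsCurrentTransactionId(),
 * TransactionIdIsInProgress() and TransactionIdDidCommit().  For a
 * transaction that was running at crash time, all three are false. */
typedef struct ToyXid
{
    bool is_current;
    bool in_progress;
    bool did_commit;
} ToyXid;

/*
 * Same test order as HeapTupleSatisfies...(): anything that is neither
 * current, nor in progress, nor committed is treated as aborted; there
 * is no separate DidAbort test, because a transaction that crashed
 * mid-flight would fail that test too.
 */
ToyXidStatus
toy_xid_status(const ToyXid *x)
{
    if (x->is_current)
        return TOY_XID_CURRENT;
    if (x->in_progress)
        return TOY_XID_IN_PROGRESS;
    if (x->did_commit)
        return TOY_XID_COMMITTED;
    return TOY_XID_ABORTED;
}
```
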
The correct rule is\nthat if it neither IsCurrentTransaction nor IsInProgress nor DidCommit\nthen it's aborted.\n\n- I moved a few comments and rewrote some others, including some of\nthe ones that you took from my earlier draft patch. The thing is, the\ncomment needs to be adjusted based on where you put it. Like, I had a\ncomment that says \"It should be impossible for xvac to still be\nrunning, since we've removed all that code, but even if it were, it\nought to be safe to read the tuple, since the original inserter must\nhave committed. But, if the xvac transaction committed, this tuple\n(and its associated TOAST tuples) could be pruned at any time.\" which\nin my version was right before a TransactionIdDidCommit() test, and\nexplains why that test is there and why the code does what it does as\na result. But in your version you've moved it to a place where we've\nalready tested that the transaction has committed, and more\nimportantly, where we've already tested that it's not still running.\nSaying that it \"should\" be impossible for it not to be running when\nwe've *actually checked that* doesn't make nearly as much sense as it\ndoes when we haven't checked that and aren't going to do so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Apr 2021 11:08:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 1, 2021, at 8:08 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Mar 31, 2021 at 12:34 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> These changes got lost between v11 and v12. I've put them back, as well as updating to use your language.\n> \n> Here's an updated patch that includes your 0001 and 0002 plus a bunch\n> of changes by me. I intend to commit this version unless somebody\n> spots a problem with it.\n> \n> Here are the things I changed:\n> \n> - I renamed tuple_can_be_pruned to tuple_could_be_pruned because I\n> think it does a better job suggesting that we're uncertain about what\n> will happen.\n\n+1\n\n> - I got rid of bool header_garbled and changed it to bool result,\n> inverting the sense, because it didn't seem useful to have a function\n> that ended with if (some_boolean) return false; return true when I\n> could end the function with return some_other_boolean.\n\n+1\n\n> - I removed all the one-word comments that said /* corrupt */ or /*\n> checkable */ because they seemed redundant.\n\nOk.\n\n> - In the xmin_status section of check_tuple_visibility(), I got rid of\n> the xmin_status == XID_IS_CURRENT_XID and xmin_status ==\n> XID_IN_PROGRESS cases because they were redundant with the xmin_status\n> != XID_COMMITTED case.\n\nOk.\n\n> - If xmax is a multi but seems to be garbled, I changed it to return\n> true rather than false. The inserter is known to have committed by\n> that point, so I think it's OK to try to deform the tuple. We just\n> shouldn't try to check TOAST.\n\nIt is hard to know what to do when at least one tuple header field is corrupt. You don't necessarily know which one it is. For example, if HEAP_XMAX_IS_MULTI is set, we try to interpret the xmax as a mxid, and if it is out of bounds, we report it as corrupt. But was the xmax corrupt? Or was the HEAP_XMAX_IS_MULTI bit corrupt? It's not clear. 
I took the view that if either xmin or xmax appear to be corrupt when interpreted in light of the various tuple header bits, all we really know is that the set of fields/bits don't make sense as a whole, so we report corruption, don't trust any of them, and abort further checking of the tuple. You have the burden of proof the other way around. If the xmin appears fine, and xmax appears corrupt, then we only know that xmax is corrupt, so the tuple is checkable because according to the xmin it committed.\n\nI don't think how you have it causes undue problems, since deforming the tuple when you shouldn't merely risks a bunch of extra not-so-helpful corruption messages. And hey, maybe they're helpful to somebody clever enough to diagnose why that particular bit of noise was generated. \n\n> - I changed both switches over xmax_status to break in each case and\n> then return true after instead of returning true for each case. I\n> think this is clearer.\n\nOk.\n\n> - I changed get_xid_status() to perform the TransactionIdIs... checks\n> in the same order that HeapTupleSatisfies...() does them. I believe\n> that it's incorrect to conclude that the transaction must be in\n> progress because it neither IsCurrentTransaction nor DidCommit nor\n> DidAbort, because all of those things will be false for a transaction\n> that is running at the time of a system crash. The correct rule is\n> that if it neither IsCurrentTransaction nor IsInProgress nor DidCommit\n> then it's aborted.\n\nOk.\n\n> - I moved a few comments and rewrote some others, including some of\n> the ones that you took from my earlier draft patch. The thing is, the\n> comment needs to be adjusted based on where you put it. Like, I had a\n> comment that says \"It should be impossible for xvac to still be\n> running, since we've removed all that code, but even if it were, it\n> ought to be safe to read the tuple, since the original inserter must\n> have committed. 
But, if the xvac transaction committed, this tuple\n> (and its associated TOAST tuples) could be pruned at any time.\" which\n> in my version was right before a TransactionIdDidCommit() test, and\n> explains why that test is there and why the code does what it does as\n> a result. But in your version you've moved it to a place where we've\n> already tested that the transaction has committed, and more\n> importantly, where we've already tested that it's not still running.\n> Saying that it \"should\" be impossible for it not to be running when\n> we've *actually checked that* doesn't make nearly as much sense as it\n> does when we haven't checked that and aren't going to do so.\n\n\n * If xmin_status happens to be XID_IN_PROGRESS, then in theory \n \nDid you mean to say XID_IS_CURRENT_XID here? \n\n/* xmax is an MXID, not an MXID. Sanity check it. */ \n\nIs it an MXID or isn't it?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 1 Apr 2021 09:32:44 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 12:32 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > - If xmax is a multi but seems to be garbled, I changed it to return\n> > true rather than false. The inserter is known to have committed by\n> > that point, so I think it's OK to try to deform the tuple. We just\n> > shouldn't try to check TOAST.\n>\n> It is hard to know what to do when at least one tuple header field is corrupt. You don't necessarily know which one it is. For example, if HEAP_XMAX_IS_MULTI is set, we try to interpret the xmax as a mxid, and if it is out of bounds, we report it as corrupt. But was the xmax corrupt? Or was the HEAP_XMAX_IS_MULTI bit corrupt? It's not clear. I took the view that if either xmin or xmax appear to be corrupt when interpreted in light of the various tuple header bits, all we really know is that the set of fields/bits don't make sense as a whole, so we report corruption, don't trust any of them, and abort further checking of the tuple. You have the burden of proof the other way around. If the xmin appears fine, and xmax appears corrupt, then we only know that xmax is corrupt, so the tuple is checkable because according to the xmin it committed.\n\nI agree that it's hard to be sure what's gone wrong once we start finding\ncorrupted data, but deciding that maybe xmin didn't really commit\nbecause we see that there's something wrong with xmax seems excessive\nto me. I thought about a related case: if xmax is a bad multi but is\nalso hinted invalid, should we try to follow TOAST pointers? I think\nthat's hard to say, because we don't know whether (1) the invalid\nmarking is in error, (2) it's wrong to consider it a multi rather than\nan XID, (3) the stored multi got overwritten with a garbage value, or\n(4) the stored multi got removed before the tuple was frozen. 
Not\nknowing which of those is the case, how are we supposed to decide\nwhether the TOAST tuples might have been (or be about to get) pruned?\n\nBut, in the case we're talking about here, I don't think it's a\nparticularly close decision. All we need to say is that if xmax or the\ninfomask bits pertaining to it are corrupted, we're still going to\nsuppose that xmin and the infomask bits pertaining to it, which are\nall different bytes and bits, are OK. To me, the contrary decision,\nnamely that a bogus xmax means xmin was probably lying about the\ntransaction having been committed in the first place, seems like a\nserious overreaction. As you say:\n\n> I don't think how you have it causes undue problems, since deforming the tuple when you shouldn't merely risks a bunch of extra not-so-helpful corruption messages. And hey, maybe they're helpful to somebody clever enough to diagnose why that particular bit of noise was generated.\n\nI agree. The biggest risk here is that we might omit >0 complaints\nwhen only 0 are justified. That will panic users. The possibility that\nwe might omit >x complaints when only x are justified, for some x>0,\nis also a risk, but it's not nearly as bad, because there's definitely\nsomething wrong, and it's just a question of what it is exactly. So we\nhave to be really conservative about saying that X is corruption if\nthere's any possibility that it might be fine. But once we've\ncomplained about one thing, we can take a more balanced approach about\nwhether to risk issuing more complaints. The possibility that\nsuppressing the additional complaints might complicate resolution of\nthe issue also needs to be considered.\n\n> * If xmin_status happens to be XID_IN_PROGRESS, then in theory\n>\n> Did you mean to say XID_IS_CURRENT_XID here?\n\nYes, I did, thanks.\n\n> /* xmax is an MXID, not an MXID. Sanity check it. 
*/\n>\n> Is it an MXID or isn't it?\n\nGood catch.\n\nNew patch attached.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Apr 2021 12:56:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 1, 2021, at 9:56 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 1, 2021 at 12:32 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>>> - If xmax is a multi but seems to be garbled, I changed it to return\n>>> true rather than false. The inserter is known to have committed by\n>>> that point, so I think it's OK to try to deform the tuple. We just\n>>> shouldn't try to check TOAST.\n>> \n>> It is hard to know what to do when at least one tuple header field is corrupt. You don't necessarily know which one it is. For example, if HEAP_XMAX_IS_MULTI is set, we try to interpret the xmax as a mxid, and if it is out of bounds, we report it as corrupt. But was the xmax corrupt? Or was the HEAP_XMAX_IS_MULTI bit corrupt? It's not clear. I took the view that if either xmin or xmax appear to be corrupt when interpreted in light of the various tuple header bits, all we really know is that the set of fields/bits don't make sense as a whole, so we report corruption, don't trust any of them, and abort further checking of the tuple. You have the burden of proof the other way around. If the xmin appears fine, and xmax appears corrupt, then we only know that xmax is corrupt, so the tuple is checkable because according to the xmin it committed.\n> \n> I agree that it's hard to be sure what's gone wrong once we start finding\n> corrupted data, but deciding that maybe xmin didn't really commit\n> because we see that there's something wrong with xmax seems excessive\n> to me. I thought about a related case: if xmax is a bad multi but is\n> also hinted invalid, should we try to follow TOAST pointers? I think\n> that's hard to say, because we don't know whether (1) the invalid\n> marking is in error, (2) it's wrong to consider it a multi rather than\n> an XID, (3) the stored multi got overwritten with a garbage value, or\n> (4) the stored multi got removed before the tuple was frozen. 
Not\n> knowing which of those is the case, how are we supposed to decide\n> whether the TOAST tuples might have been (or be about to get) pruned?\n> \n> But, in the case we're talking about here, I don't think it's a\n> particularly close decision. All we need to say is that if xmax or the\n> infomask bits pertaining to it are corrupted, we're still going to\n> suppose that xmin and the infomask bits pertaining to it, which are\n> all different bytes and bits, are OK. To me, the contrary decision,\n> namely that a bogus xmax means xmin was probably lying about the\n> transaction having been committed in the first place, seems like a\n> serious overreaction. As you say:\n> \n>> I don't think how you have it causes undue problems, since deforming the tuple when you shouldn't merely risks a bunch of extra not-so-helpful corruption messages. And hey, maybe they're helpful to somebody clever enough to diagnose why that particular bit of noise was generated.\n> \n> I agree. The biggest risk here is that we might omit >0 complaints\n> when only 0 are justified. That will panic users. The possibility that\n> we might omit >x complaints when only x are justified, for some x>0,\n> is also a risk, but it's not nearly as bad, because there's definitely\n> something wrong, and it's just a question of what it is exactly. So we\n> have to be really conservative about saying that X is corruption if\n> there's any possibility that it might be fine. But once we've\n> complained about one thing, we can take a more balanced approach about\n> whether to risk issuing more complaints. The possibility that\n> suppressing the additional complaints might complicate resolution of\n> the issue also needs to be considered.\n\nThis all seems fine to me. The main thing is that we don't go on to check the toast, which we don't.\n\n>> * If xmin_status happens to be XID_IN_PROGRESS, then in theory\n>> \n>> Did you mean to say XID_IS_CURRENT_XID here?\n> \n> Yes, I did, thanks.\n\nOuch. 
You've got a typo: s/XID_IN_CURRENT_XID/XID_IS_CURRENT_XID/\n\n>> /* xmax is an MXID, not an MXID. Sanity check it. */\n>> \n>> Is it an MXID or isn't it?\n> \n> Good catch.\n> \n> New patch attached.\n\nSeems fine other than the typo.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 1 Apr 2021 10:06:04 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 1:06 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Seems fine other than the typo.\n\nOK, let's try that again.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Apr 2021 13:20:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 1, 2021, at 10:20 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 1, 2021 at 1:06 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> Seems fine other than the typo.\n> \n> OK, let's try that again.\n\nLooks good!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 1 Apr 2021 10:24:24 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 1:24 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > On Apr 1, 2021, at 10:20 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> > OK, let's try that again.\n> Looks good!\n\nOK, committed. We still need to deal with what you had as 0003\nupthread, so I guess the next step is for me to spend some time\nreviewing that one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Apr 2021 13:41:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 1:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> OK, committed. We still need to deal with what you had as 0003\n> upthread, so I guess the next step is for me to spend some time\n> reviewing that one.\n\nI did this, and it was a bit depressing. It appears that we now have\nduplicate checks for xmin and xmax being out of the valid range.\nSomehow you have the removal of those duplicate checks in 0003, but\nwhy in the world didn't you put them into one of the previous patches\nand just make 0003 about fixing the\nholding-buffer-lock-while-following-TOAST-pointers problem? (And, gah,\nwhy did I not look carefully enough to notice that you hadn't done\nthat?)\n\nOther than that, I notice a few other things:\n\n- There are a total of two (2) calls in the current source code to\npalloc0fast, and hundreds of calls to palloc0. So I think you should\nforget about using the fast variant and just do what almost every\nother caller does.\n\n- If you want to make this code faster, a better idea would be to\navoid doing all of this allocate and free work and just allocate an\narray that's guaranteed to be big enough, and then keep track of how\nmany elements of that array are actually in use.\n\n- #ifdef DECOMPRESSION_CORRUPTION_CHECKING is not a useful way of\nintroducing such a feature. Either we do it for real and expose it via\nSQL and pg_amcheck as an optional behavior, or we rip it out and\nrevisit it later. Given the nearness of feature freeze, my vote is for\nthe latter.\n\n- I'd remove the USE_LZ4 bit, too. Let's not define the presence of\nLZ4 data in a non-LZ4-enabled cluster as corruption. If we do, then\npeople will expect to be able to use this to find places where they\nare dependent on LZ4 if they want to move away from it -- and if we\ndon't recurse into composite datums, that will not actually work.\n\n- check_toast_tuple() has an odd and slightly unclear calling\nconvention for which there are no comments. 
I wonder if it would be\nbetter to reverse things and make bool *error the return value and\nwhat is now the return value into a pointer argument, but whatever we\ndo I think it needs a few words in the comments. We don't need to\nslavishly explain every argument -- I think toasttup and ctx and tctx\nare reasonably clear -- but this is not.\n\n- To me it would be more logical to reverse the order of the\ntoast_pointer.va_toastrelid != ctx->rel->rd_rel->reltoastrelid and\nVARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) > toast_pointer.va_rawsize\n- VARHDRSZ checks. Whether we're pointing at the correct relation\nfeels more fundamental.\n\n- If we moved the toplevel foreach loop in check_toasted_attributes()\nout to the caller, say renaming the function to just\ncheck_toasted_attribute(), we'd save a level of indentation in that\nwhole function and probably add a tad of clarity, too. You wouldn't\nfeel the need to Assert(ctx.toasted_attributes == NIL) in the caller\nif the caller had just done list_free(ctx->toasted_attributes);\nctx->toasted_attributes = NIL.\n\n- Is there a reason we need a cross-check on both the number of chunks\nand on the total size? It seems to me that we should check that each\nindividual chunk has the size we expect, and that the total number of\nchunks is what we expect. The overall size is then necessarily\ncorrect.\n\n- Why are all the changes to the tests in this patch? What do they\nhave to do with getting the TOAST checks out from under the buffer\nlock? I really need you to structure the patch series so that each\npatch is about one topic and, equally, so that each topic is only\ncovered by one patch. Otherwise it's just way too confusing.\n\n- I think some of these messages need a bit of word-smithing, too, but\nwe can leave that for when we're closer to being done with this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 1 Apr 2021 16:08:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 1, 2021, at 1:08 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> \n> \n> - There are a total of two (2) calls in the current source code to\n> palloc0fast, and hundreds of calls to palloc0. So I think you should\n> forget about using the fast variant and just do what almost every\n> other caller does.\n\nDone.\n\n> - If you want to make this code faster, a better idea would be to\n> avoid doing all of this allocate and free work and just allocate an\n> array that's guaranteed to be big enough, and then keep track of how\n> many elements of that array are actually in use.\n\nSounds like premature optimization to me. I only used palloc0fast because the argument is compile-time known. I wasn't specifically attempting to speed anything up.\n\n> - #ifdef DECOMPRESSION_CORRUPTION_CHECKING is not a useful way of\n> introducing such a feature. Either we do it for real and expose it via\n> SQL and pg_amcheck as an optional behavior, or we rip it out and\n> revisit it later. Given the nearness of feature freeze, my vote is for\n> the latter.\n> \n> - I'd remove the USE_LZ4 bit, too. Let's not define the presence of\n> LZ4 data in a non-LZ4-enabled cluster as corruption. If we do, then\n> people will expect to be able to use this to find places where they\n> are dependent on LZ4 if they want to move away from it -- and if we\n> don't recurse into composite datums, that will not actually work.\n\nOk, I have removed this bit. I also removed the part of the patch that introduced a new corruption check, decompressing the data to see if it decompresses without error.\n\n> - check_toast_tuple() has an odd and slightly unclear calling\n> convention for which there are no comments. I wonder if it would be\n> better to reverse things and make bool *error the return value and\n> what is now the return value into a pointer argument, but whatever we\n> do I think it needs a few words in the comments. 
We don't need to\n> slavishly explain every argument -- I think toasttup and ctx and tctx\n> are reasonably clear -- but this is not.\n...\n> - Is there a reason we need a cross-check on both the number of chunks\n> and on the total size? It seems to me that we should check that each\n> individual chunk has the size we expect, and that the total number of\n> chunks is what we expect. The overall size is then necessarily\n> correct.\n\nGood point. I've removed the extra check on the total size, since it cannot be wrong if the checks on the individual chunk sizes were all correct. This eliminates the need for the odd calling convention for check_toast_tuple(), so I've changed that to return void and not take any return-by-reference arguments.\n\n> - To me it would be more logical to reverse the order of the\n> toast_pointer.va_toastrelid != ctx->rel->rd_rel->reltoastrelid and\n> VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) > toast_pointer.va_rawsize\n> - VARHDRSZ checks. Whether we're pointing at the correct relation\n> feels more fundamental.\n\nDone.\n\n> - If we moved the toplevel foreach loop in check_toasted_attributes()\n> out to the caller, say renaming the function to just\n> check_toasted_attribute(), we'd save a level of indentation in that\n> whole function and probably add a tad of clarity, too. You wouldn't\n> feel the need to Assert(ctx.toasted_attributes == NIL) in the caller\n> if the caller had just done list_free(ctx->toasted_attributes);\n> ctx->toasted_attributes = NIL.\n\nYou're right. It looks nicer that way. Changed.\n\n> - Why are all the changes to the tests in this patch? What do they\n> have to do with getting the TOAST checks out from under the buffer\n> lock? I really need you to structure the patch series so that each\n> patch is about one topic and, equally, so that each topic is only\n> covered by one patch. 
Otherwise it's just way too confusing.\n\nv18-0001 - Finishes work started in commit 3b6c1259f9 that was overlooked owing to how I had separated the changes in v17-0002 vs. v17-0003\n\nv18-0002 - Postpones the toast checks for a page until after the main table page lock is released\n\nv18-0003 - Improves the corruption messages in ways already discussed earlier in this thread. Changes the tests to expect the new messages, but adds no new checks\n\nv18-0004 - Adding corruption checks of toast pointers. Extends the regression tests to cover the new checks.\n\n> \n> - I think some of these messages need a bit of word-smithing, too, but\n> we can leave that for when we're closer to being done with this.\n\nOk.\n\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 4 Apr 2021 17:02:40 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Sun, Apr 4, 2021 at 8:02 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> v18-0001 - Finishes work started in commit 3b6c1259f9 that was overlooked owing to how I had separated the changes in v17-0002 vs. v17-0003\n\nCommitted.\n\n> v18-0002 - Postpones the toast checks for a page until after the main table page lock is released\n\nCommitted, but I changed list_free() to list_free_deep() to avoid a\nmemory leak, and I revised the commit message to mention the important\npoint that we need to avoid following TOAST pointers from\npotentially-prunable tuples.\n\n> v18-0003 - Improves the corruption messages in ways already discussed earlier in this thread. Changes the tests to expect the new messages, but adds no new checks\n\nKibitzing your message wording:\n\n\"toast value %u chunk data is null\" -> \"toast value %u chunk %d has\nnull data\". We can mention the chunk number this way.\n\n\"toast value %u corrupt extended chunk has invalid varlena header: %0x\n(sequence number %d)\" -> \"toast value %u chunk %d has invalid varlena\nheader %0x\". We can be more consistent about how we incorporate the\nchunk number into the text, and we don't really need to include the\nword corrupt, because all of these are corruption complaints, and I\nthink it looks better without the colon.\n\n\"toast value %u chunk sequence number %u does not match the expected\nsequence number %u\" -> \"toast value %u contains chunk %d where chunk\n%d was expected\". Shorter. Uses %d for a sequence number instead of\n%u, which I think is correct -- anyway we should have them all one way\nor all the other. 
I think I'd rather ditch the \"sequence number\"\ntechnology and just talk about \"chunk %d\" or whatever.\n\n\"toast value %u chunk sequence number %u exceeds the end chunk\nsequence number %u\" -> \"toast value %u chunk %d follows last expected\nchunk %d\"\n\n\"toast value %u chunk size %u differs from the expected size %u\" ->\n\"toast value %u chunk %d has size %u, but expected size %u\"\n\nOther complaints:\n\nYour commit message fails to mention the addition of\nVARATT_EXTERNAL_GET_POINTER, which is a significant change/bug fix\nunrelated to message wording.\n\nIt feels like we have a non-minimal number of checks/messages for the\nseries of toast chunks. I think that you get a message if we find a chunk after the last\nchunk we were expecting to find (curchunk > endchunk), and you also get\na message if we have the wrong number of chunks in total (chunkno !=\n(endchunk + 1)). Now maybe I'm wrong, but if the first message\ntriggers, it seems like the second message must also trigger. Is that\nwrong? If not, maybe we can get rid of the first one entirely? That's\nsuch a small change I think we could include it in this same patch, if\nit's a correct idea.\n\nOn a related note, as I think I said before, I still think we should\nbe rejiggering this so that we're not testing both the size of each\nindividual chunk and the total size, because that ought to be\nredundant. That might be better done as a separate patch but I think\nwe should try to clean it up.\n\n> v18-0004 - Adding corruption checks of toast pointers. Extends the regression tests to cover the new checks.\n\nI think we could check that the result of\nVARATT_EXTERNAL_GET_COMPRESS_METHOD is one of the values we expect to\nsee.\n\nUsing AllocSizeIsValid() seems pretty vile. I know that MaxAllocSize\nis 0x3FFFFFFF in no small part because that's the maximum length that\ncan be represented by a varlena, but I'm not sure it's a good idea to\ncouple the concepts so closely like this. 
Maybe we can just #define\nVARLENA_SIZE_LIMIT in this file and use that, and a message that says\nsize %u exceeds limit %u.\n\nI'm a little worried about whether the additional test cases are\nEndian-dependent at all. I don't immediately know what might be wrong\nwith them, but I'm going to think about that some more later. Any\nchance you have access to a Big-endian box where you can test this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Apr 2021 16:16:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 7, 2021, at 1:16 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sun, Apr 4, 2021 at 8:02 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> v18-0001 - Finishes work started in commit 3b6c1259f9 that was overlooked owing to how I had separated the changes in v17-0002 vs. v17-0003\n> \n> Committed.\n\nThank you.\n\n>> v18-0002 - Postpones the toast checks for a page until after the main table page lock is released\n> \n> Committed, but I changed list_free() to list_free_deep() to avoid a\n> memory leak, and I revised the commit message to mention the important\n> point that we need to avoid following TOAST pointers from\n> potentially-prunable tuples.\n\nThank you, and yes, I agree with that change.\n\n>> v18-0003 - Improves the corruption messages in ways already discussed earlier in this thread. Changes the tests to expect the new messages, but adds no new checks\n> \n> Kibitzing your message wording:\n> \n> \"toast value %u chunk data is null\" -> \"toast value %u chunk %d has\n> null data\". We can mention the chunk number this way.\n\nChanged.\n\n> \"toast value %u corrupt extended chunk has invalid varlena header: %0x\n> (sequence number %d)\" -> \"toast value %u chunk %d has invalid varlena\n> header %0x\". We can be more consistent about how we incorporate the\n> chunk number into the text, and we don't really need to include the\n> word corrupt, because all of these are corruption complaints, and I\n> think it looks better without the colon.\n\nChanged.\n\n> \"toast value %u chunk sequence number %u does not match the expected\n> sequence number %u\" -> \"toast value %u contains chunk %d where chunk\n> %d was expected\". Shorter. Uses %d for a sequence number instead of\n> %u, which I think is correct -- anyway we should have them all one way\n> or all the other. I think I'd rather ditch the \"sequence number\"\n> technology and just talk about \"chunk %d\" or whatever.\n\nI don't agree with this one. 
I do agree with changing the message, but not to the message you suggest.\n\nImagine a toasted attribute with 18 chunks numbered [0..17]. Then we update the toast to have only 6 chunks numbered [0..5] except we corruptly keep chunks numbered [12..17] in the toast table. We'd rather see a report like this:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 6 has sequence number 12, but expected sequence number 6\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 7 has sequence number 13, but expected sequence number 7\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 8 has sequence number 14, but expected sequence number 8\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 9 has sequence number 15, but expected sequence number 9\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 10 has sequence number 16, but expected sequence number 10\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 11 has sequence number 17, but expected sequence number 11\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 was expected to end at chunk 6, but ended at chunk 12\n\nthan one like this:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 contains chunk 12 where chunk 6 was expected\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 contains chunk 13 where chunk 7 was expected\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 contains chunk 14 where chunk 8 was expected\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# 
toast value 16444 contains chunk 15 where chunk 9 was expected\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 contains chunk 16 where chunk 10 was expected\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 contains chunk 17 where chunk 11 was expected\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 was expected to end at chunk 6, but ended at chunk 12\n\nbecause saying the toast value ended at \"chunk 12\" after saying that it contains \"chunk 17\" is contradictory. You need the distinction between the chunk number and the chunk sequence number, since in corrupt circumstances they may not be the same.\n\n> \"toast value %u chunk sequence number %u exceeds the end chunk\n> sequence number %u\" -> \"toast value %u chunk %d follows last expected\n> chunk %d\"\n\nChanged.\n\n> \"toast value %u chunk size %u differs from the expected size %u\" ->\n> \"toast value %u chunk %d has size %u, but expected size %u\"\n\nChanged.\n\n> Other complaints:\n> \n> Your commit message fails to mention the addition of\n> VARATT_EXTERNAL_GET_POINTER, which is a significant change/bug fix\n> unrelated to message wording.\n\nRight you are.\n\n> It feels like we have a non-minimal number of checks/messages for the\n> series of toast chunks. I think that if we find a chunk after the last\n> chunk we were expecting to find (curchunk > endchunk) and you also get\n> a message if we have the wrong number of chunks in total (chunkno !=\n> (endchunk + 1)). Now maybe I'm wrong, but if the first message\n> triggers, it seems like the second message must also trigger. Is that\n> wrong? If not, maybe we can get rid of the first one entirely? 
That's\n> such a small change I think we could include it in this same patch, if\n> it's a correct idea.\n\nMotivated by discussions we had off-list, I dug into this one.\n\nPurely as manual testing, and not part of the patch, I hacked the backend a bit to allow direct modification of the toast table. After corrupting the toast with the following bit of SQL:\n\n\tWITH chunk_limit AS (\n\t\tSELECT chunk_id, MAX(chunk_seq) AS maxseq\n\t\t\tFROM $toastname\n\t\t\tGROUP BY chunk_id)\n\t\tINSERT INTO $toastname (chunk_id, chunk_seq, chunk_data)\n\t\t\t(SELECT t.chunk_id,\n\t\t\t\t\tt.chunk_seq + cl.maxseq + CASE WHEN t.chunk_seq < 3 THEN 1 ELSE 7 END,\n\t\t\t\t\tt.chunk_data\n\t\t\t\tFROM $toastname t\n\t\t\t\tINNER JOIN chunk_limit cl\n\t\t\t\tON t.chunk_id = cl.chunk_id)\n\npg_amcheck reports the following corruption messages:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 6 follows last expected chunk 5\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 7 follows last expected chunk 5\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 8 follows last expected chunk 5\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 9 has sequence number 15, but expected sequence number 9\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 10 has sequence number 16, but expected sequence number 10\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 chunk 11 has sequence number 17, but expected sequence number 11\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n# toast value 16444 was expected to end at chunk 6, but ended at chunk 12\n\nI think if we'd left out the first three messages, it would read strangely. 
We would be complaining about three chunks with the wrong sequence number, then conclude that there were six extra chunks. A sufficiently savvy user might deduce the presence of chunks 6, 7, and 8, but the problem is more obvious (to my eyes, at least) if we keep the first three messages. This seems like a judgement call and not a clear argument either way, so if you still want me to change it, I guess I don't mind doing so.\n\n> On a related note, as I think I said before, I still think we should\n> be rejiggering this so that we're not testing both the size of each\n> individual chunk and the total size, because that ought to be\n> redundant. That might be better done as a separate patch but I think\n> we should try to clean it up.\n\nCan you point me to the exact check you are mentioning, and with which patch applied? I don't see any examples of this after applying the v18-0003.\n\n>> v18-0004 - Adding corruption checks of toast pointers. Extends the regression tests to cover the new checks.\n> \n> I think we could check that the result of\n> VARATT_EXTERNAL_GET_COMPRESS_METHOD is one of the values we expect to\n> see.\n\nYes. I had that before, pulled it out along with other toast compression checks, but have put it back in for v19.\n\n> Using AllocSizeIsValid() seems pretty vile. I know that MaxAllocSize\n> is 0x3FFFFFFF in no small part because that's the maximum length that\n> can be represented by a varlena, but I'm not sure it's a good idea to\n> couple the concepts so closely like this. Maybe we can just #define\n> VARLENA_SIZE_LIMIT in this file and use that, and a message that says\n> size %u exceeds limit %u.\n\nChanged.\n\n> I'm a little worried about whether the additional test cases are\n> Endian-dependent at all. I don't immediately know what might be wrong\n> with them, but I'm going to think about that some more later. 
Any\n> chance you have access to a Big-endian box where you can test this?\n\nI don't have a Big-endian box, but I think one of them may be wrong now that you mention the issue:\n\n# Corrupt column c's toast pointer va_extinfo field\n\nThe problem is that the 30-bit extsize and 2-bit cmid split is not being handled in the perl test, and I don't see an easy way to have perl's pack/unpack do that for us. There isn't any requirement that each possible corruption we check actually be manifested in the regression tests. The simplest solution is to remove this problematic test, so that's what I did. The other two new tests corrupt c_va_toastrelid and c_va_rawsize, both of which are read/written using unpack/pack, so perl should handle the endianness for us (I hope).\n\nIf you'd rather not commit these two extra tests, you don't have to, as I've split them out into v19-0003. But if you do commit them, it makes more sense to me to be one commit with 0002+0003 together, rather than separately. Not committing the new tests just means that verify_heapam() is able to detect additional forms of corruption that we're not covering in the regression tests. But that's already true for some other corruption types, such as detecting toast chunks with null sequence numbers.\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 8 Apr 2021 12:02:36 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
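[Editor's note: the chunk-number vs. chunk_seq distinction Mark argues for above can be modeled outside the server. This is an illustrative Python sketch of the reporting logic, not the actual verify_heapam.c code; the message wording mirrors the examples in the thread.]

```python
# Walk the chunks returned by the toast index scan for one value,
# tracking both the ordinal chunk number (position in the scan) and the
# on-disk chunk_seq, and complain when they disagree.
def check_chunks(seqs):
    """seqs: chunk_seq values in index-scan order. Returns complaint strings."""
    msgs = []
    for chunkno, seq in enumerate(seqs):
        if seq != chunkno:
            msgs.append(f"chunk {chunkno} has sequence number {seq}, "
                        f"but expected sequence number {chunkno}")
    return msgs

# Mark's scenario: chunks [0..5] plus stale leftover chunks [12..17].
# Chunks 6..11 (ordinals) get flagged with their stale sequence numbers.
print(check_chunks([0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]))
```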
{
"msg_contents": "On Thu, Apr 8, 2021 at 3:02 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Imagine a toasted attribute with 18 chunks numbered [0..17]. Then we update the toast to have only 6 chunks numbered [0..5] except we corruptly keep chunks numbered [12..17] in the toast table. We'd rather see a report like this:\n[ toast value NNN chunk NNN has sequence number NNN, but expected\nsequence number NNN ]\n> than one like this:\n[ toast value NNN contains chunk NNN where chunk NNN was expected ]\n>\n> because saying the toast value ended at \"chunk 12\" after saying that it contains \"chunk 17\" is contradictory. You need the distinction between the chunk number and the chunk sequence number, since in corrupt circumstances they may not be the same.\n\nHmm, I see your point, and that's a good example to illustrate it.\nBut, with that example in front of me, I am rather doubtful that\neither of these is what users actually want. Consider the case where I\nshould have chunks 0..17 and chunk 1 is just plain gone. This, by the\nway, seems like a pretty likely case to arise in practice, since all\nwe need is for a block to get truncated away or zeroed erroneously, or\nfor a tuple to get pruned that shouldn't. With either of the above\nschemes, I guess we're going to get a message about every chunk from 2\nto 17, complaining that they're all misnumbered. We might also get a\ncomplaint that the last chunk is the wrong size, and that the total\nnumber of chunks isn't right. What we really want is a single\ncomplaint saying chunk 1 is missing.\n\nLikewise, in your example, I sort of feel like what I really want,\nrather than either of the above outputs, is to get some messages like\nthis:\n\ntoast value NNN contains unexpected extra chunk [12-17]\n\nBoth your phrasing for those messages and what I suggested make it\nsound like the problem is that the chunk number is wrong. But that\ndoesn't seem like it's taking the right view of the situation. 
Chunks\n12-17 shouldn't exist at all, and if they do, we should say that, e.g.\nby complaining about something like \"toast value 16444 chunk 12\nfollows last expected chunk 5\"\n\nIn other words, I don't buy the idea that the user will accept the\nidea that there's a chunk number and a chunk sequence number, and that\nthey should know the difference between those things and what each of\nthem are. They're entitled to imagine that there's just one thing, and\nthat we're going to tell them about values that are extra or missing.\nThe fact that we're not doing that seems like it's just a matter of\nmissing code. If we start the index scan and get chunk 4, we can\neasily emit messages for chunks 0..3 right on the spot, declaring them\nmissing. Things do get a bit hairy if the index scan returns values\nout of order: what if it gives us chunk_seq = 2 and then chunk_seq =\n1? But I think we could handle that by just issuing a complaint in any\nsuch case that \"toast index returns chunks out of order for toast\nvalue NNN\" and stopping further checking of that toast value.\n\n> Purely as manual testing, and not part of the patch, I hacked the backend a bit to allow direct modification of the toast table. 
After corrupting the toast with the following bit of SQL:\n>\n> WITH chunk_limit AS (\n> SELECT chunk_id, MAX(chunk_seq) AS maxseq\n> FROM $toastname\n> GROUP BY chunk_id)\n> INSERT INTO $toastname (chunk_id, chunk_seq, chunk_data)\n> (SELECT t.chunk_id,\n> t.chunk_seq + cl.maxseq + CASE WHEN t.chunk_seq < 3 THEN 1 ELSE 7 END,\n> t.chunk_data\n> FROM $toastname t\n> INNER JOIN chunk_limit cl\n> ON t.chunk_id = cl.chunk_id)\n>\n> pg_amcheck reports the following corruption messages:\n>\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 chunk 6 follows last expected chunk 5\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 chunk 7 follows last expected chunk 5\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 chunk 8 follows last expected chunk 5\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 chunk 9 has sequence number 15, but expected sequence number 9\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 chunk 10 has sequence number 16, but expected sequence number 10\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 chunk 11 has sequence number 17, but expected sequence number 11\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n> # toast value 16444 was expected to end at chunk 6, but ended at chunk 12\n>\n> I think if we'd left out the first three messages, it would read strangely. We would be complaining about three chunks with the wrong sequence number, then conclude that there were six extra chunks. A sufficiently savvy user might deduce the presence of chunks 6, 7, and 8, but the problem is more obvious (to my eyes, at least) if we keep the first three messages. 
This seems like a judgement call and not a clear argument either way, so if you still want me to change it, I guess I don't mind doing so.\n\nI mean, looking at it, the question here is why it's not just using\nthe same message for all of them. The fact that the chunk numbers are\nhigher than 5 is the problem. The sequence numbers seem like just a\ndistraction.\n\n> > On a related note, as I think I said before, I still think we should\n> > be rejiggering this so that we're not testing both the size of each\n> > individual chunk and the total size, because that ought to be\n> > redundant. That might be better done as a separate patch but I think\n> > we should try to clean it up.\n>\n> Can you point me to the exact check you are mentioning, and with which patch applied? I don't see any examples of this after applying the v18-0003.\n\nHmm, my mistake, I think.\n\n> > I'm a little worried about whether the additional test cases are\n> > Endian-dependent at all. I don't immediately know what might be wrong\n> > with them, but I'm going to think about that some more later. Any\n> > chance you have access to a Big-endian box where you can test this?\n>\n> I don't have a Big-endian box, but I think one of them may be wrong now that you mention the issue:\n>\n> # Corrupt column c's toast pointer va_extinfo field\n>\n> The problem is that the 30-bit extsize and 2-bit cmid split is not being handled in the perl test, and I don't see an easy way to have perl's pack/unpack do that for us. There isn't any requirement that each possible corruption we check actually be manifested in the regression tests. The simplest solution is to remove this problematic test, so that's what I did. 
The other two new tests corrupt c_va_toastrelid and c_va_rawsize, both of which are read/written using unpack/pack, so perl should handle the endianness for us (I hope).\n\nI don't immediately see why this particular thing should be an issue.\nThe format of the varlena header itself is different on big-endian and\nlittle-endian machines, which is why postgres.h has all this stuff\nconditioned on WORDS_BIGENDIAN. But va_extinfo doesn't have any\nsimilar treatment, so I'm not sure what could go wrong there, as long\nas the 4-byte value as a whole is being packed and unpacked according\nto the machine's endian-ness.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 16:05:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
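[Editor's note: Robert's proposal — declare missing chunks on the spot, and stop checking a value if the index scan ever goes backward — can be sketched as follows. A hypothetical Python model under the assumption that the scan yields chunk_seq values; not PostgreSQL source.]

```python
def report_gaps(seqs):
    """seqs: chunk_seq values in index-scan order. Returns complaint strings."""
    msgs = []
    prev = -1
    for seq in seqs:
        if seq <= prev:
            # Scan went backward (or repeated): complain once and give up
            # on this toast value, as suggested above.
            msgs.append("toast index returns chunks out of order")
            return msgs
        if seq > prev + 1:
            # Declare the skipped chunks missing, as a range.
            lo, hi = prev + 1, seq - 1
            msgs.append(f"missing chunk {lo}" if lo == hi
                        else f"missing chunks {lo} through {hi}")
        prev = seq
    return msgs

# Chunk 1 gone from [0..17]: a single "missing chunk 1" complaint,
# rather than sixteen misnumbering complaints.
print(report_gaps([0] + list(range(2, 18))))
```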
{
"msg_contents": "\n\n> On Apr 8, 2021, at 1:05 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 8, 2021 at 3:02 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> Imagine a toasted attribute with 18 chunks numbered [0..17]. Then we update the toast to have only 6 chunks numbered [0..5] except we corruptly keep chunks numbered [12..17] in the toast table. We'd rather see a report like this:\n> [ toast value NNN chunk NNN has sequence number NNN, but expected\n> sequence number NNN ]\n>> than one like this:\n> [ toast value NNN contains chunk NNN where chunk NNN was expected ]\n>> \n>> because saying the toast value ended at \"chunk 12\" after saying that it contains \"chunk 17\" is contradictory. You need the distinction between the chunk number and the chunk sequence number, since in corrupt circumstances they may not be the same.\n> \n> Hmm, I see your point, and that's a good example to illustrate it.\n> But, with that example in front of me, I am rather doubtful that\n> either of these is what users actually want. Consider the case where I\n> should have chunks 0..17 and chunk 1 is just plain gone. This, by the\n> way, seems like a pretty likely case to arise in practice, since all\n> we need is for a block to get truncated away or zeroed erroneously, or\n> for a tuple to get pruned that shouldn't. With either of the above\n> schemes, I guess we're going to get a message about every chunk from 2\n> to 17, complaining that they're all misnumbered. We might also get a\n> complaint that the last chunk is the wrong size, and that the total\n> number of chunks isn't right. 
What we really want is a single\n> complaint saying chunk 1 is missing.\n> \n> Likewise, in your example, I sort of feel like what I really want,\n> rather than either of the above outputs, is to get some messages like\n> this:\n> \n> toast value NNN contains unexpected extra chunk [12-17]\n> \n> Both your phrasing for those messages and what I suggested make it\n> sound like the problem is that the chunk number is wrong. But that\n> doesn't seem like it's taking the right view of the situation. Chunks\n> 12-17 shouldn't exist at all, and if they do, we should say that, e.g.\n> by complaining about something like \"toast value 16444 chunk 12\n> follows last expected chunk 5\"\n> \n> In other words, I don't buy the idea that the user will accept the\n> idea that there's a chunk number and a chunk sequence number, and that\n> they should know the difference between those things and what each of\n> them are. They're entitled to imagine that there's just one thing, and\n> that we're going to tell them about value that are extra or missing.\n> The fact that we're not doing that seems like it's just a matter of\n> missing code.\n\nSomehow, we have to get enough information about chunk_seq discontinuity into the output that if the user forwards it to -hackers, or if the output comes from a buildfarm critter, that we have all the information to help diagnose what went wrong.\n\nAs a specific example, if the va_rawsize suggests 2 chunks, and we find 150 chunks all with contiguous chunk_seq values, that is different from a debugging point of view than if we find 150 chunks with chunk_seq values spread all over the [0..MAXINT] range. We can't just tell the user that there were 148 extra chunks. We also shouldn't phrase the error in terms of \"extra chunks\", since it might be the va_rawsize that is corrupt.\n\nI agree that the current message output might be overly verbose in how it reports this information. 
Conceptually, we want to store up information about the chunk issues and report them all at once, but that's hard to do in general, as the number of chunk_seq discontinuities might be quite large, much too large to fit reasonably into any one message. Maybe we could report just the first N discontinuities rather than all of them, but if somebody wants to open a hex editor and walk through the toast table, they won't appreciate having the corruption information truncated like that.\n\nAll this leads me to believe that we should report the following:\n\n1) If the total number of chunks retrieved differs from the expected number, report how many we expected vs. how many we got\n2) If the chunk_seq numbers are discontiguous, report each discontiguity.\n3) If the index scan returned chunks out of chunk_seq order, report that\n4) If any chunk is not the expected size, report that\n\nSo, for your example of chunk 1 missing from chunks [0..17], we'd report that we got one fewer chunks than we expected, that the second chunk seen was discontiguous from the first chunk seen, that the final chunk seen was smaller than expected by M bytes, and that the total size was smaller than we expected by N bytes. The third of those is somewhat misleading, since the final chunk was presumably the right size; we just weren't expecting to hit a partial chunk quite yet. But I don't see how to make that better in the general case.\n\n> If we start the index scan and get chunk 4, we can\n> easily emit messages for chunks 0..3 right on the spot, declaring them\n> missing. Things do get a bit hairy if the index scan returns values\n> out of order: what if it gives us chunk_seq = 2 and then chunk_seq =\n> 1? 
But I think we could handle that by just issuing a complaint in any\n> such case that \"toast index returns chunks out of order for toast\n> value NNN\" and stopping further checking of that toast value.\n> \n>> Purely as manual testing, and not part of the patch, I hacked the backend a bit to allow direct modification of the toast table. After corrupting the toast with the following bit of SQL:\n>> \n>> WITH chunk_limit AS (\n>> SELECT chunk_id, MAX(chunk_seq) AS maxseq\n>> FROM $toastname\n>> GROUP BY chunk_id)\n>> INSERT INTO $toastname (chunk_id, chunk_seq, chunk_data)\n>> (SELECT t.chunk_id,\n>> t.chunk_seq + cl.maxseq + CASE WHEN t.chunk_seq < 3 THEN 1 ELSE 7 END,\n>> t.chunk_data\n>> FROM $toastname t\n>> INNER JOIN chunk_limit cl\n>> ON t.chunk_id = cl.chunk_id)\n>> \n>> pg_amcheck reports the following corruption messages:\n>> \n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 chunk 6 follows last expected chunk 5\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 chunk 7 follows last expected chunk 5\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 chunk 8 follows last expected chunk 5\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 chunk 9 has sequence number 15, but expected sequence number 9\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 chunk 10 has sequence number 16, but expected sequence number 10\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 chunk 11 has sequence number 17, but expected sequence number 11\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 1, attribute 2:\n>> # toast value 16444 was expected to end at chunk 6, but ended at chunk 12\n>> \n>> I think if we'd left out the 
first three messages, it would read strangely. We would be complaining about three chunks with the wrong sequence number, then conclude that there were six extra chunks. A sufficiently savvy user might deduce the presence of chunks 6, 7, and 8, but the problem is more obvious (to my eyes, at least) if we keep the first three messages. This seems like a judgement call and not a clear argument either way, so if you still want me to change it, I guess I don't mind doing so.\n> \n> I mean, looking at it, the question here is why it's not just using\n> the same message for all of them. The fact that the chunk numbers are\n> higher than 5 is the problem. The sequence numbers seem like just a\n> distraction.\n\nAgain, I don't think we can reach that conclusion. You are biasing the corruption reports in favor of believing the va_rawsize rather than believing the toast table.\n\n>>> On a related note, as I think I said before, I still think we should\n>>> be rejiggering this so that we're not testing both the size of each\n>>> individual chunk and the total size, because that ought to be\n>>> redundant. That might be better done as a separate patch but I think\n>>> we should try to clean it up.\n>> \n>> Can you point me to the exact check you are mentioning, and with which patch applied? I don't see any examples of this after applying the v18-0003.\n> \n> Hmm, my mistake, I think.\n> \n>>> I'm a little worried about whether the additional test cases are\n>>> Endian-dependent at all. I don't immediately know what might be wrong\n>>> with them, but I'm going to think about that some more later. 
Any\n>>> chance you have access to a Big-endian box where you can test this?\n>> \n>> I don't have a Big-endian box, but I think one of them may be wrong now that you mention the issue:\n>> \n>> # Corrupt column c's toast pointer va_extinfo field\n>> \n>> The problem is that the 30-bit extsize and 2-bit cmid split is not being handled in the perl test, and I don't see an easy way to have perl's pack/unpack do that for us. There isn't any requirement that each possible corruption we check actually be manifested in the regression tests. The simplest solution is to remove this problematic test, so that's what I did. The other two new tests corrupt c_va_toastrelid and c_va_rawsize, both of which are read/written using unpack/pack, so perl should handle the endianness for us (I hope).\n> \n> I don't immediately see why this particular thing should be an issue.\n> The format of the varlena header itself is different on big-endian and\n> little-endian machines, which is why postgres.h has all this stuff\n> conditioned on WORDS_BIGENDIAN. But va_extinfo doesn't have any\n> similar treatment, so I'm not sure what could go wrong there, as long\n> as the 4-byte value as a whole is being packed and unpacked according\n> to the machine's endian-ness.\n\nGood point. Perhaps the test was ok after all.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 8 Apr 2021 14:21:28 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
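[Editor's note: the va_extinfo split discussed above — as I understand the layout in question, the low 30 bits carry the external saved size and the top 2 bits the compression method ID — can be expressed with plain masks and shifts once the 4-byte field has been unpacked with the machine's endianness. A sketch, with the bit widths as an assumption:]

```python
# Assumed layout: 30-bit extsize in the low bits, 2-bit compression
# method ID in the high bits of the 32-bit va_extinfo field.
VARLENA_EXTSIZE_BITS = 30
VARLENA_EXTSIZE_MASK = (1 << VARLENA_EXTSIZE_BITS) - 1

def split_extinfo(extinfo):
    """Return (extsize, cmid) from a 32-bit va_extinfo value."""
    return extinfo & VARLENA_EXTSIZE_MASK, extinfo >> VARLENA_EXTSIZE_BITS

def join_extinfo(extsize, cmid):
    """Pack extsize and cmid back into a 32-bit va_extinfo value."""
    return (extsize & VARLENA_EXTSIZE_MASK) | (cmid << VARLENA_EXTSIZE_BITS)
```

As Robert notes, the split itself is endian-independent; only the pack/unpack of the whole 4-byte field needs to honor the machine's byte order.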
{
"msg_contents": "On Thu, Apr 8, 2021 at 5:21 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> All this leads me to believe that we should report the following:\n>\n> 1) If the total number of chunks retrieved differs from the expected number, report how many we expected vs. how many we got\n> 2) If the chunk_seq numbers are discontiguous, report each discontiguity.\n> 3) If the index scan returned chunks out of chunk_seq order, report that\n> 4) If any chunk is not the expected size, report that\n>\n> So, for your example of chunk 1 missing from chunks [0..17], we'd report that we got one fewer chunks than we expected, that the second chunk seen was discontiguous from the first chunk seen, that the final chunk seen was smaller than expected by M bytes, and that the total size was smaller than we expected by N bytes. The third of those is somewhat misleading, since the final chunk was presumably the right size; we just weren't expecting to hit a partial chunk quite yet. But I don't see how to make that better in the general case.\n\nHmm, that might be OK. It seems like it's going to be a bit verbose in\nsimple cases like 1 missing chunk, but on the plus side, it avoids a\nmountain of output if the raw size has been overwritten with a\ngigantic bogus value. But, how is #2 different from #3? Those sound\nlike the same thing to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 18:11:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 8, 2021, at 3:11 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 8, 2021 at 5:21 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> All this leads me to believe that we should report the following:\n>> \n>> 1) If the total number of chunks retrieved differs from the expected number, report how many we expected vs. how many we got\n>> 2) If the chunk_seq numbers are discontiguous, report each discontiguity.\n>> 3) If the index scan returned chunks out of chunk_seq order, report that\n>> 4) If any chunk is not the expected size, report that\n>> \n>> So, for your example of chunk 1 missing from chunks [0..17], we'd report that we got one fewer chunks than we expected, that the second chunk seen was discontiguous from the first chunk seen, that the final chunk seen was smaller than expected by M bytes, and that the total size was smaller than we expected by N bytes. The third of those is somewhat misleading, since the final chunk was presumably the right size; we just weren't expecting to hit a partial chunk quite yet. But I don't see how to make that better in the general case.\n> \n> Hmm, that might be OK. It seems like it's going to be a bit verbose in\n> simple cases like 1 missing chunk, but on the plus side, it avoids a\n> mountain of output if the raw size has been overwritten with a\n> gigantic bogus value. But, how is #2 different from #3? Those sound\n> like the same thing to me.\n\n#2 is if chunk_seq goes up but skips numbers. #3 is if chunk_seq ever goes down, meaning the index scan did something unexpected.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 8 Apr 2021 15:51:41 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 6:51 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> #2 is if chunk_seq goes up but skips numbers. #3 is if chunk_seq ever goes down, meaning the index scan did something unexpected.\n\nYeah, sure. But I think we could probably treat those the same way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Apr 2021 18:55:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 8, 2021, at 3:11 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 8, 2021 at 5:21 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> All this leads me to believe that we should report the following:\n>> \n>> 1) If the total number of chunks retrieved differs from the expected number, report how many we expected vs. how many we got\n>> 2) If the chunk_seq numbers are discontiguous, report each discontiguity.\n>> 3) If the index scan returned chunks out of chunk_seq order, report that\n>> 4) If any chunk is not the expected size, report that\n>> \n>> So, for your example of chunk 1 missing from chunks [0..17], we'd report that we got one fewer chunks than we expected, that the second chunk seen was discontiguous from the first chunk seen, that the final chunk seen was smaller than expected by M bytes, and that the total size was smaller than we expected by N bytes. The third of those is somewhat misleading, since the final chunk was presumably the right size; we just weren't expecting to hit a partial chunk quite yet. But I don't see how to make that better in the general case.\n> \n> Hmm, that might be OK. It seems like it's going to be a bit verbose in\n> simple cases like 1 missing chunk, but on the plus side, it avoids a\n> mountain of output if the raw size has been overwritten with a\n> gigantic bogus value. But, how is #2 different from #3? Those sound\n> like the same thing to me.\n\nI think #4, above, requires some clarification. If there are missing chunks, the very definition of how large we expect subsequent chunks to be is ill-defined. I took a fairly conservative approach to avoid lots of bogus complaints about chunks that are of unexpected size. 
Not all such complaints are removed, but enough are removed that I needed to add a final complaint at the end about the total size seen not matching the total size expected.\n\nHere are a set of corruptions with the corresponding corruption reports from before and from after the code changes. The corruptions are *not* cumulative.\n\nHonestly, I'm not totally convinced that these changes are improvements in all cases. Let me know if you want further changes, or if you'd like to see other corruptions and their before and after results.\n\n\nCorruption #1:\n\n\tUPDATE $toastname SET chunk_seq = chunk_seq + 1000\n\nBefore:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 0 has sequence number 1000, but expected sequence number 0\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 1 has sequence number 1001, but expected sequence number 1\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 2 has sequence number 1002, but expected sequence number 2\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 3 has sequence number 1003, but expected sequence number 3\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 4 has sequence number 1004, but expected sequence number 4\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 5 has sequence number 1005, but expected sequence number 5\n\nAfter:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunks 0 through 999\n\n\nCorruption #2:\n\n\tUPDATE $toastname SET chunk_seq = chunk_seq * 1000\n\nBefore:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 1 has sequence number 1000, but 
expected sequence number 1\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 2 has sequence number 2000, but expected sequence number 2\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 3 has sequence number 3000, but expected sequence number 3\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 4 has sequence number 4000, but expected sequence number 4\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 5 has sequence number 5000, but expected sequence number 5\n\nAfter:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunks 1 through 999\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunks 1001 through 1999\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunks 2001 through 2999\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunks 3001 through 3999\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunks 4001 through 4999\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 3, attribute 2:\n\n\nCorruption #3:\n\n\tUPDATE $toastname SET chunk_id = (chunk_id::integer + 10000000)::oid WHERE chunk_seq = 3\n\nBefore:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 3 has sequence number 4, but expected sequence number 3\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 4 has sequence number 5, but expected sequence number 4\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 
16445 was expected to end at chunk 6, but ended at chunk 5\n\nAfter:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 missing chunk 3\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 chunk 4 has size 20, but expected size 1996\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 was expected to end at chunk 6, but ended at chunk 5\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n# toast value 16445 was expected to have size 10000, but had size 8004\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 3, attribute 2:\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 9 Apr 2021 11:50:32 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 9, 2021 at 2:50 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I think #4, above, requires some clarification. If there are missing chunks, the very definition of how large we expect subsequent chunks to be is ill-defined. I took a fairly conservative approach to avoid lots of bogus complaints about chunks that are of unexpected size. Not all such complaints are removed, but enough are removed that I needed to add a final complaint at the end about the total size seen not matching the total size expected.\n\nMy instinct is to suppose that the size that we expect for future\nchunks is independent of anything being wrong with previous chunks. So\nif each chunk is supposed to be 2004 bytes (which probably isn't the\nreal number) and the value is 7000 bytes long, we expect chunks 0-2 to\nbe 2004 bytes each, chunk 3 to be 988 bytes, and chunk 4 and higher to\nnot exist. If chunk 1 happens to be missing or the wrong length or\nwhatever, our expectations for chunks 2 and 3 are utterly unchanged.\n\n> Corruption #1:\n>\n> UPDATE $toastname SET chunk_seq = chunk_seq + 1000\n>\n> Before:\n>\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 chunk 0 has sequence number 1000, but expected sequence number 0\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 chunk 1 has sequence number 1001, but expected sequence number 1\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 chunk 2 has sequence number 1002, but expected sequence number 2\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 chunk 3 has sequence number 1003, but expected sequence number 3\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 chunk 4 has sequence number 1004, but expected sequence number 4\n> # heap 
table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 chunk 5 has sequence number 1005, but expected sequence number 5\n>\n> After:\n>\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n> # toast value 16445 missing chunks 0 through 999\n\nApplying the above principle would lead to complaints that chunks 0-5\nare missing, and 1000-1005 are extra.\n\n> Corruption #2:\n>\n> UPDATE $toastname SET chunk_seq = chunk_seq * 1000\n\nSimilarly here, except the extra chunk numbers are different.\n\n> Corruption #3:\n>\n> UPDATE $toastname SET chunk_id = (chunk_id::integer + 10000000)::oid WHERE chunk_seq = 3\n\nAnd here we'd just get a complaint that chunk 3 is missing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Apr 2021 16:51:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 9, 2021, at 1:51 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Apr 9, 2021 at 2:50 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> I think #4, above, requires some clarification. If there are missing chunks, the very definition of how large we expect subsequent chunks to be is ill-defined. I took a fairly conservative approach to avoid lots of bogus complaints about chunks that are of unexpected size. Not all such complaints are removed, but enough are removed that I needed to add a final complaint at the end about the total size seen not matching the total size expected.\n> \n> My instinct is to suppose that the size that we expect for future\n> chunks is independent of anything being wrong with previous chunks. So\n> if each chunk is supposed to be 2004 bytes (which probably isn't the\n> real number) and the value is 7000 bytes long, we expect chunks 0-2 to\n> be 2004 bytes each, chunk 3 to be 988 bytes, and chunk 4 and higher to\n> not exist. 
If chunk 1 happens to be missing or the wrong length or\n> whatever, our expectations for chunks 2 and 3 are utterly unchanged.\n\nFair enough.\n\n>> Corruption #1:\n>> \n>> UPDATE $toastname SET chunk_seq = chunk_seq + 1000\n>> \n>> Before:\n>> \n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 chunk 0 has sequence number 1000, but expected sequence number 0\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 chunk 1 has sequence number 1001, but expected sequence number 1\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 chunk 2 has sequence number 1002, but expected sequence number 2\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 chunk 3 has sequence number 1003, but expected sequence number 3\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 chunk 4 has sequence number 1004, but expected sequence number 4\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 chunk 5 has sequence number 1005, but expected sequence number 5\n>> \n>> After:\n>> \n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 2, attribute 2:\n>> # toast value 16445 missing chunks 0 through 999\n> \n> Applying the above principle would lead to complaints that chunks 0-5\n> are missing, and 1000-1005 are extra.\n\nThat sounds right. 
It now reports:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 16, attribute 2:\n# toast value 16459 missing chunks 0 through 4 with expected size 1996 and chunk 5 with expected size 20\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 16, attribute 2:\n# toast value 16459 unexpected chunks 1000 through 1004 each with size 1996 followed by chunk 1005 with size 20\n\n\n>> Corruption #2:\n>> \n>> UPDATE $toastname SET chunk_seq = chunk_seq * 1000\n> \n> Similarly here, except the extra chunk numbers are different.\n\nIt now reports:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 17, attribute 2:\n# toast value 16460 missing chunks 1 through 4 with expected size 1996 and chunk 5 with expected size 20\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 17, attribute 2:\n# toast value 16460 unexpected chunk 1000 with size 1996\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 17, attribute 2:\n# toast value 16460 unexpected chunk 2000 with size 1996\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 17, attribute 2:\n# toast value 16460 unexpected chunk 3000 with size 1996\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 17, attribute 2:\n# toast value 16460 unexpected chunk 4000 with size 1996\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 17, attribute 2:\n# toast value 16460 unexpected chunk 5000 with size 20\n\nI don't see any good way in this case to report the extra chunks in one row, as in the general case there could be arbitrarily many of them, with the message text getting arbitrarily large if it reported the chunks as \"chunks 1000, 2000, 3000, 4000, 5000, ...\". 
I don't expect this sort of corruption being particularly common, though, so I'm not too bothered about it.\n\n> \n>> Corruption #3:\n>> \n>> UPDATE $toastname SET chunk_id = (chunk_id::integer + 10000000)::oid WHERE chunk_seq = 3\n> \n> And here we'd just get a complaint that chunk 3 is missing.\n\nIt now reports:\n\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 18, attribute 2:\n# toast value 16461 missing chunk 3 with expected size 1996\n# heap table \"postgres\".\"public\".\"test\", block 0, offset 18, attribute 2:\n# toast value 16461 was expected to end at chunk 6 with total size 10000, but ended at chunk 5 with total size 8004\n\nIt sounds like you weren't expecting the second of these reports. I think it is valuable, especially when there are multiple missing chunks and multiple extraneous chunks, as it makes it easier for the user to reconcile the missing chunks against the extraneous chunks.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 12 Apr 2021 20:06:09 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 11:06 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> It now reports:\n>\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 18, attribute 2:\n> # toast value 16461 missing chunk 3 with expected size 1996\n> # heap table \"postgres\".\"public\".\"test\", block 0, offset 18, attribute 2:\n> # toast value 16461 was expected to end at chunk 6 with total size 10000, but ended at chunk 5 with total size 8004\n>\n> It sounds like you weren't expecting the second of these reports. I think it is valuable, especially when there are multiple missing chunks and multiple extraneous chunks, as it makes it easier for the user to reconcile the missing chunks against the extraneous chunks.\n\nI wasn't, but I'm not overwhelmingly opposed to it, either. I do think\nI would be in favor of splitting this kind of thing up into two\nmessages:\n\n# toast value 16459 unexpected chunks 1000 through 1004 each with\nsize 1996 followed by chunk 1005 with size 20\n\nWe'll have fewer message variants and I don't think any real\nregression in usability if we say:\n\n# toast value 16459 has unexpected chunks 1000 through 1004 each\nwith size 1996\n# toast value 16459 has unexpected chunk 1005 with size 20\n\n(Notice that I also inserted \"has\" so that the sentence has a verb. Or we\ncould use \"contains.\")\n\nI committed 0001.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Apr 2021 13:17:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 14, 2021, at 10:17 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Apr 12, 2021 at 11:06 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> It now reports:\n>> \n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 18, attribute 2:\n>> # toast value 16461 missing chunk 3 with expected size 1996\n>> # heap table \"postgres\".\"public\".\"test\", block 0, offset 18, attribute 2:\n>> # toast value 16461 was expected to end at chunk 6 with total size 10000, but ended at chunk 5 with total size 8004\n>> \n>> It sounds like you weren't expecting the second of these reports. I think it is valuable, especially when there are multiple missing chunks and multiple extraneous chunks, as it makes it easier for the user to reconcile the missing chunks against the extraneous chunks.\n> \n> I wasn't, but I'm not overwhelmingly opposed to it, either. I do think\n> I would be in favor of splitting this kind of thing up into two\n> messages:\n> \n> # toast value 16459 unexpected chunks 1000 through 1004 each with\n> size 1996 followed by chunk 1005 with size 20\n> \n> We'll have fewer message variants and I don't think any real\n> regression in usability if we say:\n> \n> # toast value 16459 has unexpected chunks 1000 through 1004 each\n> with size 1996\n> # toast value 16459 has unexpected chunk 1005 with size 20\n\nChanged.\n\n> (Notice that I also inserted \"has\" so that the sentence a verb. Or we\n> could \"contains.\")\n\nI have added the verb \"has\" rather than \"contains\" because \"has\" is more consistent with the phrasing of other similar corruption reports.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 15 Apr 2021 10:07:16 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 15, 2021 at 1:07 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I have added the verb \"has\" rather than \"contains\" because \"has\" is more consistent with the phrasing of other similar corruption reports.\n\nThat makes sense.\n\nI think it's odd that a range of extraneous chunks is collapsed into a\nsingle report if the size of each chunk happens to be\nTOAST_MAX_CHUNK_SIZE and not otherwise. Why not just remember the\nfirst and last extraneous chunk and the size of each? If the next\nchunk you see is the next one in sequence and the same size as all the\nothers, extend your notion of the sequence end by 1. Otherwise, report\nthe range accumulated so far. It seems to me that this wouldn't be any\nmore code than you have now, and might actually be less.\n\nI think that report_missing_chunks() could probably just report the\nrange of missing chunks and not bother reporting how big they were\nexpected to be. But, if it is going to report how big they were\nexpected to be, I think it should have only 2 cases rather than 4:\neither a range of missing chunks of equal size, or a single missing\nchunk of some size. If, as I propose, it doesn't report the expected\nsize, then you still have just 2 cases: a range of missing chunks, or\na single missing chunk.\n\nSomehow I have a hard time feeling confident that check_toast_tuple()\nis going to do the right thing. The logic is really complex and hard\nto understand. 'chunkno' is a counter that advances every time we move\nto the next chunk, and 'curchunk' is the value we actually find in the\nTOAST tuple. This terminology is not easy to understand. Most messages\nnow report 'curchunk', but some still report 'chunkno'. Why does\n'chunkno' need to exist at all? AFAICS the combination of 'curchunk'\nand 'tctx->last_chunk_seen' ought to be sufficient. I can see no\nparticular reason why what you're calling 'chunkno' needs to exist\neven as a local variable, let alone be printed out. 
Either we haven't\nyet validated that the chunk_id extracted from the tuple is non-null\nand greater than the last chunk number we saw, in which case we can\njust complain about it if we find it to be otherwise, or we have\nalready done that validation, in which case we should complain about\nthat value and not 'chunkno' in any subsequent messages.\n\nThe conditionals between where you set max_valid_prior_chunk and where\nyou set last_chunk_seen seem hard to understand, particularly the\nbifurcated way that missing chunks are reported. Initial missing\nchunks are detected by (chunkno == 0 && max_valid_prior_chunk >= 0)\nand later missing chunks are detected by (chunkno > 0 &&\nmax_valid_prior_chunk > tctx->last_chunk_seen). I'm not sure if this\nis correct; I find it hard to get my head around what\nmax_valid_prior_chunk is supposed to represent. But in any case I\nthink it can be written more simply. Just keep track of what chunk_id\nwe expect to extract from the next TOAST tuple. Initially it's 0.\nThen:\n\nif (chunkno < tctx->expected_chunkno)\n{\n // toast value %u index scan returned chunk number %d when chunk %d\nwas expected\n // don't modify tctx->expected_chunkno here, just hope the next\nthing matches our previous expectation\n}\nelse\n{\n if (chunkno > tctx->expected_chunkno)\n // chunks are missing from tctx->expected_chunkno through\nMin(chunkno - 1, tctx->final_expected_chunk), provided that the latter\nvalue is greater than or equal to the former\n tctx->expected_chunkno = chunkno + 1;\n}\n\nIf you do this, you only need to report extraneous chunks when chunkno\n> tctx->final_expected_chunk, since chunkno < 0 is guaranteed to\ntrigger the first of the two complaints shown above.\n\nIn check_tuple_attribute I suggest \"bool valid = false\" rather than\n\"bool invalid = true\". I think it's easier to understand.\n\nI object to check_toasted_attribute() using 'chunkno' in a message for\nthe same reasons as above in regards to check_toast_tuple() i.e. 
I\nthink it's a concept which should not exist.\n\nI think this patch could possibly be split up into multiple patches.\nThere's some question in my mind whether it's getting too late to\ncommit any of this, since some of it looks suspiciously like new\nfeatures after feature freeze. However, I kind of hate to ship this\nrelease without at least doing something about the chunkno vs.\ncurchunk stuff, which is even worse in the committed code than in your\npatch, and which I think will confuse the heck out of users if those\nmessages actually fire for anyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 15:50:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 19, 2021, at 12:50 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 15, 2021 at 1:07 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I have added the verb \"has\" rather than \"contains\" because \"has\" is more consistent with the phrasing of other similar corruption reports.\n> \n> That makes sense.\n> \n> I think it's odd that a range of extraneous chunks is collapsed into a\n> single report if the size of each chunk happens to be\n> TOAST_MAX_CHUNK_SIZE and not otherwise. Why not just remember the\n> first and last extraneous chunk and the size of each? If the next\n> chunk you see is the next one in sequence and the same size as all the\n> others, extend your notion of the sequence end by 1. Otherwise, report\n> the range accumulated so far. It seems to me that this wouldn't be any\n> more code than you have now, and might actually be less.\n\nIn all cases of uncorrupted toasted attributes, the sequence of N chunks that make up the attribute should be N-1 chunks of TOAST_MAX_CHUNK_SIZE ending with a single chunk of up to TOAST_MAX_CHUNK_SIZE. I'd like to refer to such sequences as \"reasonably sized\" sequences to make conversation easier.\n\nIf the toast pointer's va_extsize field leads us to believe that we should find 10 reasonably sized chunks, but instead we find 30 reasonably sized chunks, we know something is corrupt. We shouldn't automatically prejudice the user against the additional 20 chunks. We didn't expect them, but maybe that's because va_extsize was corrupt and gave us a false expectation. We're not pointing fingers one way or the other.\n\nOn the other hand, if we expect 10 chunks and find an additional 20 unreasonably sized chunks, we can and should point fingers at the extra 20 chunks. 
Even if we somehow knew that va_extsize was also corrupt, we'd still be justified in saying the 20 unreasonably sized chunks are each individually corrupt.\n\nI tried to write the code to report one corruption message per corruption found. There are some edge cases where this is a definitional challenge, so it's not easy to say that I've always achieved this goal, but I think I've done so where the definitions are clear. As such, the only time I'd want to combine toast chunks into a single corruption message is when they are not in themselves necessarily *individually* corrupt. That is why I wrote the code to use TOAST_MAX_CHUNK_SIZE rather than just storing up any series of equally sized chunks.\n\nOn a related note, when complaining about a sequence of toast chunks, often the sequence is something like [maximal, maximal, ..., maximal, partial], but sometimes it's just [maximal...maximal], sometimes just [maximal], and sometimes just [partial]. If I'm complaining about that entire sequence, I'd really like to do so in just one message, otherwise it looks like separate complaints.\n\nI can certainly change the code to be how you are asking, but I'd first like to know that you really understood what I was doing here and why the reports read the way they do.\n\n> I think that report_missing_chunks() could probably just report the\n> range of missing chunks and not bother reporting how big they were\n> expected to be. But, if it is going to report how big they were\n> expected to be, I think it should have only 2 cases rather than 4:\n> either a range of missing chunks of equal size, or a single missing\n> chunk of some size. If, as I propose, it doesn't report the expected\n> size, then you still have just 2 cases: a range of missing chunks, or\n> a single missing chunk.\n\nRight, this is the same as above. 
I'm trying not to split a single corruption complaint into separate reports.\n\n> Somehow I have a hard time feeling confident that check_toast_tuple()\n> is going to do the right thing. The logic is really complex and hard\n> to understand. 'chunkno' is a counter that advances every time we move\n> to the next chunk, and 'curchunk' is the value we actually find in the\n> TOAST tuple. This terminology is not easy to understand. Most messages\n> now report 'curchunk', but some still report 'chunkno'. Why does\n> 'chunkno' need to exist at all? AFAICS the combination of 'curchunk'\n> and 'tctx->last_chunk_seen' ought to be sufficient. I can see no\n> particular reason why what you're calling 'chunkno' needs to exist\n> even as a local variable, let alone be printed out. Either we haven't\n> yet validated that the chunk_id extracted from the tuple is non-null\n> and greater than the last chunk number we saw, in which case we can\n> just complain about it if we find it to be otherwise, or we have\n> already done that validation, in which case we should complain about\n> that value and not 'chunkno' in any subsequent messages.\n\nIf we use tctx->last_chunk_seen as you propose, I imagine we'd set that to -1 prior to the first call to check_toast_tuple(). In the first call, we'd extract the toast chunk_seq and store it in curchunk and verify that it's one greater than tctx->last_chunk_seen. That all seems fine.\n\nBut under corrupt conditions, curchunk = DatumGetInt32(fastgetattr(toasttup, 2, hctx->toast_rel->rd_att, &isnull)) could return -1. That's invalid, of course, but now we don't know what to do. We're supposed to complain when we get the same chunk_seq from the index scan more than once in a row, but we don't know if the value in last_chunk_seen is a real value or just the dummy initial value. 
Worse still, when we get the next toast tuple back and it has a chunk_seq of -2, we want to complain that the index is returning tuples in reverse order, but we can't, because we still don't know if the -1 in last_chunk_seen is legitimate or a dummy value because that state information isn't carried over from the previous call.\n\nUsing chunkno solves this problem. If chunkno == 0, it means this is our first call, and tctx->last_chunk_seen is uninitialized. Otherwise, this is not the first call, and tctx->last_chunk_seen really is the chunk_seq seen in the prior call. There is no ambiguity.\n\nI could probably change \"int chunkno\" to \"bool is_first_call\" or similar. I had previously used chunkno in the corruption report about chunks whose chunk_seq is null. The idea was that if you have 100 chunks and the 30th chunk is corruptly nulled out, you could say something like \"toast value 178337 has toast chunk 30 with null sequence number\", but you had me change that to \"toast value 178337 has toast chunk with null sequence number\", so generation of that message no longer needs the chunkno. I had kept chunkno around for the other purpose of knowing whether tctx->last_chunk_seen has been initialized yet, but a bool for that would now be sufficient. In any event, though you disagree with me about this below, I think the caller of this code still needs to track chunkno.\n\n> The conditionals between where you set max_valid_prior_chunk and where\n> you set last_chunk_seen seem hard to understand, particularly the\n> bifurcated way that missing chunks are reported. Initial missing\n> chunks are detected by (chunkno == 0 && max_valid_prior_chunk >= 0)\n> and later missing chunks are detected by (chunkno > 0 &&\n> max_valid_prior_chunk > tctx->last_chunk_seen). I'm not sure if this\n> is correct;\n\nWhen we read a chunk_seq from a toast tuple, we need to determine if it indicates a gap in the chunk sequence, but we need to be careful. 
\n\nThe (chunkno == 0) and (chunkno > 0) stuff is just distinguishing between the first call and all subsequent calls.\n\nFor illustrative purposes, imagine that we expect chunks [0..4].\n\nOn the first call, we expect chunk_seq = 0, but that's not what we actually complain about if we get chunk_seq = 15. We complain about all missing expected chunks, namely [0..4], not [0..14]. We also don't complain yet about seeing extraneous chunk 15, because it might be the first in a series of contiguous extraneous chunks, and we want to wait and report those all at once when the sequence finishes. Simply complaining at this point that we didn't expect to see chunk_seq 15 is the kind of behavior that we already have committed and are trying to fix because the corruption reports are not on point.\n\nOn subsequent calls, we expect chunk_seq = last_chunk_seen+1, but that's also not what we actually complain about if we get some other value for chunk_seq. What we complain about are the missing and extraneous sequences, not the individual chunk that had an unexpected value.\n\n> I find it hard to get my head around what\n> max_valid_prior_chunk is supposed to represent. But in any case I\n> think it can be written more simply. Just keep track of what chunk_id\n> we expect to extract from the next TOAST tuple. 
Initially it's 0.\n> Then:\n> \n> if (chunkno < tctx->expected_chunkno)\n> {\n> // toast value %u index scan returned chunk number %d when chunk %d\n> was expected\n> // don't modify tctx->expected_chunkno here, just hope the next\n> thing matches our previous expectation\n> }\n> else\n> {\n> if (chunkno > tctx->expected_chunkno)\n> // chunks are missing from tctx->expected_chunkno through\n> Min(chunkno - 1, tctx->final_expected_chunk), provided that the latter\n> value is greater than or equal to the former\n> tctx->expected_chunkno = chunkno + 1;\n> }\n> \n> If you do this, you only need to report extraneous chunks when chunkno\n>> tctx->final_expected_chunk, since chunkno < 0 is guaranteed to\n> trigger the first of the two complaints shown above.\n\nIn the example above, if we're expecting chunks [0..4] and get chunk_seq = 5, the max_valid_prior_chunk is 4. If we instead get chunk_seq = 6, the max_valid_prior_chunk is still 4, because chunk 5 is out of bounds.\n\n> In check_tuple_attribute I suggest \"bool valid = false\" rather than\n> \"bool invalid = true\". I think it's easier to understand.\n\nYeah, I had it that way and changed it, because I don't much like having the only use of a boolean be a negation.\n\n bool foo = false; ... if (!foo) { ... }\n\nseems worse to me than\n\n bool foo = true; ... if (foo) { ... }\n\nBut you're looking at it more from the perspective of English grammar, where \"invalid = false\" reads as a double-negative. That's fine. I can change it back.\n\n> I object to check_toasted_attribute() using 'chunkno' in a message for\n> the same reasons as above in regards to check_toast_tuple() i.e. I\n> think it's a concept which should not exist.\n\nSo if we expect 100 chunks, get chunks [0..19, 80..99], you'd have me write the message as \"expected 100 chunks but sequence ended at chunk 99\"? I think that's odd. It makes infinitely more sense to me to say \"expected 100 chunks but sequence ended at chunk 40\". 
Actually, this is an argument against changing \"int chunkno\" to \"bool is_first_call\", as I alluded to above, because we have to keep the chunkno around anyway.\n\n> I think this patch could possibly be split up into multiple patches.\n> There's some question in my mind whether it's getting too late to\n> commit any of this, since some of it looks suspiciously like new\n> features after feature freeze. However, I kind of hate to ship this\n> release without at least doing something about the chunkno vs.\n> curchunk stuff, which is even worse in the committed code than in your\n> patch, and which I think will confuse the heck out of users if those\n> messages actually fire for anyone.\n\nI'm in favor of cleaning up the committed code to have easier to understand output. I don't really agree with any of your proposed changes to my patch, though, which is I think a first.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 19 Apr 2021 17:07:58 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 19, 2021, at 5:07 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Apr 19, 2021, at 12:50 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n>> \n>> On Thu, Apr 15, 2021 at 1:07 PM Mark Dilger\n>> <mark.dilger@enterprisedb.com> wrote:\n>>> I have added the verb \"has\" rather than \"contains\" because \"has\" is more consistent with the phrasing of other similar corruption reports.\n>> \n>> That makes sense.\n\nI have refactored the patch to address your other concerns. Breaking the patch into multiple pieces didn't add any clarity, but refactoring portions of it made things simpler to read, I think, so here it is as one patch file.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 22 Apr 2021 16:28:12 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 7:28 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I have refactored the patch to address your other concerns. Breaking the patch into multiple pieces didn't add any clarity, but refactoring portions of it made things simpler to read, I think, so here it is as one patch file.\n\nI was hoping that this version was going to be smaller than the last\nversion, but instead it went from 300+ lines to 500+ lines.\n\nThe main thing I'm unhappy about in the status quo is the use of\nchunkno in error messages. I have suggested several times making that\nconcept go away, because I think users will be confused. Here's a\nminimal patch that does just that. It's 32 lines and results in a net\nremoval of 4 lines. It differs somewhat from my earlier suggestions,\nbecause my priority here is to get reasonably understandable output\nwithout needing a ton of code, and as I was working on this I found\nthat some of my earlier suggestions would have needed more code to\nimplement and I didn't think it bought enough to be worth it. It's\npossible this is too simple, or that it's buggy, so let me know what\nyou think. But basically, I think what got committed before is\nactually mostly fine and doesn't need major revision. It just needs\ntidying up to avoid the confusing chunkno concept.\n\nNow, the other thing we've talked about is adding a few more checks,\nto verify for example that the toastrelid is what we expect, and I see\nin your v22 you thought of a few other things. I think we can consider\nthose, possibly as things where we consider it tidying up loose ends\nfor v14, or else as improvements for v15. But I don't think that the\nfairly large size of your patch comes primarily from additional\nchecks. I think it mostly comes from the code to produce error reports\ngetting a lot more complicated. 
I apologize if my comments have driven\nthat complexity, but they weren't intended to.\n\nOne tiny problem with the attached patch is that it does not make any\nregression tests fail, which also makes it hard for me to tell if it\nbreaks anything, or if the existing code works. I don't know how\npractical it is to do anything about that. Do you have a patch handy\nthat allows manual updates and deletes on TOAST tables, for manual\ntesting purposes?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 23 Apr 2021 13:28:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 23, 2021, at 10:28 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 22, 2021 at 7:28 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I have refactored the patch to address your other concerns. Breaking the patch into multiple pieces didn't add any clarity, but refactoring portions of it made things simpler to read, I think, so here it is as one patch file.\n> \n> I was hoping that this version was going to be smaller than the last\n> version, but instead it went from 300+ lines to 500+ lines.\n> \n> The main thing I'm unhappy about in the status quo is the use of\n> chunkno in error messages. I have suggested several times making that\n> concept go away, because I think users will be confused. Here's a\n> minimal patch that does just that. It's 32 lines and results in a net\n> removal of 4 lines. It differs somewhat from my earlier suggestions,\n> because my priority here is to get reasonably understandable output\n> without needing a ton of code, and as I was working on this I found\n> that some of my earlier suggestions would have needed more code to\n> implement and I didn't think it bought enough to be worth it. It's\n> possible this is too simple, or that it's buggy, so let me know what\n> you think. But basically, I think what got committed before is\n> actually mostly fine and doesn't need major revision. It just needs\n> tidying up to avoid the confusing chunkno concept.\n> \n> Now, the other thing we've talked about is adding a few more checks,\n> to verify for example that the toastrelid is what we expect, and I see\n> in your v22 you thought of a few other things. I think we can consider\n> those, possibly as things where we consider it tidying up loose ends\n> for v14, or else as improvements for v15. But I don't think that the\n> fairly large size of your patch comes primarily from additional\n> checks. 
I think it mostly comes from the code to produce error reports\n> getting a lot more complicated. I apologize if my comments have driven\n> that complexity, but they weren't intended to.\n> \n> One tiny problem with the attached patch is that it does not make any\n> regression tests fail, which also makes it hard for me to tell if it\n> breaks anything, or if the existing code works. I don't know how\n> practical it is to do anything about that. Do you have a patch handy\n> that allows manual updates and deletes on TOAST tables, for manual\n> testing purposes?\n\nYes, I haven't been posting that with the patch, but I will test your patch and see what differs.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 10:31:16 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 23, 2021, at 10:31 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I will test your patch and see what differs.\n\nHere are the differences between master and you patch:\n\nUPDATE $toastname SET chunk_seq = chunk_seq + 1000 WHERE chunk_id = $value_id_to_corrupt\n\n- qr/${header}toast value 16459 chunk 0 has sequence number 1000, but expected sequence number 0/,\n- qr/${header}toast value 16459 chunk 1 has sequence number 1001, but expected sequence number 1/,\n- qr/${header}toast value 16459 chunk 2 has sequence number 1002, but expected sequence number 2/,\n- qr/${header}toast value 16459 chunk 3 has sequence number 1003, but expected sequence number 3/,\n- qr/${header}toast value 16459 chunk 4 has sequence number 1004, but expected sequence number 4/,\n- qr/${header}toast value 16459 chunk 5 has sequence number 1005, but expected sequence number 5/;\n+ qr/${header}toast value 16459 index scan returned chunk 1000 when expecting chunk 0/,\n+ qr/${header}toast value 16459 chunk 1000 follows last expected chunk 5/,\n+ qr/${header}toast value 16459 chunk 1001 follows last expected chunk 5/,\n+ qr/${header}toast value 16459 chunk 1002 follows last expected chunk 5/,\n+ qr/${header}toast value 16459 chunk 1003 follows last expected chunk 5/,\n+ qr/${header}toast value 16459 chunk 1004 follows last expected chunk 5/,\n+ qr/${header}toast value 16459 chunk 1005 follows last expected chunk 5/;\n\nUPDATE $toastname SET chunk_seq = chunk_seq * 1000 WHERE chunk_id = $value_id_to_corrupt\n\n- qr/${header}toast value $value_id_to_corrupt chunk 1 has sequence number 1000, but expected sequence number 1/,\n- qr/${header}toast value $value_id_to_corrupt chunk 2 has sequence number 2000, but expected sequence number 2/,\n- qr/${header}toast value $value_id_to_corrupt chunk 3 has sequence number 3000, but expected sequence number 3/,\n- qr/${header}toast value $value_id_to_corrupt chunk 4 has sequence number 4000, but expected sequence 
number 4/,\n- qr/${header}toast value $value_id_to_corrupt chunk 5 has sequence number 5000, but expected sequence number 5/;\n-\n+ qr/${header}toast value 16460 index scan returned chunk 1000 when expecting chunk 1/,\n+ qr/${header}toast value 16460 chunk 1000 follows last expected chunk 5/,\n+ qr/${header}toast value 16460 index scan returned chunk 2000 when expecting chunk 1001/,\n+ qr/${header}toast value 16460 chunk 2000 follows last expected chunk 5/,\n+ qr/${header}toast value 16460 index scan returned chunk 3000 when expecting chunk 2001/,\n+ qr/${header}toast value 16460 chunk 3000 follows last expected chunk 5/,\n+ qr/${header}toast value 16460 index scan returned chunk 4000 when expecting chunk 3001/,\n+ qr/${header}toast value 16460 chunk 4000 follows last expected chunk 5/,\n+ qr/${header}toast value 16460 index scan returned chunk 5000 when expecting chunk 4001/,\n+ qr/${header}toast value 16460 chunk 5000 follows last expected chunk 5/;\n\nINSERT INTO $toastname (chunk_id, chunk_seq, chunk_data)\n (SELECT chunk_id,\n 10*chunk_seq + 1000,\n chunk_data\n FROM $toastname\n WHERE chunk_id = $value_id_to_corrupt)\n\n- qr/${header}toast value $value_id_to_corrupt chunk 6 has sequence number 1000, but expected sequence number 6/,\n- qr/${header}toast value $value_id_to_corrupt chunk 7 has sequence number 1010, but expected sequence number 7/,\n- qr/${header}toast value $value_id_to_corrupt chunk 8 has sequence number 1020, but expected sequence number 8/,\n- qr/${header}toast value $value_id_to_corrupt chunk 9 has sequence number 1030, but expected sequence number 9/,\n- qr/${header}toast value $value_id_to_corrupt chunk 10 has sequence number 1040, but expected sequence number 10/,\n- qr/${header}toast value $value_id_to_corrupt chunk 11 has sequence number 1050, but expected sequence number 11/,\n- qr/${header}toast value $value_id_to_corrupt was expected to end at chunk 6, but ended at chunk 12/;\n+ qr/${header}toast value $value_id_to_corrupt index scan 
returned chunk 1000 when expecting chunk 6/,\n+ qr/${header}toast value $value_id_to_corrupt chunk 1000 follows last expected chunk 5/,\n+ qr/${header}toast value $value_id_to_corrupt index scan returned chunk 1010 when expecting chunk 1001/,\n+ qr/${header}toast value $value_id_to_corrupt chunk 1010 follows last expected chunk 5/,\n+ qr/${header}toast value $value_id_to_corrupt index scan returned chunk 1020 when expecting chunk 1011/,\n+ qr/${header}toast value $value_id_to_corrupt chunk 1020 follows last expected chunk 5/,\n+ qr/${header}toast value $value_id_to_corrupt index scan returned chunk 1030 when expecting chunk 1021/,\n+ qr/${header}toast value $value_id_to_corrupt chunk 1030 follows last expected chunk 5/,\n+ qr/${header}toast value $value_id_to_corrupt index scan returned chunk 1040 when expecting chunk 1031/,\n+ qr/${header}toast value $value_id_to_corrupt chunk 1040 follows last expected chunk 5/,\n+ qr/${header}toast value $value_id_to_corrupt index scan returned chunk 1050 when expecting chunk 1041/,\n+ qr/${header}toast value $value_id_to_corrupt chunk 1050 follows last expected chunk 5/;\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 11:05:08 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
    "msg_contents": "On Fri, Apr 23, 2021 at 2:05 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Here are the differences between master and your patch:\n\nThanks. Those messages look reasonable to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Apr 2021 14:10:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
    "msg_contents": "\n\n> On Apr 23, 2021, at 11:05 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Here are the differences between master and your patch:\n\nAnother difference I should probably mention is that a bunch of unrelated tests are failing with errors like:\n\n    toast value 13465 chunk 0 has size 1995, but expected size 1996\n\nwhich leads me to suspect your changes to how the size is calculated.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 11:15:19 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 23, 2021 at 2:15 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Another difference I should probably mention is that a bunch of unrelated tests are failing with errors like:\n>\n> toast value 13465 chunk 0 has size 1995, but expected size 1996\n>\n> which leads me to suspect your changes to how the size is calculated.\n\nThat seems like a pretty reasonable suspicion, but I can't see the problem:\n\n- expected_size = curchunk < endchunk ? TOAST_MAX_CHUNK_SIZE\n- : VARATT_EXTERNAL_GET_EXTSIZE(ta->toast_pointer) -\n(endchunk * TOAST_MAX_CHUNK_SIZE);\n+ expected_size = chunk_seq < last_chunk_seq ? TOAST_MAX_CHUNK_SIZE\n+ : extsize % TOAST_MAX_CHUNK_SIZE;\n\nWhat's different?\n\n1. The variables are renamed.\n\n2. It uses a new variable extsize instead of recomputing\nVARATT_EXTERNAL_GET_EXTSIZE(ta->toast_pointer), but I think that\nshould have the same value.\n\n3. I used modulo arithmetic (%) instead of subtracting endchunk *\nTOAST_MAX_CHUNK_SIZE.\n\nIs TOAST_MAX_CHUNK_SIZE 1996? How long a value did you insert?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Apr 2021 14:29:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 23, 2021, at 11:29 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Apr 23, 2021 at 2:15 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Another difference I should probably mention is that a bunch of unrelated tests are failing with errors like:\n>> \n>> toast value 13465 chunk 0 has size 1995, but expected size 1996\n>> \n>> which leads me to suspect your changes to how the size is calculated.\n> \n> That seems like a pretty reasonable suspicion, but I can't see the problem:\n> \n> - expected_size = curchunk < endchunk ? TOAST_MAX_CHUNK_SIZE\n> - : VARATT_EXTERNAL_GET_EXTSIZE(ta->toast_pointer) -\n> (endchunk * TOAST_MAX_CHUNK_SIZE);\n> + expected_size = chunk_seq < last_chunk_seq ? TOAST_MAX_CHUNK_SIZE\n> + : extsize % TOAST_MAX_CHUNK_SIZE;\n> \n> What's different?\n> \n> 1. The variables are renamed.\n> \n> 2. It uses a new variable extsize instead of recomputing\n> VARATT_EXTERNAL_GET_EXTSIZE(ta->toast_pointer), but I think that\n> should have the same value.\n> \n> 3. I used modulo arithmetic (%) instead of subtracting endchunk *\n> TOAST_MAX_CHUNK_SIZE.\n> \n> Is TOAST_MAX_CHUNK_SIZE 1996? How long a value did you insert?\n\nOn my laptop, yes, 1996 is TOAST_MAX_CHUNK_SIZE.\n\nI'm not inserting anything. These failures come from just regular tests that I have not changed. I just applied your patch and ran `make check-world` and these fail in src/bin/pg_amcheck \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 11:32:39 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
    "msg_contents": "\n\n> On Apr 23, 2021, at 11:29 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> + expected_size = chunk_seq < last_chunk_seq ? TOAST_MAX_CHUNK_SIZE\n> + : extsize % TOAST_MAX_CHUNK_SIZE;\n> \n> What's different?\n\nFor one thing, if a sequence of chunks happens to fit perfectly, the final chunk will have size TOAST_MAX_CHUNK_SIZE, but, given how modulo arithmetic works, the expected size can be no larger than one byte less than that.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 11:36:29 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
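The perfect-fit failure mode called out above is easy to demonstrate in isolation. In this hedged sketch (the helper names are invented for illustration, and the fixed chunk size merely mirrors the 1996-byte value mentioned in the thread), a value that is an exact multiple of the chunk size makes the modulo expression expect a zero-byte final chunk, while the subtraction form yields a full chunk:

```c
#include <stdint.h>

/* Illustrative stand-in for the platform-dependent constant; it is
 * 1996 bytes in the build discussed in this thread. */
#define TOAST_MAX_CHUNK_SIZE 1996u

/* Modulo-based expectation from the patch under discussion: wrong
 * whenever extsize divides evenly, since the remainder is then 0
 * rather than a full chunk. */
static uint32_t
last_chunk_size_modulo(uint32_t extsize)
{
    return extsize % TOAST_MAX_CHUNK_SIZE;
}

/* Subtraction-based expectation: the bytes remaining after all the
 * full-sized chunks that precede the last one. */
static uint32_t
last_chunk_size_subtract(uint32_t extsize, uint32_t last_chunk_seq)
{
    return extsize - last_chunk_seq * TOAST_MAX_CHUNK_SIZE;
}
```

For extsize of exactly two chunks, the modulo form expects 0 bytes in the final chunk while the subtraction form correctly expects TOAST_MAX_CHUNK_SIZE.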
{
"msg_contents": "On Fri, Apr 23, 2021 at 2:36 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > What's different?\n>\n> for one thing, if a sequence of chunks happens to fit perfectly, the final chunk will have size TOAST_MAX_CHUNK_SIZE, but you're expecting no larger than one less than that, given how modulo arithmetic works.\n\nGood point.\n\nPerhaps something like this, closer to the way you had it?\n\n expected_size = chunk_seq < last_chunk_seq ? TOAST_MAX_CHUNK_SIZE\n : extsize - (last_chunk_seq * TOAST_MAX_CHUNK_SIZE);\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Apr 2021 16:31:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 23, 2021, at 1:31 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Perhaps something like this, closer to the way you had it?\n> \n> expected_size = chunk_seq < last_chunk_seq ? TOAST_MAX_CHUNK_SIZE\n> : extsize - (last_chunk_seq * TOAST_MAX_CHUNK_SIZE);\n\nIt still suffers the same failures. I'll try to post something that accomplishes the changes to the reports that you are looking for.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 23 Apr 2021 15:01:32 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 23, 2021, at 3:01 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I'll try to post something that accomplishes the changes to the reports that you are looking for.\n\nThe attached patch changes amcheck corruption reports as discussed upthread. This patch is submitted for the v14 development cycle as a bug fix, per your complaint that the committed code generates reports sufficiently confusing to a user as to constitute a bug.\n\nAll other code refactoring and additional checks discussed upthread are reserved for the v15 development cycle and are not included here.\n\nThe minimal patch (not attached) that does not rename any variables is 135 lines. Your patch was 159 lines. The patch (attached) which includes your variable renaming is 174 lines.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 26 Apr 2021 10:52:34 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 1:52 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The attached patch changes amcheck corruption reports as discussed upthread. This patch is submitted for the v14 development cycle as a bug fix, per your complaint that the committed code generates reports sufficiently confusing to a user as to constitute a bug.\n>\n> All other code refactoring and additional checks discussed upthread are reserved for the v15 development cycle and are not included here.\n>\n> The minimal patch (not attached) that does not rename any variables is 135 lines. Your patch was 159 lines. The patch (attached) which includes your variable renaming is 174 lines.\n\nHi,\n\nI have compared this against my version. I found the following differences:\n\n1. This version passes last_chunk_seq rather than extsize to\ncheck_toast_tuple(). But this results in having to call\nVARATT_EXTERNAL_GET_EXTSIZE() inside that function. I thought it was\nnicer to do that in the caller, so that we don't do it twice.\n\n2. You fixed some out-of-date comments.\n\n3. You move the test for an unexpected chunk sequence further down in\nthe function. I don't see the point; I had put it by the related null\ncheck, and still think that's better. You also deleted my comment /*\nEither the TOAST index is corrupt, or we don't have all chunks. */\nwhich I would have preferred to keep.\n\n4. You don't return if chunk_seq > last_chunk_seq. That seems wrong,\nbecause we cannot compute a sensible expected size in that case. I\nthink your code will subtract a larger value from a smaller one and,\nthis being unsigned arithmetic, say that the expected chunk size is\nsomething gigantic. Returning and not issuing that complaint at all\nseems better.\n\n5. You fixed the incorrect formula I had introduced for the expected\nsize of the last chunk.\n\n6. 
You changed the variable name in check_toasted_attribute() from\nexpected_chunkno to chunkno, and initialized it later in the function\ninstead of at declaration time. I don't find this to be an\nimprovement; including the word \"expected\" seems to me to be\nsubstantially clearer. But I think I should have gone with\nexpected_chunk_seq for better consistency.\n\n7. You restored the message \"toast value %u was expected to end at\nchunk %d, but ended at chunk %d\" which my version deleted. I deleted\nthat message because I thought it was redundant, but I guess it's not:\nthere's nothing else to complain about if the sequence of chunks ends early.\nI think we should change the test from != to < though, because if it's\n>, then we must have already complained about unexpected chunks. Also,\nI think the message is actually wrong, because even though you renamed\nthe variable, it still ends up being the expected next chunkno rather\nthan the last chunkno we actually saw.\n\nPFA my counter-proposal based on the above analysis.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Apr 2021 12:39:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 30, 2021, at 9:39 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Apr 26, 2021 at 1:52 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> The attached patch changes amcheck corruption reports as discussed upthread. This patch is submitted for the v14 development cycle as a bug fix, per your complaint that the committed code generates reports sufficiently confusing to a user as to constitute a bug.\n>> \n>> All other code refactoring and additional checks discussed upthread are reserved for the v15 development cycle and are not included here.\n>> \n>> The minimal patch (not attached) that does not rename any variables is 135 lines. Your patch was 159 lines. The patch (attached) which includes your variable renaming is 174 lines.\n> \n> Hi,\n> \n> I have compared this against my version. I found the following differences:\n\nJust to be clear, I did not use your patch v1 as the starting point. I took the code as committed to master as the starting point, used your corruption report verbiage changes and at least some of your variable naming choices, but did not use the rest, in large part because it didn't work. It caused corruption messages to be reported against tables that have no corruption. For that matter, your v2 patch doesn't work either, and in the same way. To wit:\n\n heap table \"postgres\".\"pg_catalog\".\"pg_rewrite\", block 6, offset 4, attribute 7:\n toast value 13461 chunk 0 has size 1995, but expected size 1996\n\nI think there is something wrong with the way you are trying to calculate and use extsize, because I'm not corrupting pg_catalog.pg_rewrite. You can get these same results by applying your patch to master, building, and running 'make check' from src/bin/pg_amcheck/\n\n\n> 1. This version passes last_chunk_seq rather than extsize to\n> check_toast_tuple(). But this results in having to call\n> VARATT_EXTERNAL_GET_EXTSIZE() inside that function. 
I thought it was\n> nicer to do that in the caller, so that we don't do it twice.\n\nI don't see that VARATT_EXTERNAL_GET_EXTSIZE() is worth too much concern, given that it is just a struct access and a bit mask. You are avoiding calculating that twice, but at the expense of calculating last_chunk_seq twice, which involves division. I don't think the division can be optimized as a mere bit shift, since TOAST_MAX_CHUNK_SIZE is not in general a power of two. (For example, on my laptop it is 1996.)\n\nI don't say this to nitpick at the performance one way vs. the other. I doubt it makes any real difference. I'm just confused why you want to change this particular thing right now, given that it is not a bug.\n\n> 2. You fixed some out-of-date comments.\n\nYes, because they were wrong. That's on me. I failed to update them in a prior patch.\n\n> 3. You move the test for an unexpected chunk sequence further down in\n> the function. I don't see the point;\n\nRelative to your patch, perhaps. Relative to master, no tests have been moved.\n\n> I had put it by the related null\n> check, and still think that's better. You also deleted my comment /*\n> Either the TOAST index is corrupt, or we don't have all chunks. */\n> which I would have preferred to keep.\n\nThat's fine. I didn't mean to remove it. I was just taking a minimalist approach to constructing the patch.\n\n> 4. You don't return if chunk_seq > last_chunk_seq. That seems wrong,\n> because we cannot compute a sensible expected size in that case. I\n> think your code will subtract a larger value from a smaller one and,\n> this being unsigned arithmetic, say that the expected chunk size is\n> something gigantic.\n\nYour conclusion is probably right, but I think your analysis is based on a misreading of what \"last_chunk_seq\" means. It's not the last one seen, but the last one expected. (Should we rename the variable to avoid confusion?) It won't compute a gigantic size. 
Rather, it will expect *every* chunk with chunk_seq >= last_chunk_seq to have whatever size is appropriate for the last chunk. \n\n> Returning and not issuing that complaint at all\n> seems better.\n\nThat might be best. I had been resisting that because I don't want the extraneous chunks to be reported without chunk size information. When debugging corrupted toast, it may be interesting to know the size of the extraneous chunks. If there are 1000 extra chunks, somebody might want to see the sizes of them.\n\n> 5. You fixed the incorrect formula I had introduced for the expected\n> size of the last chunk.\n\nNot really. I just didn't introduce any change in that area.\n\n> 6. You changed the variable name in check_toasted_attribute() from\n> expected_chunkno to chunkno, and initialized it later in the function\n> instead of at declaration time. I don't find this to be an\n> improvement;\n\nI think I just left the variable name and its initialization unchanged.\n\n> including the word \"expected\" seems to me to be\n> substantially clearer. But I think I should have gone with\n> expected_chunk_seq for better consistency.\n\nI agree that is a better name.\n\n> 7. You restored the message \"toast value %u was expected to end at\n> chunk %d, but ended at chunk %d\" which my version deleted. I deleted\n> that message because I thought it was redundant, but I guess it's not:\n> there's nothing else to complain if the sequence of chunks ends early.\n> I think we should change the test from != to < though, because if it's\n>> , then we must have already complained about unexpected chunks.\n\nWe can do it that way if you like. I considered that and had trouble deciding if that made things less clear to users who might be less familiar with the structure of toasted attributes. 
If some of the attributes have that message and others don't, they might conclude that only some of the attributes ended at the wrong chunk and fail to make the inference that to you or me is obvious.\n\n>> Also,\n> I think the message is actually wrong, because even though you renamed\n> the variable, it still ends up being the expected next chunkno rather\n> than the last chunkno we actually saw.\n\nIf we have seen any chunks, the variable is holding the expected next chunk seq, which is one greater than the last chunk seq we saw.\n\nIf we expect chunks 0..3 and see chunk 0 but not chunk 1, it will complain ...\"expected to end at chunk 4, but ended at chunk 1\". This is clearly by design and not merely a bug, though I tend to agree with you that this is a strange wording choice. I can't remember exactly when and how we decided to word the message this way, but it has annoyed me for a while, and I assumed it was something you suggested a while back, because I don't recall doing it. Either way, since you seem to also be bothered by this, I agree we should change it.\n\n> PFA my counter-proposal based on the above analysis.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 30 Apr 2021 11:31:04 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 2:31 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Just to be clear, I did not use your patch v1 as the starting point.\n\nI thought that might be the case, but I was trying to understand what\nyou didn't like about my version, and comparing them seemed like a way\nto figure that out.\n\n> I took the code as committed to master as the starting point, used your corruption report verbiage changes and at least some of your variable naming choices, but did not use the rest, in large part because it didn't work. It caused corruption messages to be reported against tables that have no corruption. For that matter, your v2 patch doesn't work either, and in the same way. To wit:\n>\n> heap table \"postgres\".\"pg_catalog\".\"pg_rewrite\", block 6, offset 4, attribute 7:\n> toast value 13461 chunk 0 has size 1995, but expected size 1996\n>\n> I think there is something wrong with the way you are trying to calculate and use extsize, because I'm not corrupting pg_catalog.pg_rewrite. You can get these same results by applying your patch to master, building, and running 'make check' from src/bin/pg_amcheck/\n\nArgh, OK, I didn't realize. Should be fixed in this version.\n\n> > 4. You don't return if chunk_seq > last_chunk_seq. That seems wrong,\n> > because we cannot compute a sensible expected size in that case. I\n> > think your code will subtract a larger value from a smaller one and,\n> > this being unsigned arithmetic, say that the expected chunk size is\n> > something gigantic.\n>\n> Your conclusion is probably right, but I think your analysis is based on a misreading of what \"last_chunk_seq\" means. It's not the last one seen, but the last one expected. (Should we rename the variable to avoid confusion?) It won't compute a gigantic size. Rather, it will expect *every* chunk with chunk_seq >= last_chunk_seq to have whatever size is appropriate for the last chunk.\n\nI realize it's the last one expected. 
That's the point: we don't have\nany expectation for the sizes of chunks higher than the last one we\nexpected to see. If the value is 2000 bytes and the chunk size is 1996\nbytes, we expect chunk 0 to be 1996 bytes and chunk 1 to be 4 bytes.\nIf not, we can complain. But it makes no sense to complain about chunk\n2 being of a size we don't expect. We don't expect it to exist in the\nfirst place, so we have no notion of what size it ought to be.\n\n> If we have seen any chunks, the variable is holding the expected next chunk seq, which is one greater than the last chunk seq we saw.\n>\n> If we expect chunks 0..3 and see chunk 0 but not chunk 1, it will complain ...\"expected to end at chunk 4, but ended at chunk 1\". This is clearly by design and not merely a bug, though I tend to agree with you that this is a strange wording choice. I can't remember exactly when and how we decided to word the message this way, but it has annoyed me for a while, and I assumed it was something you suggested a while back, because I don't recall doing it. Either way, since you seem to also be bothered by this, I agree we should change it.\n\nCan you review this version?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Apr 2021 14:56:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "> On Apr 30, 2021, at 11:56 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Apr 30, 2021 at 2:31 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Just to be clear, I did not use your patch v1 as the starting point.\n> \n> I thought that might be the case, but I was trying to understand what\n> you didn't like about my version, and comparing them seemed like a way\n> to figure that out.\n> \n>> I took the code as committed to master as the starting point, used your corruption report verbiage changes and at least some of your variable naming choices, but did not use the rest, in large part because it didn't work. It caused corruption messages to be reported against tables that have no corruption. For that matter, your v2 patch doesn't work either, and in the same way. To wit:\n>> \n>> heap table \"postgres\".\"pg_catalog\".\"pg_rewrite\", block 6, offset 4, attribute 7:\n>> toast value 13461 chunk 0 has size 1995, but expected size 1996\n>> \n>> I think there is something wrong with the way you are trying to calculate and use extsize, because I'm not corrupting pg_catalog.pg_rewrite. You can get these same results by applying your patch to master, building, and running 'make check' from src/bin/pg_amcheck/\n> \n> Argh, OK, I didn't realize. Should be fixed in this version.\n> \n>>> 4. You don't return if chunk_seq > last_chunk_seq. That seems wrong,\n>>> because we cannot compute a sensible expected size in that case. I\n>>> think your code will subtract a larger value from a smaller one and,\n>>> this being unsigned arithmetic, say that the expected chunk size is\n>>> something gigantic.\n>> \n>> Your conclusion is probably right, but I think your analysis is based on a misreading of what \"last_chunk_seq\" means. It's not the last one seen, but the last one expected. (Should we rename the variable to avoid confusion?) It won't compute a gigantic size. 
Rather, it will expect *every* chunk with chunk_seq >= last_chunk_seq to have whatever size is appropriate for the last chunk.\n> \n> I realize it's the last one expected. That's the point: we don't have\n> any expectation for the sizes of chunks higher than the last one we\n> expected to see. If the value is 2000 bytes and the chunk size is 1996\n> bytes, we expect chunk 0 to be 1996 bytes and chunk 1 to be 4 bytes.\n> If not, we can complain. But it makes no sense to complain about chunk\n> 2 being of a size we don't expect. We don't expect it to exist in the\n> first place, so we have no notion of what size it ought to be.\n> \n>> If we have seen any chunks, the variable is holding the expected next chunk seq, which is one greater than the last chunk seq we saw.\n>> \n>> If we expect chunks 0..3 and see chunk 0 but not chunk 1, it will complain ...\"expected to end at chunk 4, but ended at chunk 1\". This is clearly by design and not merely a bug, though I tend to agree with you that this is a strange wording choice. I can't remember exactly when and how we decided to word the message this way, but it has annoyed me for a while, and I assumed it was something you suggested a while back, because I don't recall doing it. Either way, since you seem to also be bothered by this, I agree we should change it.\n> \n> Can you review this version?\n> \n> -- \n> Robert Haas\n> EDB: http://www.enterprisedb.com\n> <simply-remove-chunkno-concept-v3.patch>\n\nAs requested off-list, here are NOT FOR COMMIT, WIP patches for testing only.\n\nThe first patch allows toast tables to be updated and adds regression tests of corrupted toasted attributes. 
I never quite got deletes from toast tables to work, and there are probably other gotchas still lurking even with inserts and updates, but it limps along well enough for testing pg_amcheck.\n\nThe second patch updates the expected output of pg_amcheck to match the verbiage that you suggested upthread.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 30 Apr 2021 12:21:09 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 30, 2021, at 11:56 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Can you review this version?\n\nIt looks mostly good to me. There is a off-by-one error introduced with:\n\n- else if (chunkno != (endchunk + 1))\n+ else if (expected_chunk_seq < last_chunk_seq)\n\nI think that needs to be\n\n+ else if (expected_chunk_seq <= last_chunk_seq)\n\nbecause otherwise it won't complain if the only missing chunk is the very last one.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 30 Apr 2021 12:26:36 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 3:26 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> It looks mostly good to me. There is a off-by-one error introduced with:\n>\n> - else if (chunkno != (endchunk + 1))\n> + else if (expected_chunk_seq < last_chunk_seq)\n>\n> I think that needs to be\n>\n> + else if (expected_chunk_seq <= last_chunk_seq)\n>\n> because otherwise it won't complain if the only missing chunk is the very last one.\n\nOK, how about this version?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Apr 2021 15:29:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 30, 2021, at 12:29 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> OK, how about this version?\n\nI think that's committable.\n\nThe only nitpick might be\n\n- psprintf(\"toast value %u was expected to end at chunk %d, but ended at chunk %d\",\n+ psprintf(\"toast value %u index scan ended early while expecting chunk %d of %d\",\n\nWhen reporting to users about positions within a zero-based indexing scheme, what does \"while expecting chunk 3 of 4\" mean? Is it talking about the last chunk from the set [0..3] which has cardinality 4, or does it mean the next-to-last chunk from [0..4] which ends with chunk 4, or what? The prior language isn't any more clear than what you have here, so I have no objection to committing this, but the prior language was probably as goofy as it was because it was trying to deal with this issue.\n\nThoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 30 Apr 2021 12:41:08 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 3:41 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I think that's committable.\n>\n> The only nitpick might be\n>\n> - psprintf(\"toast value %u was expected to end at chunk %d, but ended at chunk %d\",\n> + psprintf(\"toast value %u index scan ended early while expecting chunk %d of %d\",\n>\n> When reporting to users about positions within a zero-based indexing scheme, what does \"while expecting chunk 3 of 4\" mean? Is it talking about the last chunk from the set [0..3] which has cardinality 4, or does it mean the next-to-last chunk from [0..4] which ends with chunk 4, or what? The prior language isn't any more clear than what you have here, so I have no objection to committing this, but the prior language was probably as goofy as it was because it was trying to deal with this issue.\n\nHmm, I think that might need adjustment, actually. What I was trying\nto do is compensate for the fact that what we now have is the next\nchunk_seq value we expect, not the last one we saw, nor the total\nnumber of chunks we've seen regardless of what chunk_seq they had. But\nI thought it would be too confusing to just give the chunk number we\nwere expecting and not say anything about how many chunks we thought\nthere would be in total. So maybe what I should do is change it to\nsomething like this:\n\ntoast value %u was expected to end at chunk %d, but ended while\nexpecting chunk %d\n\ni.e. same as the currently-committed code, except for changing \"ended\nat\" to \"ended while expecting.\"\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Apr 2021 15:47:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 30, 2021, at 12:47 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> Hmm, I think that might need adjustment, actually. What I was trying\n> to do is compensate for the fact that what we now have is the next\n> chunk_seq value we expect, not the last one we saw, nor the total\n> number of chunks we've seen regardless of what chunk_seq they had. But\n> I thought it would be too confusing to just give the chunk number we\n> were expecting and not say anything about how many chunks we thought\n> there would be in total. So maybe what I should do is change it to\n> something like this:\n> \n> toast value %u was expected to end at chunk %d, but ended while\n> expecting chunk %d\n> \n> i.e. same as the currently-committed code, except for changing \"ended\n> at\" to \"ended while expecting.\"\n\nI find the grammar of this new formulation anomalous for hard to articulate reasons not quite the same as but akin to mismatched verb aspect.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 30 Apr 2021 13:04:24 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "\n\n> On Apr 30, 2021, at 1:04 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> toast value %u was expected to end at chunk %d, but ended while\n>> expecting chunk %d\n>> \n>> i.e. same as the currently-committed code, except for changing \"ended\n>> at\" to \"ended while expecting.\"\n> \n> I find the grammar of this new formulation anomalous for hard to articulate reasons not quite the same as but akin to mismatched verb aspect.\n\nAfter further reflection, no other verbiage seems any better. I'd say go ahead and commit it this way.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 30 Apr 2021 13:26:49 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 4:26 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> After further reflection, no other verbiage seems any better. I'd say go ahead and commit it this way.\n\nOK. I'll plan to do that on Monday, barring objections.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Apr 2021 17:07:51 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 5:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 30, 2021 at 4:26 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > After further reflection, no other verbiage seems any better. I'd say go ahead and commit it this way.\n>\n> OK. I'll plan to do that on Monday, barring objections.\n\nDone now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 May 2021 12:34:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_amcheck contrib application"
}
] |
[
{
"msg_contents": "While reviewing the patch to speed up Gist indexes with tsvectors [1] by\nsmarter use of popcount, I was reminded that for hardware popcount, we do\nan indirect function call to execute a single CPU instruction, one word at\na time. I went ahead and did some microbenchmarks to see how much that buys\nus, and if we could do better.\n\nFor the tests below I used the attached C file compiled into a shared\nlibrary with gcc 8.4, measuring popcount on an 8kB buffer, median of 3 runs:\n\nselect drive_popcount64(1000000, 1024);\nselect drive_popcount(1000000, 1024);\n\nNote: pg_popcount64() is passed one 8-byte word, and pg_popcount() is\npassed a buffer, which if properly aligned allows to call the appropriate\nword-at-a-time popcount function.\n\n\nmaster -- indirect call to pg_popcount64_asm(), where available. Note\nthat passing a buffer is faster:\n\npg_popcount64()\n2680ms\n\npg_popcount()\n1640ms\n\n\n0001 ignores assembly and uses direct calls to popcount64_slow(). It is\nquite a bit slower, but notice that passing a buffer to pg_popcont() with\nthe slow implementation is not all that much slower than calling the\nindirect assembly one word at a time (2680ms vs 3190ms):\n\npg_popcount64()\n4930ms\n\npg_popcount()\n3190ms\n\nIt's also worth noting that currently all platforms use an indirect\nfunction call, regardless of instruction support, so the direct call here\nis a small win for non-x86-64 platforms.\n\n\n0002 restores indirection through a function pointer, but chooses the\npass-a-buffer function rather than the retail function, and if assembly is\navailable, calls inlined popcount64_asm(). 
This in itself is not much\nfaster than master (1640ms vs 1430ms):\n\npg_popcount()\n1430ms\n\n\nHowever, 0003 takes the advice of [2] and [3] and uses hand-written\nunrolled assembly, and now we actually have some real improvement, over 3x\nfaster than master:\n\npg_popcount()\n494ms\n\n0004 shows how it would look to use the buffer-passing version for\nbitmapsets and the visibility map. Bitmapsets would benefit from this even\non master, going by the test results above. If we went with something like\n0003, bitmapsets would benefit further automatically. However, counting of\nvisibility map bits is a bit more tricky, since we have to mask off the\nvisible bits and frozen bits to count them separately. 0004 has a simple\nunoptimized function to demonstrate. As I recall, the VM code did not\nbenefit much from the popcount infrastructure to begin with, so a small\nregression might not be noticeable. If it is, it's just a SMOP to offer an\nassembly version here also.\n\nThe motivation for benchmarking this was to have some concrete numbers in\nmind while reviewing gist index creation. If the gist patch can benefit\nfrom any of the above, it's worth considering, as a more holistic approach.\nSince it affects other parts of the system, I wanted to share this in a new\nthread first.\n\n[1] https://commitfest.postgresql.org/32/3023/\n[2] https://danluu.com/assembly-intrinsics/\n[3]\nhttps://stackoverflow.com/questions/25078285/replacing-a-32-bit-loop-counter-with-64-bit-introduces-crazy-performance-deviati\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 3 Mar 2021 12:41:11 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "our use of popcount"
}
] |
[
{
"msg_contents": "Hello all,\r\n\r\nAndrew pointed out elsewhere [1] that it's pretty difficult to add new\r\ncertificates to the test/ssl suite without blowing away the current\r\nstate and starting over. I needed new cases for the NSS backend work,\r\nand ran into the same pain, so here is my attempt to improve the\r\nsituation.\r\n\r\nFor the common case -- adding a new certificate/key pair -- all you\r\nhave to do now is drop one new .config file into the test/ssl\r\ndirectory, add it to either the CLIENTS or SERVERS list, and run `make\r\nsslfiles`. No cleaning necessary.\r\n\r\nThe core architectural addition: by making use of both order-only\r\ndependencies and intermediate file cleanup, the CA state will be\r\nrecreated (exactly once) on demand for each Make run, assign serial\r\nnumbers to new certificates in increasing order, and then be\r\nautomatically removed at the end of the Make run. So it should be much\r\nharder to accumulate junk state during incremental development.\r\n\r\n== Improvements ==\r\n\r\n- The sslfiles target no longer needs to be preceded by sslfiles-clean\r\nto work correctly.\r\n\r\n- I've removed some incorrect dependencies, added missing ones, and\r\nmoved others to order-only (such as the CA state files -- we need them\r\nto exist, but the changes they accumulate should not force other\r\ncertificates to be regenerated).\r\n\r\n- Most of the copy-paste recipes have been consolidated, and some\r\nexisting copy-paste cruft has disappeared as a result. The unused\r\nserver-ss certificate has been removed entirely.\r\n\r\n- Serial number collisions are less likely, thanks to Andrew's idea to\r\nuse the current clock time as the initial serial number in a series.\r\n\r\n- All the .config files are now self-contained (i.e. they contain all\r\nthe required extension information), which simplifies the OpenSSL\r\nrecipes significantly. 
No more -extfile wrangling.\r\n\r\n== Downsides ==\r\n\r\n- I am making _heavy_ use of GNU Make-isms, which does not improve\r\nlong-term maintainability.\r\n\r\n== Possible Future Work ==\r\n\r\n- I haven't quite fixed the dependency situation for the CRL hash\r\ndirectories -- there are situations where they could be incorrectly\r\nremade. (Relying on directories' timestamps is perilous.) But I think I\r\nhave not made the situation worse than it is today.\r\n\r\n- Because all of these generated files are still checked in, if you run\r\n`make sslfiles` after checking out the ssl artifacts directory for the\r\nfirst time, Make may decide to regenerate some files due to the more\r\nrecent timestamps. I don't see an easy way around this. You can reset\r\nMake's view of things with a `touch ssl/*`, but it'd be nice if it\r\ndidn't happen to begin with.\r\n\r\nI recommend using a diff driver for the new certificates and CRLs so\r\nthat you can see the actual changes -- the only things that should have\r\nchanged are the serial numbers, the timestamps, and the signature\r\nblobs.\r\n\r\nWDYT? I missed the boat slightly for the current commitfest, so I'll\r\nadd this patch to the next one.\r\n\r\n--Jacob\r\n\r\n[1] https://www.postgresql.org/message-id/fd96ae76-a8e3-ef8e-a642-a592f5b76771%40dunslane.net",
"msg_date": "Thu, 4 Mar 2021 00:03:36 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Thu, 2021-03-04 at 00:03 +0000, Jacob Champion wrote:\r\n> Hello all,\r\n> \r\n> Andrew pointed out elsewhere [1] that it's pretty difficult to add new\r\n> certificates to the test/ssl suite without blowing away the current\r\n> state and starting over. I needed new cases for the NSS backend work,\r\n> and ran into the same pain, so here is my attempt to improve the\r\n> situation.\r\n\r\nv2 is a rebase to resolve conflicts around SSL compression and the new\r\nclient-dn test case.\r\n\r\n--Jacob",
"msg_date": "Tue, 29 Jun 2021 20:14:12 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 4 Mar 2021, at 01:03, Jacob Champion <pchampion@vmware.com> wrote:\n\n> Andrew pointed out elsewhere [1] that it's pretty difficult to add new\n> certificates to the test/ssl suite without blowing away the current\n> state and starting over. I needed new cases for the NSS backend work,\n> and ran into the same pain, so here is my attempt to improve the\n> situation.\n\nThanks for working on this, I second the pain cited. I've just started to look\nat this, so only a few comments thus far.\n\n> The unused server-ss certificate has been removed entirely.\n\nNice catch, this seems to have been unused since the original import of the SSL\ntest suite. To cut down scope of the patch (even if only a small bit) I\npropose to apply this separately first, as per the attached.\n\n> - Serial number collisions are less likely, thanks to Andrew's idea to\n> use the current clock time as the initial serial number in a series.\n\n+my $serialno = `openssl x509 -serial -noout -in ssl/client.crt`;\n+$serialno =~ s/^serial=//;\n+$serialno = hex($serialno); # OpenSSL prints serial numbers in hexadecimal\n\nWill that work on Windows? We don't currently require the openssl binary to be\nin PATH unless one wants to rebuild sslfiles (which it is quite likely to be\nbut there should at least be errorhandling covering when it's not).\n\n> - I am making _heavy_ use of GNU Make-isms, which does not improve\n> long-term maintainability.\n\nGNU Make is already a requirement, I don't see this shifting the needle in any\ndirection.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 28 Jul 2021 00:24:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Wed, 2021-07-28 at 00:24 +0200, Daniel Gustafsson wrote:\r\n> > On 4 Mar 2021, at 01:03, Jacob Champion <pchampion@vmware.com> wrote:\r\n> > Andrew pointed out elsewhere [1] that it's pretty difficult to add new\r\n> > certificates to the test/ssl suite without blowing away the current\r\n> > state and starting over. I needed new cases for the NSS backend work,\r\n> > and ran into the same pain, so here is my attempt to improve the\r\n> > situation.\r\n> \r\n> Thanks for working on this, I second the pain cited. I've just started to look\r\n> at this, so only a few comments thus far.\r\n> \r\n> > The unused server-ss certificate has been removed entirely.\r\n> \r\n> Nice catch, this seems to have been unused since the original import of the SSL\r\n> test suite. To cut down scope of the patch (even if only a small bit) I\r\n> propose to apply this separately first, as per the attached.\r\n\r\nLGTM.\r\n\r\n> > - Serial number collisions are less likely, thanks to Andrew's idea to\r\n> > use the current clock time as the initial serial number in a series.\r\n> \r\n> +my $serialno = `openssl x509 -serial -noout -in ssl/client.crt`;\r\n> +$serialno =~ s/^serial=//;\r\n> +$serialno = hex($serialno); # OpenSSL prints serial numbers in hexadecimal\r\n> \r\n> Will that work on Windows? We don't currently require the openssl binary to be\r\n> in PATH unless one wants to rebuild sslfiles (which it is quite likely to be\r\n> but there should at least be errorhandling covering when it's not).\r\n\r\nHm, that's a good point. 
It should be easy enough for me to add a\r\nfallback if the invocation fails; I'll give it a shot tomorrow.\r\n\r\n> > - I am making _heavy_ use of GNU Make-isms, which does not improve\r\n> > long-term maintainability.\r\n> \r\n> GNU Make is already a requirement, I don't see this shifting the needle in any\r\n> direction.\r\n\r\nAs long as the .SECONDARYEXPANSION magic is clear enough to others, I'm\r\nhappy.\r\n\r\nThanks!\r\n--Jacob\r\n",
"msg_date": "Wed, 28 Jul 2021 20:10:53 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "\nOn 7/28/21 4:10 PM, Jacob Champion wrote:\n>\n>>> - I am making _heavy_ use of GNU Make-isms, which does not improve\n>>> long-term maintainability.\n>> GNU Make is already a requirement, I don't see this shifting the needle in any\n>> direction.\n> As long as the .SECONDARYEXPANSION magic is clear enough to others, I'm\n> happy.\n>\n\nWe don't currently have any, and so many of us (including me) will have\nto learn to understand it. But that's not to say it's unacceptable. If\nthere's no new infrastructure requirement then I'm OK with it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 16:45:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "Jacob Champion <pchampion@vmware.com> writes:\n> On Wed, 2021-07-28 at 00:24 +0200, Daniel Gustafsson wrote:\n>> GNU Make is already a requirement, I don't see this shifting the needle in any\n>> direction.\n\nUm ... the existing requirement is for gmake 3.80 or newer;\nif you want to use newer features we'd have to have a discussion\nabout whether it's worthwhile to move that goalpost.\n\n> As long as the .SECONDARYEXPANSION magic is clear enough to others, I'm\n> happy.\n\nAfter reading the gmake docs about that, I'd have to say it's\nlikely to be next door to unmaintainable. Do we really have\nto be that cute? And, AFAICT, it's not in 3.80.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Jul 2021 17:02:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 28 Jul 2021, at 23:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jacob Champion <pchampion@vmware.com> writes:\n\n>> As long as the .SECONDARYEXPANSION magic is clear enough to others, I'm\n>> happy.\n> \n> After reading the gmake docs about that, I'd have to say it's likely to be next\n> door to unmaintainable.\n\n\nPersonally, I don’t think it’s that bad, but mileage varies. It’s obviously a\nshow-stopper if maintainers don’t feel comfortable with it.\n\n> And, AFAICT, it's not in 3.80.\n\nThat however, is a very good point that I missed. I think it’s a good tool,\nbut probably not enough to bump the requirement.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 28 Jul 2021 23:09:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Wed, 2021-07-28 at 23:09 +0200, Daniel Gustafsson wrote:\r\n> > On 28 Jul 2021, at 23:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> > Jacob Champion <pchampion@vmware.com> writes:\r\n> > > As long as the .SECONDARYEXPANSION magic is clear enough to others, I'm\r\n> > > happy.\r\n> > \r\n> > After reading the gmake docs about that, I'd have to say it's likely to be next\r\n> > door to unmaintainable.\r\n> \r\n> Personally, I don’t think it’s that bad, but mileage varies. It’s obviously a\r\n> show-stopper if maintainers don’t feel comfortable with it.\r\n> \r\n> > And, AFAICT, it's not in 3.80.\r\n> \r\n> That however, is a very good point that I missed. I think it’s a good tool,\r\n> but probably not enough to bump the requirement.\r\n\r\nNo worries, it's easy enough to unroll the expansion manually. The\r\nannoyances without secondary expansion are the duplicated lines for\r\neach individual CA and the need to introduce .INTERMEDIATE targets so\r\nthat cleanup works as intended.\r\n\r\nAttached is a v3 that does that, and introduces a fallback in case\r\nopenssl isn't on the PATH. I also missed a Makefile dependency on\r\ncas.config the first time through, which has been fixed. The patch you\r\npulled out earlier is 0001 in the set.\r\n\r\n--Jacob",
"msg_date": "Fri, 30 Jul 2021 15:11:49 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 03:11:49PM +0000, Jacob Champion wrote:\n> No worries, it's easy enough to unroll the expansion manually. The\n> annoyances without secondary expansion are the duplicated lines for\n> each individual CA and the need to introduce .INTERMEDIATE targets so\n> that cleanup works as intended.\n> \n> Attached is a v3 that does that, and introduces a fallback in case\n> openssl isn't on the PATH. I also missed a Makefile dependency on\n> cas.config the first time through, which has been fixed. The patch you\n> pulled out earlier is 0001 in the set.\n\nPatch 0001 is a good cleanup. Daniel, are you planning to apply that?\n\nRegarding 0002, I am not sure. Even if this reduces a lot of\nduplication, which is really nice, enforcing .SECONDARY to not trigger\nwith a change impacting Makefile.global.in does not sound very\nappealing to me in the long-run, TBH.\n--\nMichael",
"msg_date": "Tue, 10 Aug 2021 16:22:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 10 Aug 2021, at 09:22, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Jul 30, 2021 at 03:11:49PM +0000, Jacob Champion wrote:\n>> No worries, it's easy enough to unroll the expansion manually. The\n>> annoyances without secondary expansion are the duplicated lines for\n>> each individual CA and the need to introduce .INTERMEDIATE targets so\n>> that cleanup works as intended.\n>> \n>> Attached is a v3 that does that, and introduces a fallback in case\n>> openssl isn't on the PATH. I also missed a Makefile dependency on\n>> cas.config the first time through, which has been fixed. The patch you\n>> pulled out earlier is 0001 in the set.\n> \n> Patch 0001 is a good cleanup. Daniel, are you planning to apply that?\n\nYes, it’s on my todo for today.\n\n> Regarding 0002, I am not sure. Even if this reduces a lot of\n> duplication, which is really nice, enforcing .SECONDARY to not trigger\n> with a change impacting Makefile.global.in does not sound very\n> appealing to me in the long-run, TBH.\n\nI personally think the increased readability and usability from what we have\ntoday warrants the changes. Is it the use of .SECONDARY or the change in the\nglobal Makefile you object to (or both)?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 09:36:14 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Regarding 0002, I am not sure. Even if this reduces a lot of\n> duplication, which is really nice, enforcing .SECONDARY to not trigger\n> with a change impacting Makefile.global.in does not sound very\n> appealing to me in the long-run, TBH.\n\nYeah, I don't like that change either. It would be one thing if\nwe had several places in which suppressing .SECONDARY was useful,\nbut if there's only one then it seems better to design around it.\n\nAs a concrete example of why this might be a bad idea, how sure\nare you that noplace in Makefile.global depends on that behavior?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Aug 2021 11:26:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 09:36:14AM +0200, Daniel Gustafsson wrote:\n> I personally think the increased readability and usability from what we have\n> today warrants the changes. Is it the use of .SECONDARY or the change in the\n> global Makefile you object to (or both)?\n\nThe part I am mainly objecting to is the change in Makefile.global.in,\nbut I have to admit after thinking about it that enforcing SECONDARY\nmay not be a good idea if other parts of the system rely on that, so\nencouraging the use of clean_intermediates may be dangerous (Tom's\npoint from upthread).\n\nI have not tried so I am not sure, but perhaps we should just focus on\nreducing the number of openssl commands rather than making easier the\nintegration of new files? It could be possible to close the gap with\nthe addition of new files with some more documentation for future\nhackers then?\n--\nMichael",
"msg_date": "Wed, 11 Aug 2021 09:47:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Tue, 2021-08-10 at 16:22 +0900, Michael Paquier wrote:\r\n> Regarding 0002, I am not sure. Even if this reduces a lot of\r\n> duplication, which is really nice, enforcing .SECONDARY to not trigger\r\n> with a change impacting Makefile.global.in does not sound very\r\n> appealing to me in the long-run, TBH.\r\n\r\nDe-duplication isn't the primary goal of the .SECONDARY change. It\r\ndefinitely helps with that, but the major improvement is that Make can\r\nmaintain the CA state with less hair-pulling:\r\n\r\n1. Developer updates an arbitrary number of certificates and runs\r\n `make sslfiles`.\r\n2. Make sees that the CA state is missing, and creates it once at the\r\n start of the run.\r\n3. Make runs multiple `openssl ca` commands, depending on the\r\n certificates being changed, which modify the CA state as they go.\r\n4. After Make is finished, it removes all the CA files, putting your\r\n local state back the way it was before you ran Make.\r\n\r\nDoing it this way has several advantages:\r\n\r\n- The magic state files don't persist to influence a future Make run,\r\n so there's less chance of \"I generated with local changes, then\r\n pulled in the changes you made, and now everything's busted in weird\r\n ways because my CA state disagrees with what's in the tree\".\r\n\r\n- Order-only intermediates do The Right Thing in this case -- create\r\n once when needed, accumulate state during the run, remove at the end\r\n -- whether you add a single certificate, or regenerate the entire\r\n tree, or even Ctrl-C halfway through. 
That's going to be very hard to\r\n imitate by sprinkling `rm`s like the current Makefile does, though\r\n I've been the weeds long enough that maybe I'm missing an obvious\r\n workaround.\r\n\r\n- If, after all that, something still goes wrong (your machine crashes\r\n so Make can't clean up), `git status` will now help you debug\r\n dependency problems because it's no longer \"normal\" to carry the\r\n intermediate litter in your source tree.\r\n\r\nWhat it doesn't fix is the fact that we're still checking in generated\r\nfiles that have interdependencies, so the timestamps Make is looking at\r\nare still going to be wrong during initial checkout. That problem\r\nexisted before and will persist after this change.\r\n\r\nOn Wed, 2021-08-11 at 09:47 +0900, Michael Paquier wrote:\r\n> The part I am mainly objecting to is the change in Makefile.global.in,\r\n> but I have to admit after thinking about it that enforcing SECONDARY\r\n> may not be a good idea if other parts of the system rely on that, so\r\n> encouraging the use of clean_intermediates may be dangerous (Tom's\r\n> point from upthread).\r\n\r\nI don't really want to encourage the use of clean_intermediates. I just\r\nwant Make to have its default, useful behavior for this one Makefile.\r\n\r\n> I have not tried so I am not sure, but perhaps we should just focus on\r\n> reducing the number of openssl commands rather than making easier the\r\n> integration of new files? It could be possible to close the gap with\r\n> the addition of new files with some more documentation for future\r\n> hackers then?\r\n\r\nI'd rather fix the dependency/state bugs than document how to work\r\naround them. I know the workarounds; it doesn't make working with this\r\nMakefile any less maddening.\r\n\r\n--Jacob\r\n",
"msg_date": "Fri, 13 Aug 2021 00:08:05 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
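The order-only-intermediate pattern described in that message can be sketched as a minimal Makefile fragment. This is an illustrative assumption, not the actual patch: `ca.index`/`ca.serial` stand in for the CA state files, and it presumes no bare `.SECONDARY:` target is in effect, which is exactly what the thread is debating.

```make
# Hypothetical sketch of the pattern.  The order-only prerequisites
# (after the |) are created once per run when missing, but their
# timestamps never force certificate rebuilds.
.INTERMEDIATE: ca.index ca.serial

ssl/server.crt: ssl/server.csr | ca.index ca.serial
	openssl ca -batch -config cas.config -in $< -out $@

# Because they are intermediates, Make deletes these again at the end
# of the run -- unless a global bare .SECONDARY: suppresses that.
ca.index:
	touch $@

ca.serial:
	echo 01 > $@
```

This is why the behavior is hard to imitate with hand-placed `rm`s: the automatic cleanup also fires correctly on partial rebuilds and on interruption.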
{
"msg_contents": "On Tue, 2021-08-10 at 11:26 -0400, Tom Lane wrote:\r\n> Yeah, I don't like that change either. It would be one thing if\r\n> we had several places in which suppressing .SECONDARY was useful,\r\n> but if there's only one then it seems better to design around it.\r\n\r\nMaybe. The current Makefile already tried to design around it, with\r\n`rm`s inserted various places. That strategy won't work for the CA\r\nstate, and my personal interest in trying to manually replicate built-\r\nin Make features is... low.\r\n\r\n> As a concrete example of why this might be a bad idea, how sure\r\n> are you that noplace in Makefile.global depends on that behavior?\r\n\r\nI was hoping that, by scoping the change to only a single Makefile with\r\nthe clean_intermediates flag, I could simplify that question to \"does\r\nany place in that one Makefile rely on an affected rule from\r\nMakefile.global?\" And the answer to that appears to be \"no\" at the\r\nmoment, because that Makefile doesn't really need the globals for\r\nanything but the prove_ macros.\r\n\r\n(Things would get hairier if someone included the SSL Makefile\r\nsomewhere else, but I don't see anyone doing that now and I don't know\r\nwhy someone would.)\r\n\r\nBut -- if I do spend the time to answer your broader question, does it\r\nactually help my case? Someone could always add more stuff to\r\nMakefile.global. It sounds like the actual fear is that we don't\r\nunderstand what might be interacting with a very broad global target,\r\nand that fear is too great to try a scoped change, in a niche Makefile,\r\nearly in a release cycle, to improve a development issue multiple\r\ncommitters have now complained about.\r\n\r\nIf _that's_ the case, then our build system is holding all of us\r\nhostage. Which is frustrating to me. Should I shift focus to help with\r\nthat, first?\r\n\r\n--Jacob\r\n",
"msg_date": "Fri, 13 Aug 2021 00:08:16 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 12:08:16AM +0000, Jacob Champion wrote:\n> (Things would get hairier if someone included the SSL Makefile\n> somewhere else, but I don't see anyone doing that now and I don't know\n> why someone would.)\n\nThat would not happen. Hopefully.\n\n> But -- if I do spend the time to answer your broader question, does it\n> actually help my case? Someone could always add more stuff to\n> Makefile.global. It sounds like the actual fear is that we don't\n> understand what might be interacting with a very broad global target,\n> and that fear is too great to try a scoped change, in a niche Makefile,\n> early in a release cycle, to improve a development issue multiple\n> committers have now complained about.\n> \n> If _that's_ the case, then our build system is holding all of us\n> hostage. Which is frustrating to me. Should I shift focus to help with\n> that, first?\n\nFresh ideas in this area are welcome, yes. FWIW, I'll try to spend a\ncouple of hours on what you had upthread in 0002 for the\nsimplification of SSL stuff generation and see if I can come up with\nsomething.\n--\nMichael",
"msg_date": "Fri, 27 Aug 2021 15:02:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Fri, 2021-08-27 at 15:02 +0900, Michael Paquier wrote:\r\n> On Fri, Aug 13, 2021 at 12:08:16AM +0000, Jacob Champion wrote:\r\n> > (Things would get hairier if someone included the SSL Makefile\r\n> > somewhere else, but I don't see anyone doing that now and I don't know\r\n> > why someone would.)\r\n> \r\n> That would not happen. Hopefully.\r\n\r\n:)\r\n\r\n> FWIW, I'll try to spend a\r\n> couple of hours on what you had upthread in 0002 for the\r\n> simplification of SSL stuff generation and see if I can come up with\r\n> something.\r\n\r\nThanks! The two-patch v3 no longer applies so I've attached a v4 to\r\nmake the cfbot happy.\r\n\r\n--Jacob",
"msg_date": "Wed, 1 Sep 2021 16:12:44 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Fri, 2021-08-27 at 15:02 +0900, Michael Paquier wrote:\r\n> On Fri, Aug 13, 2021 at 12:08:16AM +0000, Jacob Champion wrote:\r\n> > If _that's_ the case, then our build system is holding all of us\r\n> > hostage. Which is frustrating to me. Should I shift focus to help with\r\n> > that, first?\r\n> \r\n> Fresh ideas in this area are welcome, yes.\r\n\r\nSince the sslfiles target is its own little island in the dependency\r\ngraph (it doesn't need anything from Makefile.global), would it be\r\nacceptable to just move it to a standalone sslfiles.mk that the\r\nMakefile defers to? E.g.\r\n\r\n sslfiles:\r\n $(MAKE) -f sslfiles.mk\r\nThen we wouldn't have to touch Makefile.global at all, because\r\nsslfiles.mk wouldn't need to include it. This also reduces .NOTPARALLEL\r\npollution as a bonus.\r\n\r\n--Jacob\r\n",
"msg_date": "Thu, 2 Sep 2021 00:09:49 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
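The deferral proposed above amounts to something like the following sketch in the main Makefile (the committed version may differ; the target pass-through via `$@` is an assumption that lets clean targets reuse the same rule):

```make
# Main Makefile keeps Makefile.global for the prove_ machinery, but
# hands all certificate work to a standalone sslfiles.mk that includes
# nothing -- so global targets such as .SECONDARY cannot leak into it.
sslfiles sslfiles-clean:
	$(MAKE) -f sslfiles.mk $@
```

Because `sslfiles.mk` never includes `Makefile.global`, the question of what else in the tree depends on `.SECONDARY` disappears entirely for this Makefile.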
{
"msg_contents": "\nOn 9/1/21 8:09 PM, Jacob Champion wrote:\n> On Fri, 2021-08-27 at 15:02 +0900, Michael Paquier wrote:\n>> On Fri, Aug 13, 2021 at 12:08:16AM +0000, Jacob Champion wrote:\n>>> If _that's_ the case, then our build system is holding all of us\n>>> hostage. Which is frustrating to me. Should I shift focus to help with\n>>> that, first?\n>> Fresh ideas in this area are welcome, yes.\n> Since the sslfiles target is its own little island in the dependency\n> graph (it doesn't need anything from Makefile.global), would it be\n> acceptable to just move it to a standalone sslfiles.mk that the\n> Makefile defers to? E.g.\n>\n> sslfiles:\n> $(MAKE) -f sslfiles.mk\n> Then we wouldn't have to touch Makefile.global at all, because\n> sslfiles.mk wouldn't need to include it. This also reduces .NOTPARALLEL\n> pollution as a bonus.\n>\n\nI had he same thought yesterday, so I like the idea :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 2 Sep 2021 07:09:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Thu, 2021-09-02 at 07:09 -0400, Andrew Dunstan wrote:\r\n> \r\n> I had he same thought yesterday, so I like the idea :-)\r\n\r\nDone that way in v5. It's a lot of moved code, so I've kept it as two\r\ncommits for review purposes.\r\n\r\nThanks!\r\n--Jacob",
"msg_date": "Thu, 2 Sep 2021 16:42:14 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Thu, Sep 02, 2021 at 04:42:14PM +0000, Jacob Champion wrote:\n> Done that way in v5. It's a lot of moved code, so I've kept it as two\n> commits for review purposes.\n\nNice. This is neat. The split helps a lot to understand how you've\nchanged things from the original implementation. As a whole, this\nlooks rather committable to me.\n\nOne small-ish comment that I have is about all the .config files we\nhave at the root of src/test/ssl/, bloating the whole. I think that\nit would be a bit cleaner to put all of them in a different\nsub-directory, say just config/ or conf/.\n--\nMichael",
"msg_date": "Fri, 3 Sep 2021 09:46:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Fri, 2021-09-03 at 09:46 +0900, Michael Paquier wrote:\r\n> Nice. This is neat. The split helps a lot to understand how you've\r\n> changed things from the original implementation. As a whole, this\r\n> looks rather committable to me.\r\n\r\nGreat!\r\n\r\n> One small-ish comment that I have is about all the .config files we\r\n> have at the root of src/test/ssl/, bloating the whole. I think that\r\n> it would be a bit cleaner to put all of them in a different\r\n> sub-directory, say just config/ or conf/.\r\n\r\nThat sounds reasonable. I won't be able to get to it before the holiday\r\nweekend, but I can put up a patch sometime next week.\r\n\r\nThanks,\r\n--Jacob\r\n",
"msg_date": "Fri, 3 Sep 2021 23:21:59 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Fri, 2021-09-03 at 23:21 +0000, Jacob Champion wrote:\r\n> > One small-ish comment that I have is about all the .config files we\r\n> > have at the root of src/test/ssl/, bloating the whole. I think that\r\n> > it would be a bit cleaner to put all of them in a different\r\n> > sub-directory, say just config/ or conf/.\r\n> \r\n> That sounds reasonable. I won't be able to get to it before the holiday\r\n> weekend, but I can put up a patch sometime next week.\r\n\r\nDone in v6, a three-patch squashable set. I chose conf/ as the\r\ndirectory.\r\n\r\n--Jacob",
"msg_date": "Wed, 8 Sep 2021 19:32:03 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Wed, Sep 08, 2021 at 07:32:03PM +0000, Jacob Champion wrote:\n> On Fri, 2021-09-03 at 23:21 +0000, Jacob Champion wrote:\n> > > One small-ish comment that I have is about all the .config files we\n> > > have at the root of src/test/ssl/, bloating the whole. I think that\n> > > it would be a bit cleaner to put all of them in a different\n> > > sub-directory, say just config/ or conf/.\n> > \n> > That sounds reasonable. I won't be able to get to it before the holiday\n> > weekend, but I can put up a patch sometime next week.\n> \n> Done in v6, a three-patch squashable set. I chose conf/ as the\n> directory.\n\nLooks sensible to me. One thing I can see, while poking at it, is\nthat the README mentions sslfiles to recreate the set of files. But\nit is necessary to do sslfiles-clean once, as sslfiles is a no-op if\nthe set of files exists.\n\nI have not been able to check that this is compatible across all the\nversions of OpenSSL we support on HEAD. By looking at the code, that\nshould be fine but it would be good to be sure.\n\nDaniel, you are registered as a reviewer of this thread\n(https://commitfest.postgresql.org/34/3029/). So I guess that you\nwould prefer to look at that by yourself and perhaps take care of the\ncommit?\n--\nMichael",
"msg_date": "Thu, 9 Sep 2021 10:32:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 9 Sep 2021, at 03:32, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Sep 08, 2021 at 07:32:03PM +0000, Jacob Champion wrote:\n>> On Fri, 2021-09-03 at 23:21 +0000, Jacob Champion wrote:\n>>>> One small-ish comment that I have is about all the .config files we\n>>>> have at the root of src/test/ssl/, bloating the whole. I think that\n>>>> it would be a bit cleaner to put all of them in a different\n>>>> sub-directory, say just config/ or conf/.\n>>> \n>>> That sounds reasonable. I won't be able to get to it before the holiday\n>>> weekend, but I can put up a patch sometime next week.\n>> \n>> Done in v6, a three-patch squashable set. I chose conf/ as the\n>> directory.\n> \n> Looks sensible to me.\n\nI concur, I like this more readable approach and it makes adding new keys and\ncertificates easier.\n\n> One thing I can see, while poking at it, is\n> that the README mentions sslfiles to recreate the set of files. But\n> it is necessary to do sslfiles-clean once, as sslfiles is a no-op if\n> the set of files exists.\n\nA few things noted (and fixed as per the attached, which is v6 squashed and\nrebased on HEAD; commitmessage hasn't been addressed yet) while reviewing:\n\n* Various comment reflowing to fit within 79 characters\n\n* Pass through the clean targets into sslfiles.mk rather than rewrite them to\nmake clean (even if they are the same in sslfiles.mk).\n\n* Moved the lists of keys/certs to be one object per line to match the style\nintroduced in 01368e5d9. The previous Makefile was violating that as well, but\nwhen we're fixing this file for other things we might as well fix that too.\n\n* Bumped the password protected key output to AES256 to match the server files,\nsince it's more realistic to see than AES128 (and I was fiddling around here\nanyways testing different things, see below).\n\n> I have not been able to check that this is compatible across all the\n> versions of OpenSSL we support on HEAD. 
By looking at the code, that\n> should be fine but it would be good to be sure.\n\nThe submitted patch works for 1.0.1, 1.0.2 and 1.1.1 when running the below\nsequence:\n\n\tmake check\n\tmake ssfiles-clean\n\tmake sslfiles\n\tmake check\n\nTesting what's in the tree, recreating the keys/certs and testing against the\nnew ones.\n\nIn OpenSSL 3.0.0, the final make check on the generated files breaks on the\nencrypted DER key. The key generates fine, and running \"openssl rsa ..\n-check\" validates it, but it fails to load into postgres. Removing the\nexplicit selection of cipher makes the test pass again (also included in the\nattached). I haven't been able to track down exactly what the issue is, but I\nhave a suspicion that it's in OpenSSL rather than libpq. This issue is present\nin master too, so fixing it is orthogonal to this work, but it needs to be\nfigured out.\n\nAnother point of interest here is that 3.0.0 put \"openssl rsa\" in maintenance\nmode, so maybe we'll have to look at supporting \"openssl pkeyutl\" as well for\nsome parts should future bugs remain unfixed in the rsa command.\n\n> Daniel, you are registered as a reviewer of this thread\n> (https://commitfest.postgresql.org/34/3029/). So I guess that you\n> would prefer to look at that by yourself and perhaps take care of the\n> commit?\n\nSure, I can take care of it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 13 Sep 2021 15:04:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
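As standalone commands, the conversions in the quoted recipes amount to roughly the following sketch. The key size and output file names are illustrative (mirroring the recipe names), the password is the one from the quoted Makefile, and cross-version OpenSSL behavior is precisely the open question in this review.

```shell
# Generate a client key, then the two variants from the quoted recipes.
openssl genrsa -out client.key 2048

# Encrypted PEM (X.509 text) form, bumped to AES-256 in the review:
openssl rsa -in client.key -outform PEM -aes256 \
    -passout 'pass:dUmmyP^#+' -out client-encrypted-pem.key

# DER (ASN.1) form; the reviewed patch drops the explicit cipher here
# because OpenSSL 3.0.0 fails to load an explicitly-ciphered DER key:
openssl rsa -in client.key -outform DER \
    -passout 'pass:dUmmyP^#+' -out client-encrypted-der.key
```

Running `openssl rsa -in client-encrypted-pem.key -check -passin 'pass:dUmmyP^#+'` is a quick way to confirm the encrypted key round-trips, independent of whether libpq can load it.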
{
"msg_contents": "On Mon, 2021-09-13 at 15:04 +0200, Daniel Gustafsson wrote:\r\n> A few things noted (and fixed as per the attached, which is v6 squashed and\r\n> rebased on HEAD; commitmessage hasn't been addressed yet) while reviewing:\r\n> \r\n> * Various comment reflowing to fit within 79 characters\r\n> \r\n> * Pass through the clean targets into sslfiles.mk rather than rewrite them to\r\n> make clean (even if they are the same in sslfiles.mk).\r\n> \r\n> * Moved the lists of keys/certs to be one object per line to match the style\r\n> introduced in 01368e5d9. The previous Makefile was violating that as well, but\r\n> when we're fixing this file for other things we might as well fix that too.\r\n\r\nLooks good, thanks!\r\n\r\n> * Bumped the password protected key output to AES256 to match the server files,\r\n> since it's more realistic to see than AES128 (and I was fiddling around here\r\n> anyways testing different things, see below).\r\n\r\nFew thoughts about this part of the diff:\r\n\r\n> -# Convert client.key to encrypted PEM (X.509 text) and DER (X.509 ASN.1) formats\r\n> -# to test libpq's support for the sslpassword= option.\r\n> -ssl/client-encrypted-pem.key: outform := PEM\r\n> -ssl/client-encrypted-der.key: outform := DER\r\n> +# Convert client.key to encrypted PEM (X.509 text) and DER (X.509 ASN.1)\r\n> +# formats to test libpq's support for the sslpassword= option.\r\n> ssl/client-encrypted-pem.key ssl/client-encrypted-der.key: ssl/client.key\r\n> - openssl rsa -in $< -outform $(outform) -aes128 -passout 'pass:dUmmyP^#+' -out $@\r\n> + openssl rsa -in $< -outform PEM -aes256 -passout 'pass:dUmmyP^#+' -out $@\r\n> +ssl/client-encrypted-der.key: ssl/client.key\r\n> + openssl rsa -in $< -outform DER -passout 'pass:dUmmyP^#+' -out $@\r\n\r\n1. Should the DER key be AES256 as well?\r\n2. The ssl/client-encrypted-der.key target for the first recipe should\r\nbe removed; I get a duplication warning from Make.\r\n3. 
The new client key will need to be included in the patch; the one\r\nthere now is still the AES128 version.\r\n\r\nAnd one doc comment:\r\n\r\n> ssl/ subdirectory. The Makefile also contains a rule, \"make sslfiles\", to\r\n> -recreate them if you need to make changes.\r\n> +recreate them if you need to make changes. \"make sslfiles-clean\" is required\r\n> +in order to recreate.\r\n\r\nThis is only true if you need to rebuild the entire tree; if you just\r\nwant to recreate a single cert pair, you can just touch the config file\r\nfor it (or remove the key, if you want to regenerate the pair) and\r\n`make sslfiles` again.\r\n\r\n> The submitted patch works for 1.0.1, 1.0.2 and 1.1.1 when running the below\r\n> sequence:\r\n> \r\n> make check\r\n> make ssfiles-clean\r\n> make sslfiles\r\n> make check\r\n> \r\n> Testing what's in the tree, recreating the keys/certs and testing against the\r\n> new ones.\r\n\r\nGreat, thanks!\r\n\r\n> In OpenSSL 3.0.0, the final make check on the generated files breaks on the\r\n> encrypted DER key. The key generates fine, and running \"openssl rsa ..\r\n> -check\" validates it, but it fails to load into postgres. Removing the\r\n> explicit selection of cipher makes the test pass again (also included in the\r\n> attached). I haven't been able to track down exactly what the issue is, but I\r\n> have a suspicion that it's in OpenSSL rather than libpq. This issue is present\r\n> in master too, so fixing it is orthogonal to this work, but it needs to be\r\n> figured out.\r\n> \r\n> Another point of interest here is that 3.0.0 put \"openssl rsa\" in maintenance\r\n> mode, so maybe we'll have to look at supporting \"openssl pkeyutl\" as well for\r\n> some parts should future bugs remain unfixed in the rsa command.\r\n\r\nGood to know. Agreed that it should be a separate patch.\r\n\r\n--Jacob\r\n",
"msg_date": "Tue, 14 Sep 2021 22:14:16 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 15 Sep 2021, at 00:14, Jacob Champion <pchampion@vmware.com> wrote:\n> On Mon, 2021-09-13 at 15:04 +0200, Daniel Gustafsson wrote:\n\n>> -# Convert client.key to encrypted PEM (X.509 text) and DER (X.509 ASN.1) formats\n>> -# to test libpq's support for the sslpassword= option.\n>> -ssl/client-encrypted-pem.key: outform := PEM\n>> -ssl/client-encrypted-der.key: outform := DER\n>> +# Convert client.key to encrypted PEM (X.509 text) and DER (X.509 ASN.1)\n>> +# formats to test libpq's support for the sslpassword= option.\n>> ssl/client-encrypted-pem.key ssl/client-encrypted-der.key: ssl/client.key\n>> - openssl rsa -in $< -outform $(outform) -aes128 -passout 'pass:dUmmyP^#+' -out $@\n>> + openssl rsa -in $< -outform PEM -aes256 -passout 'pass:dUmmyP^#+' -out $@\n>> +ssl/client-encrypted-der.key: ssl/client.key\n>> + openssl rsa -in $< -outform DER -passout 'pass:dUmmyP^#+' -out $@\n> \n> 1. Should the DER key be AES256 as well?\n\nIt should, but then it fails to load by postgres, my email wasn't clear about\nthis, sorry. The diff to revert from aes256 (and aes128 for that matter) is to\nmake the key load at all.\n\n> 2. The ssl/client-encrypted-der.key target for the first recipe should\n> be removed; I get a duplication warning from Make.\n\nInteresting, I didn't see that, will check.\n\n> 3. The new client key will need to be included in the patch; the one\n> there now is still the AES128 version.\n\nGood point, that's a reason to keep it aes128 until the encrypter DER key in\n3.0.0 issue has been fixed.\n\n> And one doc comment:\n> \n>> ssl/ subdirectory. The Makefile also contains a rule, \"make sslfiles\", to\n>> -recreate them if you need to make changes.\n>> +recreate them if you need to make changes. 
\"make sslfiles-clean\" is required\n>> +in order to recreate.\n> \n> This is only true if you need to rebuild the entire tree; if you just\n> want to recreate a single cert pair, you can just touch the config file\n> for it (or remove the key, if you want to regenerate the pair) and\n> `make sslfiles` again.\n\nCorrect. In my head, \"rebuild\" is when dealing with individually changed files\nand \"recreate\" means rebuild everything regardless. Thats just my in my head\nthough, so clearly the wording should be expanded. Will do.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 15 Sep 2021 00:31:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 12:31:31AM +0200, Daniel Gustafsson wrote:\n> Correct. In my head, \"rebuild\" is when dealing with individually changed files\n> and \"recreate\" means rebuild everything regardless. Thats just my in my head\n> though, so clearly the wording should be expanded. Will do.\n\nSo this will be addressed and perhaps merged into the tree, right?\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 15:59:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 1 Oct 2021, at 08:59, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Sep 15, 2021 at 12:31:31AM +0200, Daniel Gustafsson wrote:\n>> Correct. In my head, \"rebuild\" is when dealing with individually changed files\n>> and \"recreate\" means rebuild everything regardless. Thats just my in my head\n>> though, so clearly the wording should be expanded. Will do.\n> \n> So this will be addressed and perhaps merged into the tree, right?\n\nYes, I'm finalizing testing of this patch across platforms and OpenSSL versions.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 09:02:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 1 Oct 2021, at 09:02, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 1 Oct 2021, at 08:59, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Wed, Sep 15, 2021 at 12:31:31AM +0200, Daniel Gustafsson wrote:\n>>> Correct. In my head, \"rebuild\" is when dealing with individually changed files\n>>> and \"recreate\" means rebuild everything regardless. Thats just my in my head\n>>> though, so clearly the wording should be expanded. Will do.\n>> \n>> So this will be addressed and perhaps merged into the tree, right?\n> \n> Yes, I'm finalizing testing of this patch across platforms and OpenSSL versions.\n\nThe attached contains the requested fixes, and has been tested on all (by us)\nsupported versions of OpenSSL. Unless there are objections I will apply this\nto master shortly.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 15 Oct 2021 14:01:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "> On 15 Oct 2021, at 14:01, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> The attached contains the requested fixes, and has been tested on all (by us)\n> supported versions of OpenSSL. Unless there are objections I will apply this\n> to master shortly.\n\nWhich is now done.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 19 Oct 2021 20:21:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
},
{
"msg_contents": "On Tue, 2021-10-19 at 20:21 +0200, Daniel Gustafsson wrote:\r\n> > On 15 Oct 2021, at 14:01, Daniel Gustafsson <daniel@yesql.se> wrote:\r\n> > The attached contains the requested fixes, and has been tested on all (by us)\r\n> > supported versions of OpenSSL. Unless there are objections I will apply this\r\n> > to master shortly.\r\n> \r\n> Which is now done.\r\n\r\nThanks very much! Hopefully this makes that area of the code easier on\r\neveryone.\r\n\r\n--Jacob\r\n",
"msg_date": "Tue, 19 Oct 2021 18:25:01 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] test/ssl: rework the sslfiles Makefile target"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing \"autoanalyze on partitioned table\" patch, I realized\nthat pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\nOn the other hand, n_ins_since_vacuum is reset. I think it should be\nreset because otherwise we end up unnecessarily triggering autoanalyze\neven when the actual number of newly inserted tuples is less than the\nautoanalyze thresholds.\n\nSteps to reproduce:\n\n=# create table t (c int);\n=# insert into t values (1), (2), (3);\n=# update t set c = 999;\n=# select relname, n_mod_since_analyze, n_ins_since_vacuum from\npg_stat_user_tables;\n relname | n_mod_since_analyze | n_ins_since_vacuum\n---------+---------------------+--------------------\n t | 6 | 3\n(1 row)\n\n=# truncate t;\n=# select relname, n_mod_since_analyze, n_ins_since_vacuum from\npg_stat_user_tables;\n relname | n_mod_since_analyze | n_ins_since_vacuum\n---------+---------------------+--------------------\n t | 6 | 0\n(1 row)\n\nI've attached a small patch to fix this. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 4 Mar 2021 10:35:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "n_mod_since_analyze isn't reset at table truncation"
},
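For context on why a stale counter matters, autovacuum's analyze decision compares n_mod_since_analyze against a threshold derived from the table size. The query below sketches that check in SQL, approximating the logic in autovacuum.c and assuming the default GUC values (autovacuum_analyze_threshold = 50, autovacuum_analyze_scale_factor = 0.1); it is illustrative, not the server's actual code path.

```sql
-- Approximate sketch of autovacuum's analyze test with default GUCs.
-- A n_mod_since_analyze left unreset by TRUNCATE keeps the comparison
-- true and can trigger an unnecessary autoanalyze on an empty table.
SELECT s.relname,
       s.n_mod_since_analyze,
       50 + 0.1 * c.reltuples AS analyze_threshold,
       s.n_mod_since_analyze > 50 + 0.1 * c.reltuples AS would_autoanalyze
FROM pg_stat_user_tables AS s
JOIN pg_class AS c ON c.oid = s.relid
WHERE s.relname = 't';
```

After a TRUNCATE, reltuples drops toward zero, so the threshold collapses to the base value of 50 while the stale modification count from before the truncation still exceeds it.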
{
"msg_contents": "On Thu, Mar 04, 2021 at 10:35:19AM +0900, Masahiko Sawada wrote:\n> \n> While reviewing \"autoanalyze on partitioned table\" patch, I realized\n> that pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\n> On the other hand, n_ins_since_vacuum is reset. I think it should be\n> reset because otherwise we end up unnecessarily triggering autoanalyze\n> even when the actual number of newly inserted tuples is less than the\n> autoanalyze thresholds.\n\nAgreed.\n\n> I've attached a small patch to fix this. Please review it.\n\nThe patch looks sensible to me, but the stats.sql (around l. 158) test should\nbe modified to also check for effect on that field.\n\n\n",
"msg_date": "Thu, 4 Mar 2021 10:24:25 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "\n\nOn 2021/03/04 11:24, Julien Rouhaud wrote:\n> On Thu, Mar 04, 2021 at 10:35:19AM +0900, Masahiko Sawada wrote:\n>>\n>> While reviewing \"autoanalyze on partitioned table\" patch, I realized\n>> that pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\n>> On the other hand, n_ins_since_vacuum is reset. I think it should be\n>> reset because otherwise we end up unnecessarily triggering autoanalyze\n>> even when the actual number of newly inserted tuples is less than the\n>> autoanalyze thresholds.\n\nIn that case, conversely, we want to trigger autoanalyze ASAP because the contents in the table was changed very much?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 4 Mar 2021 12:21:14 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Thu, Mar 04, 2021 at 12:21:14PM +0900, Fujii Masao wrote:\n> \n> \n> On 2021/03/04 11:24, Julien Rouhaud wrote:\n> > On Thu, Mar 04, 2021 at 10:35:19AM +0900, Masahiko Sawada wrote:\n> > > \n> > > While reviewing \"autoanalyze on partitioned table\" patch, I realized\n> > > that pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\n> > > On the other hand, n_ins_since_vacuum is reset. I think it should be\n> > > reset because otherwise we end up unnecessarily triggering autoanalyze\n> > > even when the actual number of newly inserted tuples is less than the\n> > > autoanalyze thresholds.\n> \n> In that case, conversely, we want to trigger autoanalyze ASAP because the contents in the table was changed very much?\n\nWe might want, but wouldn't keeping the current n_mod_since_analyze would make\nthings unpredictable?\n\nAlso the selectivity estimation functions already take into account the actual\nrelation size, so the estimates aren't completely off after a truncate.\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:40:17 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 11:23 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Mar 04, 2021 at 10:35:19AM +0900, Masahiko Sawada wrote:\n> >\n> > While reviewing \"autoanalyze on partitioned table\" patch, I realized\n> > that pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\n> > On the other hand, n_ins_since_vacuum is reset. I think it should be\n> > reset because otherwise we end up unnecessarily triggering autoanalyze\n> > even when the actual number of newly inserted tuples is less than the\n> > autoanalyze thresholds.\n>\n> Agreed.\n>\n> > I've attached a small patch to fix this. Please review it.\n>\n> The patch looks sensible to me, but the stats.sql (around l. 158) test should\n> be modified to also check for effect on that field.\n\nThank you for looking at the patch!\n\nAgreed. I've attached the updated patch.\n\nI was going to add tests for both n_mod_since_analyze and\nn_ins_since_analyze but it seems to require somewhat change pgstat\ncode to check n_ins_since_vacuum. Since n_ins_since_vacuum internally\nuses the counter used also for n_tup_ins whose value isn't reset at\nTRUNCATE, the values isn’t reset in some cases, for example, where a\ntable stats message for a new table has a truncation information\n(i.g., tabmsg->t_counts.t_truncated is true). For example, even if we\nadd a check of n_ins_since_vacuum in stats.sql (at L158), the value is\nnot reset by running stats.sql regression test. So in this patch, I\nadded a check just for n_mod_since_analyze.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 5 Mar 2021 13:44:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 12:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Mar 04, 2021 at 12:21:14PM +0900, Fujii Masao wrote:\n> >\n> >\n> > On 2021/03/04 11:24, Julien Rouhaud wrote:\n> > > On Thu, Mar 04, 2021 at 10:35:19AM +0900, Masahiko Sawada wrote:\n> > > >\n> > > > While reviewing \"autoanalyze on partitioned table\" patch, I realized\n> > > > that pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\n> > > > On the other hand, n_ins_since_vacuum is reset. I think it should be\n> > > > reset because otherwise we end up unnecessarily triggering autoanalyze\n> > > > even when the actual number of newly inserted tuples is less than the\n> > > > autoanalyze thresholds.\n> >\n> > In that case, conversely, we want to trigger autoanalyze ASAP because the contents in the table was changed very much?\n>\n> We might want, but wouldn't keeping the current n_mod_since_analyze would make\n> things unpredictable?\n\nRight. I guess we should manually execute ANALYZE on the table to\nrefresh the statistics if we want to do that ASAP. It seems to me it\ndoesn't make any sense to trigger autoanalyze earlier than expected.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 5 Mar 2021 13:50:24 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "\n\nOn 2021/03/04 12:40, Julien Rouhaud wrote:\n> On Thu, Mar 04, 2021 at 12:21:14PM +0900, Fujii Masao wrote:\n>>\n>>\n>> On 2021/03/04 11:24, Julien Rouhaud wrote:\n>>> On Thu, Mar 04, 2021 at 10:35:19AM +0900, Masahiko Sawada wrote:\n>>>>\n>>>> While reviewing \"autoanalyze on partitioned table\" patch, I realized\n>>>> that pg_stat_xxx_tables.n_mod_since_analyze is not reset at TRUNCATE.\n>>>> On the other hand, n_ins_since_vacuum is reset. I think it should be\n>>>> reset because otherwise we end up unnecessarily triggering autoanalyze\n>>>> even when the actual number of newly inserted tuples is less than the\n>>>> autoanalyze thresholds.\n>>\n>> In that case, conversely, we want to trigger autoanalyze ASAP because the contents in the table was changed very much?\n> \n> We might want, but wouldn't keeping the current n_mod_since_analyze would make\n> things unpredictable?\n\nI don't think so. autoanalyze still works based on the number of\nmodifications since last analyze, so ISTM that's predictable...\n\n> Also the selectivity estimation functions already take into account the actual\n> relation size, so the estimates aren't completely off after a truncate.\n\nBut the statistics is out of date and which may affect the planning badly?\nAlso if generic plan is used, it's not updated until next analyze, i.e.,\ngeneric plan created based on old statistics is used for a while.\n\nSo I'm still not sure why you want to defer autoanalyze in that case.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 5 Mar 2021 15:31:33 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 03:31:33PM +0900, Fujii Masao wrote:\n> \n> On 2021/03/04 12:40, Julien Rouhaud wrote:\n> > > In that case, conversely, we want to trigger autoanalyze ASAP because the contents in the table was changed very much?\n> > \n> > We might want, but wouldn't keeping the current n_mod_since_analyze would make\n> > things unpredictable?\n> \n> I don't think so. autoanalyze still works based on the number of\n> modifications since last analyze, so ISTM that's predictable...\n\nIf we keep n_mod_since_analyze, autoanalyze might trigger after the first write\nor might wait for a full cycle of writes, depending on the kept value. So yes\nit can make autoanalyze triggers earlier in some cases, but that's not\npredictable from the TRUNCATE even point of view.\n\n> > Also the selectivity estimation functions already take into account the actual\n> > relation size, so the estimates aren't completely off after a truncate.\n> \n> But the statistics is out of date and which may affect the planning badly?\n> Also if generic plan is used, it's not updated until next analyze, i.e.,\n> generic plan created based on old statistics is used for a while.\n> \n> So I'm still not sure why you want to defer autoanalyze in that case.\n\nI don't especially want to defer autoanalyze in that case. But an autoanalyze\nhappening quickly after a TRUNCATE is critical for performance, I'd prefer to\nfind a way to trigger autoanalyze reliably.\n\n\n",
"msg_date": "Fri, 5 Mar 2021 14:59:49 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "\n\nOn 2021/03/05 15:59, Julien Rouhaud wrote:\n> On Fri, Mar 05, 2021 at 03:31:33PM +0900, Fujii Masao wrote:\n>>\n>> On 2021/03/04 12:40, Julien Rouhaud wrote:\n>>>> In that case, conversely, we want to trigger autoanalyze ASAP because the contents in the table was changed very much?\n>>>\n>>> We might want, but wouldn't keeping the current n_mod_since_analyze would make\n>>> things unpredictable?\n>>\n>> I don't think so. autoanalyze still works based on the number of\n>> modifications since last analyze, so ISTM that's predictable...\n> \n> If we keep n_mod_since_analyze, autoanalyze might trigger after the first write\n> or might wait for a full cycle of writes, depending on the kept value. So yes\n> it can make autoanalyze triggers earlier in some cases, but that's not\n> predictable from the TRUNCATE even point of view.\n> \n>>> Also the selectivity estimation functions already take into account the actual\n>>> relation size, so the estimates aren't completely off after a truncate.\n>>\n>> But the statistics is out of date and which may affect the planning badly?\n>> Also if generic plan is used, it's not updated until next analyze, i.e.,\n>> generic plan created based on old statistics is used for a while.\n>>\n>> So I'm still not sure why you want to defer autoanalyze in that case.\n> \n> I don't especially want to defer autoanalyze in that case. But an autoanalyze\n> happening quickly after a TRUNCATE is critical for performance, I'd prefer to\n> find a way to trigger autoanalyze reliably.\n\nOne just idea is to make TRUNCATE increase n_mod_since_analyze by\nthe number of records to truncate. 
That is, we treat TRUNCATE\nin the same way as \"DELETE without WHERE\".\n\nIf the table has lots of records and is truncated, n_mod_since_analyze\nwill be increased very much and which would trigger autoanalyze soon.\nThis might be expected behavior because the statistics collected before\ntruncate is very \"different\" from the status of the table after truncate.\n\nOTOH, if the table is very small, TRUNCATE doesn't increase\nn_mod_since_analyze so much. So analyze might not be triggered soon.\nBut this might be ok because the statistics collected before truncate is\nnot so \"different\" from the status of the table after truncate.\n\nI'm not sure how much this idea is \"reliable\" and would be helpful in\npractice, though.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 5 Mar 2021 18:07:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 06:07:05PM +0900, Fujii Masao wrote:\n> \n> On 2021/03/05 15:59, Julien Rouhaud wrote:\n> > \n> > I don't especially want to defer autoanalyze in that case. But an autoanalyze\n> > happening quickly after a TRUNCATE is critical for performance, I'd prefer to\n> > find a way to trigger autoanalyze reliably.\n> \n> One just idea is to make TRUNCATE increase n_mod_since_analyze by\n> the number of records to truncate. That is, we treat TRUNCATE\n> in the same way as \"DELETE without WHERE\".\n\nYes, that's the approach I had in mind to make it more reliable.\n\n> If the table has lots of records and is truncated, n_mod_since_analyze\n> will be increased very much and which would trigger autoanalyze soon.\n> This might be expected behavior because the statistics collected before\n> truncate is very \"different\" from the status of the table after truncate.\n> \n> OTOH, if the table is very small, TRUNCATE doesn't increase\n> n_mod_since_analyze so much. So analyze might not be triggered soon.\n> But this might be ok because the statistics collected before truncate is\n> not so \"different\" from the status of the table after truncate.\n> \n> I'm not sure how much this idea is \"reliable\" and would be helpful in\n> practice, though.\n\nIt seems like a better approach as it it would have the same results on\nautovacuum as a DELETE, so +1 from me.\n\n\n",
"msg_date": "Fri, 5 Mar 2021 17:51:29 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 6:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Fri, Mar 05, 2021 at 06:07:05PM +0900, Fujii Masao wrote:\n> >\n> > On 2021/03/05 15:59, Julien Rouhaud wrote:\n> > >\n> > > I don't especially want to defer autoanalyze in that case. But an autoanalyze\n> > > happening quickly after a TRUNCATE is critical for performance, I'd prefer to\n> > > find a way to trigger autoanalyze reliably.\n> >\n> > One just idea is to make TRUNCATE increase n_mod_since_analyze by\n> > the number of records to truncate. That is, we treat TRUNCATE\n> > in the same way as \"DELETE without WHERE\".\n\nMakes sense. I had been thinking we can treat TRUNCATE as like \"DROP\nTABLE and CREATE TABLE\" in terms of the statistics but it's rather\n\"DELETE without WHERE\" as you mentioned.\n\n>\n> > If the table has lots of records and is truncated, n_mod_since_analyze\n> > will be increased very much and which would trigger autoanalyze soon.\n> > This might be expected behavior because the statistics collected before\n> > truncate is very \"different\" from the status of the table after truncate.\n> >\n> > OTOH, if the table is very small, TRUNCATE doesn't increase\n> > n_mod_since_analyze so much. So analyze might not be triggered soon.\n> > But this might be ok because the statistics collected before truncate is\n> > not so \"different\" from the status of the table after truncate.\n> >\n> > I'm not sure how much this idea is \"reliable\" and would be helpful in\n> > practice, though.\n>\n> It seems like a better approach as it it would have the same results on\n> autovacuum as a DELETE, so +1 from me.\n\nI think we can use n_live_tup for that but since it's an estimation\nvalue it doesn't necessarily have the same result as DELETE and I'm\nnot sure it's reliable.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 5 Mar 2021 22:43:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 10:43:51PM +0900, Masahiko Sawada wrote:\n> \n> I think we can use n_live_tup for that but since it's an estimation\n> value it doesn't necessarily have the same result as DELETE and I'm\n> not sure it's reliable.\n\nI agree that it's not 100% reliable, but in my experience those estimates are\nquite good and of the same order of magnitude, which should be enough for this\nuse case. It will be in any case better that simply keeping the old value, and\nI doubt that we can do better anyway.\n\n\n",
"msg_date": "Sat, 6 Mar 2021 01:13:28 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 10:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Mar 5, 2021 at 6:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Fri, Mar 05, 2021 at 06:07:05PM +0900, Fujii Masao wrote:\n> > >\n> > > On 2021/03/05 15:59, Julien Rouhaud wrote:\n> > > >\n> > > > I don't especially want to defer autoanalyze in that case. But an autoanalyze\n> > > > happening quickly after a TRUNCATE is critical for performance, I'd prefer to\n> > > > find a way to trigger autoanalyze reliably.\n> > >\n> > > One just idea is to make TRUNCATE increase n_mod_since_analyze by\n> > > the number of records to truncate. That is, we treat TRUNCATE\n> > > in the same way as \"DELETE without WHERE\".\n>\n> Makes sense. I had been thinking we can treat TRUNCATE as like \"DROP\n> TABLE and CREATE TABLE\" in terms of the statistics but it's rather\n> \"DELETE without WHERE\" as you mentioned.\n\nHmm I'm a bit confused. Executing TRANCATE against the table set\npg_class.reltuples to -1, meaning it's never yet vacuumed and the\nplanner applies 10 pages. Also, it seems to clear plan caches\ninvolving the table being truncated. The table statistics in\npg_statistic and pg_statistic_ext might be out of date but that would\nnot affect the plan badly since we assume the table has 10 pages. That\nbehavior makes me think that TRUNCATE is something like \"DROP and\nCREATE table\" in terms of statistics. I'm concerned that if we hasten\nautoanalyze after TRUNCATE, we could trigger autoanalyze soon and the\nstatistics could be out of date at the time when we insert rows enough\nto exceed autoanalyze threshold. I might be missing something though.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 8 Mar 2021 10:49:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
},
{
"msg_contents": "On Mon, Mar 08, 2021 at 10:49:20AM +0900, Masahiko Sawada wrote:\n> On Fri, Mar 5, 2021 at 10:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Mar 5, 2021 at 6:51 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > >\n> > > On Fri, Mar 05, 2021 at 06:07:05PM +0900, Fujii Masao wrote:\n> > > >\n> > > > On 2021/03/05 15:59, Julien Rouhaud wrote:\n> > > > >\n> > > > > I don't especially want to defer autoanalyze in that case. But an autoanalyze\n> > > > > happening quickly after a TRUNCATE is critical for performance, I'd prefer to\n> > > > > find a way to trigger autoanalyze reliably.\n> > > >\n> > > > One just idea is to make TRUNCATE increase n_mod_since_analyze by\n> > > > the number of records to truncate. That is, we treat TRUNCATE\n> > > > in the same way as \"DELETE without WHERE\".\n> >\n> > Makes sense. I had been thinking we can treat TRUNCATE as like \"DROP\n> > TABLE and CREATE TABLE\" in terms of the statistics but it's rather\n> > \"DELETE without WHERE\" as you mentioned.\n> \n> Hmm I'm a bit confused. Executing TRANCATE against the table set\n> pg_class.reltuples to -1, meaning it's never yet vacuumed and the\n> planner applies 10 pages. Also, it seems to clear plan caches\n> involving the table being truncated. The table statistics in\n> pg_statistic and pg_statistic_ext might be out of date but that would\n> not affect the plan badly since we assume the table has 10 pages. That\n> behavior makes me think that TRUNCATE is something like \"DROP and\n> CREATE table\" in terms of statistics. I'm concerned that if we hasten\n> autoanalyze after TRUNCATE, we could trigger autoanalyze soon and the\n> statistics could be out of date at the time when we insert rows enough\n> to exceed autoanalyze threshold. 
I might be missing something though.\n\nAh, I mentioned previously that the planner would already come up with sensible\nestimates as it takes into account the relation size, but if truncates actually\nsets pg_class.reltuples to -1 then indeed it's kind of behaving as a DROP/CREATE\nfor the size estimate.\n\nBut the previous stats will be kept so estimates will still be done according\nto what used to be in that table, but there's no guarantee that further writes\non that table will have the same pattern as before.\n\nMaybe we should also delete the related pg_statistic entries, reset\nn_mod_since_analyze and let autoanalyze behave as it would do after a\nDROP/CREATE?\n\n\n",
"msg_date": "Mon, 8 Mar 2021 12:34:45 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: n_mod_since_analyze isn't reset at table truncation"
}
] |
[
{
"msg_contents": "Hi, hackers\r\n\r\nDuring installation from source code, I created a build directory separate from the source tree, and execute the following command in the build directory:\r\n/home/postgres/postgresql-13.2/configure -- enable-coverage\r\nmake\r\nmake check\r\nmake coverage-html\r\n\r\n\r\nHowever, while executing make coverage-html, it failed with the following error messages:\r\n/bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . -d /home/postgres/postgresql-13.2/ -o lcve_base.info\r\n...\r\ngeninfo: ERROR: no .gcno files found in /home/postgres/postgresql-13.2/!\r\nmake: *** [lcov_base.info] Error 255\r\nmake: *** Deleting file 'lcov_base.info'\r\n\r\n\r\n\r\n\r\nif I repeat the above steps within the source tree directory, make coverage-html works fine. From the official documentation, I didn't find any limitations for \"make coverage-html\", not sure if I miss something.\r\n\r\n\r\nthanks\r\nwalker\r\n \nHi, hackersDuring installation from source code, I created a build directory separate from the source tree, and execute the following command in the build directory:/home/postgres/postgresql-13.2/configure -- enable-coveragemakemake checkmake coverage-htmlHowever, while executing make coverage-html, it failed with the following error messages:/bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . -d /home/postgres/postgresql-13.2/ -o lcve_base.info...geninfo: ERROR: no .gcno files found in /home/postgres/postgresql-13.2/!make: *** [lcov_base.info] Error 255make: *** Deleting file 'lcov_base.info'if I repeat the above steps within the source tree directory, make coverage-html works fine. From the official documentation, I didn't find any limitations for \"make coverage-html\", not sure if I miss something.thankswalker",
"msg_date": "Thu, 4 Mar 2021 10:33:34 +0800",
"msg_from": "\"=?ISO-8859-1?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": true,
"msg_subject": "make coverage-html would fail within build directory separate from\n source tree"
},
{
"msg_contents": "On 2021-Mar-04, walker wrote:\n\n> Hi, hackers\n> \n> During installation from source code, I created a build directory separate from the source tree, and execute the following command in the build directory:\n> /home/postgres/postgresql-13.2/configure -- enable-coverage\n> make\n> make check\n> make coverage-html\n> \n> \n> However, while executing make coverage-html, it failed with the following error messages:\n> /bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . -d /home/postgres/postgresql-13.2/ -o lcve_base.info\n> ...\n> geninfo: ERROR: no .gcno files found in /home/postgres/postgresql-13.2/!\n> make: *** [lcov_base.info] Error 255\n> make: *** Deleting file 'lcov_base.info'\n\nHmm, it works fine for me. config.log says I do this (in\n/pgsql/build/master-coverage):\n\n $ /pgsql/source/REL_13_STABLE/configure --enable-debug --enable-depend --enable-cassert --enable-coverage --cache-file=/home/alvherre/run/pgconfig.master-coverage.cache --enable-thread-safety --enable-tap-tests --with-python --with-perl --with-tcl --with-openssl --with-libxml --with-tclconfig=/usr/lib/tcl8.6 PYTHON=/usr/bin/python3 --prefix=/pgsql/install/master-coverage --with-pgport=55451\n\nI do run \"make install\" too, though (and \"make -C contrib install\").\nNot sure if that makes a difference. \nBut for sure there are no .gcno files in the source dir -- they're all\nin the build dir.\n\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"La persona que no quer�a pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n",
"msg_date": "Thu, 4 Mar 2021 10:31:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "On 2021-Mar-04, Alvaro Herrera wrote:\n\n> On 2021-Mar-04, walker wrote:\n> \n> > Hi, hackers\n> > \n> > During installation from source code, I created a build directory separate from the source tree, and execute the following command in the build directory:\n> > /home/postgres/postgresql-13.2/configure -- enable-coverage\n> > make\n> > make check\n> > make coverage-html\n> > \n> > \n> > However, while executing make coverage-html, it failed with the following error messages:\n> > /bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . -d /home/postgres/postgresql-13.2/ -o lcve_base.info\n> > ...\n> > geninfo: ERROR: no .gcno files found in /home/postgres/postgresql-13.2/!\n\n\"make coverage-html\" outputs this: note that I get a WARNING about the\nsource directory rather than an ERROR:\n\n/usr/bin/lcov --gcov-tool /usr/bin/gcov -q --no-external -c -i -d . -d /pgsql/source/REL_13_STABLE/ -o lcov_base.info\ngeninfo: WARNING: no .gcno files found in /pgsql/source/REL_13_STABLE/ - skipping!\n/usr/bin/lcov --gcov-tool /usr/bin/gcov -q --no-external -c -d . -d /pgsql/source/REL_13_STABLE/ -o lcov_test.info\ngeninfo: WARNING: no .gcda files found in /pgsql/source/REL_13_STABLE/ - skipping!\nrm -rf coverage\n/usr/bin/genhtml -q --legend -o coverage --title='PostgreSQL 13.2' --num-spaces=4 lcov_base.info lcov_test.info\ntouch coverage-html-stamp\n\n(In my system, /pgsql is a symlink to /home/alvhere/Code/pgsql/)\n\n$ geninfo --version\ngeninfo: LCOV version 1.13\n\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 4 Mar 2021 10:35:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "thanks for your reply. it indeed that there are no .gcon files in source tree directory, they're in build tree directory, which results in failures.\r\n\r\n\r\nThat's a bit wired.\r\n\r\n\r\nAdd more detailed testing steps:\r\nmkdir build_dir\r\ncd build_dir\r\n/home/postgres/postgresql-13.2/configure -- enable-coverage\r\nmake\r\nmake check\r\nmake coverage-html\r\n\r\n\r\nthanks\r\nwalker\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Alvaro Herrera\" <alvherre@alvh.no-ip.org>;\r\nDate: Thu, Mar 4, 2021 09:31 PM\r\nTo: \"walker\"<failaway@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: Re: make coverage-html would fail within build directory separate from source tree\r\n\r\n\r\n\r\nOn 2021-Mar-04, walker wrote:\r\n\r\n> Hi, hackers\r\n> \r\n> During installation from source code, I created a build directory separate from the source tree, and execute the following command in the build directory:\r\n> /home/postgres/postgresql-13.2/configure -- enable-coverage\r\n> make\r\n> make check\r\n> make coverage-html\r\n> \r\n> \r\n> However, while executing make coverage-html, it failed with the following error messages:\r\n> /bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . -d /home/postgres/postgresql-13.2/ -o lcve_base.info\r\n> ...\r\n> geninfo: ERROR: no .gcno files found in /home/postgres/postgresql-13.2/!\r\n> make: *** [lcov_base.info] Error 255\r\n> make: *** Deleting file 'lcov_base.info'\r\n\r\nHmm, it works fine for me. 
config.log says I do this (in\r\n/pgsql/build/master-coverage):\r\n\r\n $ /pgsql/source/REL_13_STABLE/configure --enable-debug --enable-depend --enable-cassert --enable-coverage --cache-file=/home/alvherre/run/pgconfig.master-coverage.cache --enable-thread-safety --enable-tap-tests --with-python --with-perl --with-tcl --with-openssl --with-libxml --with-tclconfig=/usr/lib/tcl8.6 PYTHON=/usr/bin/python3 --prefix=/pgsql/install/master-coverage --with-pgport=55451\r\n\r\nI do run \"make install\" too, though (and \"make -C contrib install\").\r\nNot sure if that makes a difference. \r\nBut for sure there are no .gcno files in the source dir -- they're all\r\nin the build dir.\r\n\r\n\r\n-- \r\nÁlvaro Herrera Valdivia, Chile\r\n\"La persona que no quería pecar / estaba obligada a sentarse\r\n en duras y empinadas sillas / desprovistas, por cierto\r\n de blandos atenuantes\" (Patricio Vogel)\nthanks for your reply. it indeed that there are no .gcon files in source tree directory, they're in build tree directory, which results in failures.That's a bit wired.Add more detailed testing steps:mkdir build_dircd build_dir/home/postgres/postgresql-13.2/configure -- enable-coveragemakemake checkmake coverage-htmlthankswalker------------------ Original ------------------From: \"Alvaro Herrera\" <alvherre@alvh.no-ip.org>;Date: Thu, Mar 4, 2021 09:31 PMTo: \"walker\"<failaway@qq.com>;Cc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;Subject: Re: make coverage-html would fail within build directory separate from source treeOn 2021-Mar-04, walker wrote:> Hi, hackers> > During installation from source code, I created a build directory separate from the source tree, and execute the following command in the build directory:> /home/postgres/postgresql-13.2/configure -- enable-coverage> make> make check> make coverage-html> > > However, while executing make coverage-html, it failed with the following error messages:> /bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . 
-d /home/postgres/postgresql-13.2/ -o lcve_base.info> ...> geninfo: ERROR: no .gcno files found in /home/postgres/postgresql-13.2/!> make: *** [lcov_base.info] Error 255> make: *** Deleting file 'lcov_base.info'Hmm, it works fine for me. config.log says I do this (in/pgsql/build/master-coverage): $ /pgsql/source/REL_13_STABLE/configure --enable-debug --enable-depend --enable-cassert --enable-coverage --cache-file=/home/alvherre/run/pgconfig.master-coverage.cache --enable-thread-safety --enable-tap-tests --with-python --with-perl --with-tcl --with-openssl --with-libxml --with-tclconfig=/usr/lib/tcl8.6 PYTHON=/usr/bin/python3 --prefix=/pgsql/install/master-coverage --with-pgport=55451I do run \"make install\" too, though (and \"make -C contrib install\").Not sure if that makes a difference. But for sure there are no .gcno files in the source dir -- they're allin the build dir.-- Álvaro Herrera Valdivia, Chile\"La persona que no quería pecar / estaba obligada a sentarse en duras y empinadas sillas / desprovistas, por cierto de blandos atenuantes\" (Patricio Vogel)",
"msg_date": "Thu, 4 Mar 2021 22:00:11 +0800",
"msg_from": "\"=?utf-8?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "On 2021-Mar-04, walker wrote:\n\n> thanks for your reply. it indeed that there are no .gcon files in source tree directory, they're in build tree directory, which results in failures.\n> \n> \n> That's a bit wired.\n> \n> \n> Add more detailed testing steps:\n> mkdir build_dir\n> cd build_dir\n> /home/postgres/postgresql-13.2/configure -- enable-coverage\n> make\n> make check\n> make coverage-html\n\nHmm, my build dir is not inside the source dir -- is yours? Maybe that\nmakes a difference? Also, what version of lcov do you have?\n\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:05:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "The same, the build directory is outside the source tree.\r\n\r\n\r\nthe version of lcov is 1.10\r\n\r\n\r\nthanks\r\nwalker\r\n\r\n\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Alvaro Herrera\" <alvherre@alvh.no-ip.org>;\r\nDate: Thu, Mar 4, 2021 10:05 PM\r\nTo: \"walker\"<failaway@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: Re: make coverage-html would fail within build directory separate from source tree\r\n\r\n\r\n\r\nOn 2021-Mar-04, walker wrote:\r\n\r\n> thanks for your reply. it indeed seems that there are no .gcno files in the source tree directory; they're in the build tree directory, which results in failures.\r\n> \r\n> \r\n> That's a bit weird.\r\n> \r\n> \r\n> Add more detailed testing steps:\r\n> mkdir build_dir\r\n> cd build_dir\r\n> /home/postgres/postgresql-13.2/configure --enable-coverage\r\n> make\r\n> make check\r\n> make coverage-html\r\n\r\nHmm, my build dir is not inside the source dir -- is yours? Maybe that\r\nmakes a difference? Also, what version of lcov do you have?\r\n\r\n\r\n-- \r\nÁlvaro Herrera 39°49'30\"S 73°17'W\r\n\"Someone said that it is at least an order of magnitude more work to do\r\nproduction software than a prototype. I think he is wrong by at least\r\nan order of magnitude.\" (Brian Kernighan)",
"msg_date": "Thu, 4 Mar 2021 22:09:17 +0800",
"msg_from": "\"=?utf-8?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "On 2021-Mar-04, walker wrote:\n\n> The same, the build directory is outside the source tree.\n> \n> \n> the version of lcov is 1.10\n\nThat seems *really* ancient. Please try with a fresher version.\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:20:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hmm, my build dir is not inside the source dir -- is yours?\n\nI recall that the gcc build instructions strongly warn against that\nsort of setup. Maybe we should too?\n\nActually, our build instructions already say this specifically:\n\n You can also run configure in a directory outside the source tree, and\n then build there, if you want to keep the build directory separate\n from the original source files. This procedure is called a VPATH\n build ...\n\nMaybe \"outside the source tree\" needs to be emphasized a bit more.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 10:21:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
},
{
"msg_contents": "Thanks for your reminder, Tom.\r\n\r\n\r\nBefore I understood VPATH well, I always thought \"outside the source tree\" meant the build tree is not under the source tree.\r\n\r\n\r\nHere, because of the VPATH build, I assumed the build tree should be a subdirectory of the source tree. According to this rule, I retried this scenario with the old version of lcov (1.10):\r\ntar -zxf source_dir.tar.gz\r\ncd source_dir\r\nmkdir build_dir && cd build_dir\r\n../configure --enable-coverage\r\nmake\r\nmake check\r\nmake coverage-html\r\n\r\n\r\nAnd \"make coverage-html\" works fine, with no errors or warnings; the output is like this:\r\n/bin/lcov --gcov-tool /bin/gcov -q --no-external -c -i -d . -d source_dir/build_dir/../ -o lcov_base.info\r\n/bin/lcov --gcov-tool /bin/gcov -q --no-external -c -d . -d source_dir/build_dir/../ -o lcov_test.info\r\nrm -rf coverage\r\n/bin/genhtml -q --legend -o coverage --title='PostgreSQL 13.2' --num-spaces=4 lcov_base.info lcov_test.info\r\ntouch coverage-html-stamp\r\n\r\n\r\nthanks\r\nwalker\r\n\r\n\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>;\r\nDate: Thu, Mar 4, 2021 11:21 PM\r\nTo: \"Alvaro Herrera\"<alvherre@alvh.no-ip.org>;\r\nCc: \"walker\"<failaway@qq.com>;\"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: Re: make coverage-html would fail within build directory separate from source tree\r\n\r\n\r\n\r\nAlvaro Herrera <alvherre@alvh.no-ip.org> writes:\r\n> Hmm, my build dir is not inside the source dir -- is yours?\r\n\r\nI recall that the gcc build instructions strongly warn against that\r\nsort of setup. Maybe we should too?\r\n\r\nActually, our build instructions already say this specifically:\r\n\r\n    You can also run configure in a directory outside the source tree, and\r\n    then build there, if you want to keep the build directory separate\r\n    from the original source files. This procedure is called a VPATH\r\n    build ...\r\n\r\nMaybe \"outside the source tree\" needs to be emphasized a bit more.\r\n\r\n\t\t\tregards, tom lane",
"msg_date": "Fri, 5 Mar 2021 09:55:53 +0800",
"msg_from": "\"=?ISO-8859-1?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": true,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
}
] |
[
{
"msg_contents": "Hi PG Team,\n\nWhat would be the best way to provide feedback on the PostgreSQL\ndocumentation? Is it a specific email group or email ID? Please could you\nguide me.\n\nWarm regards,\nAjay Patel",
"msg_date": "Thu, 4 Mar 2021 00:09:50 -0500",
"msg_from": "Ajay Patel <mailajaypatel@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to provide Documentation Feedback"
},
{
"msg_contents": "Ajay Patel <mailajaypatel@gmail.com> writes:\n> What would be the best way to provide feedback on the PostgreSQL\n> documentation? Is it a specific email group or email ID?\n\npgsql-docs@lists.postgresql.org is the place.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 00:12:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to provide Documentation Feedback"
},
{
"msg_contents": "Thanks tom. Much appreciated.\n\nOn Thu, Mar 4, 2021 at 12:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ajay Patel <mailajaypatel@gmail.com> writes:\n> > What would be the best way to provide feedback on the PostgreSQL\n> > documentation? Is it a specific email group or email ID?\n>\n> pgsql-docs@lists.postgresql.org is the place.\n>\n> regards, tom lane\n>",
"msg_date": "Thu, 4 Mar 2021 00:22:36 -0500",
"msg_from": "Ajay Patel <mailajaypatel@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How to provide Documentation Feedback"
}
] |
[
{
"msg_contents": "I posted this earlier at \nhttps://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b@iki.fi, \nand that led to removing FE/BE protocol version 2 support. That's been \ncommitted now, so here's the COPY FROM patch again, rebased. To recap:\n\nPreviously COPY FROM could not look ahead in the COPY stream, because in \nthe v2 protocol, it had to detect the end-of-copy marker correctly. With \nthe v2 protocol gone, that's no longer an issue, and we can simplify the \nparsing slightly. Simpler should also mean faster, but I haven't tried \nmeasuring that.\n\n- Heikki",
"msg_date": "Thu, 4 Mar 2021 11:13:40 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Force lookahead in COPY FROM parsing"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 5:13 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> I posted this earlier at\n> https://www.postgresql.org/message-id/9ec25819-0a8a-d51a-17dc-4150bb3cca3b@iki.fi,\n> and that led to removing FE/BE protocol version 2 support. That's been\n> committed now, so here's COPY FROM patch again, rebased.\n\nLooks good to me. Just a couple minor things:\n\n+ * Look at the next character. If we're at EOF, c2 will wind\n+ * up as '\\0' because of the guaranteed pad of raw_buf.\n */\n- IF_NEED_REFILL_AND_NOT_EOF_CONTINUE(0);\n-\n- /* get next char */\n c = copy_raw_buf[raw_buf_ptr];\n\nThe new comment seems copy-pasted from the c2 statements further down.\n\n- if (raw_buf_ptr >= copy_buf_len || need_data)\n+#define COPY_READ_LINE_LOOKAHEAD 4\n+ if (raw_buf_ptr + COPY_READ_LINE_LOOKAHEAD >= copy_buf_len)\n\nIs this #define deliberately put down here rather than at the top of the\nfile?\n\n- * of the buffer and then we load more data after that. This case occurs\nonly\n- * when a multibyte character crosses a bufferload boundary.\n+ * of the buffer and then we load more data after that.\n\nIs the removed comment really invalidated by this patch? I figured it was\nsomething not affected until the patch to do the encoding conversion in\nlarger chunks.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 4 Mar 2021 10:37:50 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Force lookahead in COPY FROM parsing"
},
{
"msg_contents": "The cfbot thinks this patch no longer applies, but it works for me, so\nstill set to RFC. It looks like it's because the thread to remove the v2\nFE/BE protocol was still attached to the commitfest entry. I've deleted\nthat, so let's see if that helps.\n\nTo answer the side question of whether it makes any difference in\nperformance, I used the blackhole AM [1] to isolate the copy code path as\nmuch as possible. Forcing lookahead seems to not make a noticeable\ndifference (min of 5 runs):\n\nmaster:\n306ms\n\nforce lookahead:\n304ms\n\n[1] https://github.com/michaelpq/pg_plugins/tree/master/blackhole_am\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 18 Mar 2021 11:09:34 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Force lookahead in COPY FROM parsing"
},
{
"msg_contents": "On 18/03/2021 17:09, John Naylor wrote:\n> The cfbot thinks this patch no longer applies, but it works for me, so \n> still set to RFC. It looks like it's because the thread to remove the v2 \n> FE/BE protocol was still attached to the commitfest entry. I've deleted \n> that, so let's see if that helps.\n\nIt was now truly bitrotted by commit f82de5c46b (the encoding conversion \nchanges). Rebase attached.\n\n> To answer the side question of whether it makes any difference in \n> performance, I used the blackhole AM [1] to isolate the copy code path \n> as much as possible. Forcing lookahead seems to not make a noticeable \n> difference (min of 5 runs):\n> \n> master:\n> 306ms\n> \n> force lookahead:\n> 304ms\n> \n> I forgot to mention the small detail of what the test was:\n> \n> create extension blackhole_am;\n> create table blackhole_tab (a text) using blackhole_am ;\n> copy blackhole_tab from program 'for i in {1..100}; do cat /path/to/UTF-8\\ Sampler.htm ; done;' ;\n> \n> ...where the .htm file is at http://kermitproject.org/utf8.html\n\nOk, I wouldn't expect to see much difference in that test, it gets \ndrowned in all the other parsing overhead. I tested this now with this:\n\ncopy (select repeat('x', 10000) from generate_series(1, 100000)) to \n'/tmp/copydata-x.txt'\ncreate table blackhole_tab (a text);\n\ncopy blackhole_tab from '/tmp/copydata-x.txt' where false;\n\nI took the min of 5 runs of that COPY FROM statement:\n\nmaster:\n4107 ms\n\nv3-0001-Simplify-COPY-FROM-parsing-by-forcing-lookahead.patch:\n3172 ms\n\nI was actually surprised it was so effective on that test, I expected a \nsmall but noticeable gain. But I'll take it.\n\nReplying to your earlier comments (sorry for the delay):\n\n> Looks good to me. Just a couple minor things:\n> \n> + * Look at the next character. 
If we're at EOF, c2 will wind\n> + * up as '\\0' because of the guaranteed pad of raw_buf.\n> */\n> - IF_NEED_REFILL_AND_NOT_EOF_CONTINUE(0);\n> -\n> - /* get next char */\n> c = copy_raw_buf[raw_buf_ptr];\n> \n> The new comment seems copy-pasted from the c2 statements further down.\n\nFixed.\n\n> - if (raw_buf_ptr >= copy_buf_len || need_data)\n> +#define COPY_READ_LINE_LOOKAHEAD 4\n> + if (raw_buf_ptr + COPY_READ_LINE_LOOKAHEAD >= copy_buf_len)\n> \n> Is this #define deliberately put down here rather than at the top of the file?\n\nYeah, it's only used right here locally. Matter of taste, but I'd prefer \nto keep it here.\n\n> - * of the buffer and then we load more data after that. This case occurs only\n> - * when a multibyte character crosses a bufferload boundary.\n> + * of the buffer and then we load more data after that.\n> \n> Is the removed comment really invalidated by this patch? I figured it was something not affected until the patch to do the encoding conversion in larger chunks.\n\nNot sure anymore, but this is moot now, since the other patch was committed.\n\nThanks for the review and the testing!\n\n- Heikki",
"msg_date": "Thu, 1 Apr 2021 23:47:37 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Force lookahead in COPY FROM parsing"
},
{
"msg_contents": "On Thu, Apr 1, 2021 at 4:47 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Ok, I wouldn't expect to see much difference in that test, it gets\n> drowned in all the other parsing overhead. I tested this now with this:\n>\n> copy (select repeat('x', 10000) from generate_series(1, 100000)) to\n> '/tmp/copydata-x.txt'\n> create table blackhole_tab (a text);\n>\n> copy blackhole_tab from '/tmp/copydata-x.txt' where false;\n>\n> I took the min of 5 runs of that COPY FROM statement:\n>\n> master:\n> 4107 ms\n>\n> v3-0001-Simplify-COPY-FROM-parsing-by-forcing-lookahead.patch:\n> 3172 ms\n>\n> I was actually surprised it was so effective on that test, I expected a\n> small but noticeable gain. But I'll take it.\n\nNice! With this test on my laptop I only get a 7-8% increase, but that's much\nbetter than what I saw before.\n\nI have nothing further so it's RFC. The patch is pretty simple compared to\nthe earlier ones, but is it worth running the fuzzer again as added insurance?\n\nAs an aside, I noticed the URL near the top of copyfromparse.c that\nexplains a detail of macros has moved from\n\nhttp://www.cit.gu.edu.au/~anthony/info/C/C.macros\n\nto\n\nhttps://antofthy.gitlab.io/info/C/C_macros.txt\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 2 Apr 2021 13:21:59 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Force lookahead in COPY FROM parsing"
},
{
"msg_contents": "On 02/04/2021 20:21, John Naylor wrote:\n> I have nothing further so it's RFC. The patch is pretty simple compared \n> to the earlier ones, but is worth running the fuzzer again as added \n> insurance?\n\nGood idea. I did that, and indeed it revealed bugs. If the client sent \njust a single byte in one CopyData message, we only loaded that one byte \ninto the buffer, instead of the full 4 bytes needed for lookahead. \nAttached is a new version that fixes that.\n\nUnfortunately, that's not the end of it. Consider the byte sequence \n\"\\.<NL><some invalid bytes>\" appearing at the end of the input. We \nshould detect the end-of-copy marker \\. and stop reading without \ncomplaining about the garbage after the end-of-copy marker. That doesn't \nwork if we force 4 bytes of lookahead; the invalid byte sequence fits in \nthe lookahead window, so we will try to convert it.\n\nI'm sure that can be fixed, for example by adding special handling for \nthe last few bytes of the input. But it needs some more thinking, this \npatch isn't quite ready to be committed yet :-(.\n\n- Heikki",
"msg_date": "Tue, 6 Apr 2021 20:50:11 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Force lookahead in COPY FROM parsing"
}
] |
[
{
"msg_contents": "Hi, hackers\r\n\r\n\r\n\r\ncd source_dir\r\n./configure --enable-tap-tests --with-wal-blocksize=16\r\nmake world\r\nmake install-world\r\ncd source_dir/src/test/recovery\r\nmake check PROVE_TESTS='t/011_crash_recovery.pl' PROVE_FLAGS='--verbose'\r\n\r\n\r\nthe output of the last command is:\r\n011_crash_recovery.pl ..\r\n1..3\r\nok 1 - own xid is in-progress\r\nnot ok 2 - new xid after restart is greater\r\n\r\n\r\n# Failed test 'new xid after restart is greater'\r\n# at t/011_crash_recovery.pl line 61\r\n# '485'\r\n# >\r\n# '485'\r\nnot ok 3 - xid is aborted after crash\r\n\r\n\r\n# Failed test 'xid is aborted after crash'\r\n# at t/011_crash_recovery.pl line 65.\r\n# got: 'committed'\r\n# expected: 'aborted'\r\n# Looks like you failed 2 tests of 3.\r\nDubious, test returned 2 (wstat 512, 0x200)\r\nFailed 2/3 subtests\r\n......\r\n\r\n\r\n\r\n\r\nBut if I modify something in t/011_crash_recovery.pl, the perl script works fine, as follows:\r\nis($node->safe_psql('postgres', qq[SELECT pg_xact_status('$xid');]),\r\n   'in progress', 'own xid is in-progress');\r\n\r\n\r\nsleep(1); # newly added: make sure the CREATE TABLE XLOG record gets flushed to the WAL segment file on disk.\r\n\r\n\r\n# Crash and restart the postmaster\r\n$node->stop('immediate');\r\n$node->start;\r\n\r\n\r\n\r\n\r\nI think the problem is that before the crash (simulated by stopping in immediate mode), the XLOG for \"create table mine\" didn't get flushed to the WAL file on disk. If we instead delay some time, e.g. 200ms or more, after issuing the CREATE TABLE, the data in the WAL buffer should in theory be written to disk by the WAL writer.\r\n\r\n\r\nHowever, I'm not sure of the root cause. What's the difference between wal_blocksize=8k and wal_blocksize=16k when flushing WAL buffer data to disk?\r\n\r\n\r\nthanks\r\nwalker",
"msg_date": "Thu, 4 Mar 2021 22:34:38 +0800",
"msg_from": "\"=?ISO-8859-1?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": true,
"msg_subject": "011_crash_recovery.pl failes using wal_block_size=16K"
},
{
"msg_contents": "Oops! I forgot that the issue starts from this mail.\n\nAt Thu, 4 Mar 2021 22:34:38 +0800, \"walker\" <failaway@qq.com> wrote in \n> 011_crash_recovery.pl ..\n> 1..3\n> ok 1 - own xid is in-progress\n> not ok 2 - new xid after restart is greater\n\n> But if I modified something in t/011_crash_recovery.pl, this perl script works fine, as follows:\n> is($node->safe_psql('postgres'), qq[SELECT pg_xact_status('$xid');]),\n> 'in progress', 'own xid is in-progress');\n> \n> \n> sleep(1); # here new added, just make sure the CREATE TABLE XLOG can be flushed into WAL segment file on disk.\n\nThe sleep let the unwriten WAL records go out to disk (buffer).\n\n> I think the problem is that before crash(simulated by stop with immediate mode), the XLOG of \"create table mine\" didn't get flushed into wal file on disk. Instead, if delay some time, e.g. 200ms, or more after issue create table, in theory, the data in wal buffer should be written to disk by wal writer.\n\nRight.\n\n> However, I'm not sure the root cause. what's the difference between wal_blocksize=8k and wal_blocksize=16k while flushing wal buffer data to disk?\n\nI'm sorry that I didn't follow this message. However, the explanation\nis in the following mail.\n\nhttps://www.postgresql.org/message-id/20210305.135342.384699732619433016.horikyota.ntt%40gmail.com\n\nIn short, the doubled block size prevents wal-writes from happen.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 17:37:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl failes using wal_block_size=16K"
}
] |
[
{
"msg_contents": "I installed a newer version of lcov (1.13); it works fine, with the same WARNING as yours.\r\n\r\n\r\nmuch appreciated\r\n\r\n\r\nthanks\r\nwalker\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Alvaro Herrera\" <alvherre@alvh.no-ip.org>;\r\nDate: Thu, Mar 4, 2021 10:20 PM\r\nTo: \"walker\"<failaway@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: Re: make coverage-html would fail within build directory separate from source tree\r\n\r\n\r\n\r\nOn 2021-Mar-04, walker wrote:\r\n\r\n> The same, the build directory is outside the source tree.\r\n> \r\n> \r\n> the version of lcov is 1.10\r\n\r\nThat seems *really* ancient. Please try with a fresher version.\r\n\r\n\r\n-- \r\nÁlvaro Herrera 39°49'30\"S 73°17'W\r\n\"How amazing is that? I call it a night and come back to find that a bug has\r\nbeen identified and patched while I sleep.\" (Robert Davidson)\r\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php",
"msg_date": "Thu, 4 Mar 2021 23:06:07 +0800",
"msg_from": "\"=?utf-8?B?d2Fsa2Vy?=\" <failaway@qq.com>",
"msg_from_op": true,
"msg_subject": "Re: make coverage-html would fail within build directory separate\n from source tree"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen one tests postgres in some of the popular CI systems (all that\nuse docker for windows), some of the tests fail in weird ways. Like\n\nhttps://www.postgresql.org/message-id/20210303052011.ybplxw6q4tafwogk%40alap3.anarazel.de\n\n> t/003_recovery_targets.pl ............ 7/9\n> # Failed test 'multiple conflicting settings'\n> # at t/003_recovery_targets.pl line 151.\n> \n> # Failed test 'recovery end before target reached is a fatal error'\n> # at t/003_recovery_targets.pl line 177.\n> t/003_recovery_targets.pl ............ 9/9 # Looks like you failed 2 tests of 9.\n> t/003_recovery_targets.pl ............ Dubious, test returned 2 (wstat 512, 0x200)\n> Failed 2/9 subtests\n> \n> I think it's pretty dangerous if we have a substantial number of tests\n> that aren't run on windows - I think a lot of us just assume that the\n> BF would catch windows specific problems...\n\nA lot of debugging later I figured out that the problem is that postgres\ndecides not to write anything to stderr, but to send everything to the\nwindows event log instead. This includes error messages when starting\npostgres with wrong parameters or such...\n\nThe reason for that is that elog.c and pg_ctl.c use\nsrc/port/win32security.c:pgwin32_is_service() to detect whether they're\nrunning as a service:\n\nstatic void\nsend_message_to_server_log(ErrorData *edata)\n...\n\t\t/*\n\t\t * In a win32 service environment, there is no usable stderr. Capture\n\t\t * anything going there and write it to the eventlog instead.\n\t\t *\n\t\t * If stderr redirection is active, it was OK to write to stderr above\n\t\t * because that's really a pipe to the syslogger process.\n\t\t */\n\t\telse if (pgwin32_is_service())\n\t\t\twrite_eventlog(edata->elevel, buf.data, buf.len);\n..\nvoid\nwrite_stderr(const char *fmt,...)\n...\n\t/*\n\t * On Win32, we print to stderr if running on a console, or write to\n\t * eventlog if running as a service\n\t */\n\tif (pgwin32_is_service())\t/* Running as a service */\n\t{\n\t\twrite_eventlog(ERROR, errbuf, strlen(errbuf));\n\n\nbut pgwin32_is_service() doesn't actually reliably detect if running as\na service - it's a heuristic that also triggers when running postgres\nwithin a windows docker container (presumably because that itself is run\nfrom within a service?).\n\n\nISTM that that's a problem, and is likely to become more of a problem\ngoing forward (assuming that docker on windows will become more\npopular).\n\n\nMy opinion is that the whole attempt at guessing whether we are running\nas a service is a bad idea. This isn't the first time this has been a problem,\nsee e.g. [1].\n\nWhy don't we instead have pgwin32_doRegister() include a parameter that\nindicates we're running as a service and remove all the heuristics?\n\n\nI tried to look around to see if there's a simple way to drop the\nproblematic memberships that trigger pgwin32_is_service() - but there\nseem to be no commandline tools doing so (but there are C APIs).\n\nDoes anybody have an alternative way of fixing this?\n\n\nGreetings,\n\nAndres Freund\n\n[1]\ncommit ff30aec759bdc4de78912d91f650ec8fd95ff6bc\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate:   2017-03-17 11:14:01 +0200\n\n    Fix and simplify check for whether we're running as Windows service.\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:08:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "\nOn 3/4/21 2:08 PM, Andres Freund wrote:\n> [...] pgwin32_is_service() doesn't actually reliably detect if running as\n> a service - it's a heuristic that also triggers when running postgres\n> within a windows docker container (presumably because that itself is run\n> from within a service?).\n>\n>\n> ISTM that that's a problem, and is likely to become more of a problem\n> going forward (assuming that docker on windows will become more\n> popular).\n>\n>\n> My opinion is that the whole attempt at guessing whether we are running\n> as a service is a bad idea. This isn't the first time to be a problem,\n> see e.g. [1].\n>\n> Why don't we instead have pgwin32_doRegister() include a parameter that\n> indicates we're running as a service and remove all the heuristics?\n\n\n\nI assume you mean a postmaster parameter, that would be set via pg_ctl?\nSeems reasonable.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 4 Mar 2021 14:33:05 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 8:33 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 3/4/21 2:08 PM, Andres Freund wrote:\n> > [...] pgwin32_is_service() doesn't actually reliably detect if running as\n> > a service - it's a heuristic that also triggers when running postgres\n> > within a windows docker container (presumably because that itself is run\n> > from within a service?).\n> >\n> >\n> > ISTM that that's a problem, and is likely to become more of a problem\n> > going forward (assuming that docker on windows will become more\n> > popular).\n> >\n> >\n> > My opinion is that the whole attempt at guessing whether we are running\n> > as a service is a bad idea. This isn't the first time to be a problem,\n> > see e.g. [1].\n> >\n> > Why don't we instead have pgwin32_doRegister() include a parameter that\n> > indicates we're running as a service and remove all the heuristics?\n>\n>\n>\n> I assume you mean a postmaster parameter, that would be set via pg_ctl?\n> Seems reasonable.\n\nThe problem with doing it at register time is that everybody who\nbuilds an installer for PostgreSQL will then have to do it in their\nown registration (I'm pretty sure most of them don't use pg_ctl\nregister).\n\nThe same thing in pgwin32_doRunAsService() might help with that. But\nthen we'd have to figure out what to do if pg_ctl fails prior to\nreaching that point... There aren't that many such paths, but there\nare some.\n\nJust throwing out ideas without spending time thinking about it, maybe\nlog to *both* in the case when we pick it by autodetection?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 4 Mar 2021 21:08:30 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-04 21:08:30 +0100, Magnus Hagander wrote:\n> The problem with doing it at register time is that everybody who\n> builds an installer for PostgreSQL will then have to do it in their\n> own registration (I'm pretty sure most of them don't use pg_ctl\n> register).\n\nWell, hm, maybe they should change that?\n\n\n> The same thing in pgwin32_doRunAsService() might help with that.\n\nWhat do you mean by this?\n\n\n> But then we'd have to figure out what to do if pg_ctl fails prior to\n> reaching that point... There aren't that many such paths, but there\n> are some.\n>\n> Just throwing out ideas without spending time thinking about it, maybe\n> log to *both* in the case when we pick by it by autodetection?\n\nI think that's a good answer for pg_ctl - not so sure about postgres\nitself, at least once it's up and running. I don't know what led to all\nof this autodetection stuff, but is there the possibility of blocking on\nwhatever stderr is set to as a service?\n\n\nPerhaps we could make the service detection more reliable by checking\nwhether stderr is actually something useful?\n\nThere does seem to be isatty(), so we could improve the case of\npg_ctl/postgres run interactively without breaking a sweat. And there is\nfstat() too, so if stderr in a service is something distinguishable...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Mar 2021 12:30:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 9:30 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-04 21:08:30 +0100, Magnus Hagander wrote:\n> > The problem with doing it at register time is that everybody who\n> > builds an installer for PostgreSQL will then have to do it in their\n> > own registration (I'm pretty sure most of them don't use pg_ctl\n> > register).\n>\n> Well, hm, maybe they should change that?\n>\n>\n> > The same thing in pgwin32_doRunAsService() might help with that.\n>\n> What do you mean by this?\n\nI mean controlling this flag by entry into pgwin32_doRunAsService().\nSo that when we start *postgres* we pass a parameter along saying that\nit's a service and should use eventlog for the early exit.\npgwin32_doRunAsService() will (of course) only be called when started\nwith runservice. That would, I think, sort out the problem for the\npostgres processes, and leave us with just pg_ctl to figure out.\n\n\n> > But then we'd have to figure out what to do if pg_ctl fails prior to\n> > reaching that point... There aren't that many such paths, but there\n> > are some.\n> >\n> > Just throwing out ideas without spending time thinking about it, maybe\n> > log to *both* in the case when we pick by it by autodetection?\n>\n> I think that's a good answer for pg_ctl - not so sure about postgres\n> itself, at least once it's up and running. I don't know what lead to all\n> of this autodetection stuff, but is there the possibility of blocking on\n> whatever stderr is set too as a service?\n>\n> Perhaps we could make the service detection more reliable by checking\n> whether stderr is actually something useful?\n\nSo IIRC, and mind that this is like 15 years ago, there is something\nthat looks like stderr, but the contents are thrown away. 
It probably\nexists specifically so that programs won't crash when run as a\nservice...\n\n\n> There does seem to be isatty(), so we could improve the case of\n> pg_ctl/postgres run interactively without breaking a sweat. And there is\n> fstat() too, so if stderr in a service is something distinguishable...\n\nWe seem to have used that at some point, but commit\na967613911f7ef7b6387b9e8718f0ab8f0c4d9c8 got rid of it... But maybe\napply it in a combination.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 4 Mar 2021 21:36:23 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-04 21:36:23 +0100, Magnus Hagander wrote:\n> > I think that's a good answer for pg_ctl - not so sure about postgres\n> > itself, at least once it's up and running. I don't know what lead to all\n> > of this autodetection stuff, but is there the possibility of blocking on\n> > whatever stderr is set too as a service?\n> >\n> > Perhaps we could make the service detection more reliable by checking\n> > whether stderr is actually something useful?\n> \n> So IIRC, and mind that this is like 15 years ago, there is something\n> that looks like stderr, but the contents are thrown away. It probably\n> exists specifically so that programs won't crash when run as a\n> service...\n\nYea, that'd make sense.\n\nI wish we had tests for the service stuff, but that's from long before\nthere were tap tests...\n\n\n> > There does seem to be isatty(), so we could improve the case of\n> > pg_ctl/postgres run interactively without breaking a sweat. And there is\n> > fstat() too, so if stderr in a service is something distinguishable...\n> \n> We seem to have used that at some point, but commit\n> a967613911f7ef7b6387b9e8718f0ab8f0c4d9c8 got rid of it...\n\nHm. The bug #13592 referenced in that commit appears to be about\nsomething else. Looks to be #13594\nhttps://postgr.es/m/20150828104658.2089.83265%40wrigleys.postgresql.org\n\n\n> But maybe apply it in a combination.\n\nYea, that's what I was thinking.\n\n\nGah, I don't really want to know anything about windows, I just want to\nhack on aio with proper working CI.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Mar 2021 12:48:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "On 2021-Mar-04, Andres Freund wrote:\n\n> > > There does seem to be isatty(), so we could improve the case of\n> > > pg_ctl/postgres run interactively without breaking a sweat. And there is\n> > > fstat() too, so if stderr in a service is something distinguishable...\n> > \n> > We seem to have used that at some point, but commit\n> > a967613911f7ef7b6387b9e8718f0ab8f0c4d9c8 got rid of it...\n> \n> Hm. The bug #13592 referenced in that commit appears to be about\n> something else. Looks to be #13594\n> https://postgr.es/m/20150828104658.2089.83265%40wrigleys.postgresql.org\n\nYeah, that's a typo in the commit message.\n\n> > But maybe apply it in a combination.\n> \n> Yea, that's what I was thinking.\n\nThat makes sense. At the time we were not thinking (*I* was not\nthinking, for sure) that you could have a not-a-service process that\nruns inside a service. The fixed bug was in the same direction that you\nwant to fix now, just differently: the bare \"isatty\" test was\nconsidering too many things as under a service, and replaced it with the\npgwin32_is_service which considers a different set of too many things as\nunder a service. I agree with the idea that *both* tests have to pass\nin order to consider it as under a service.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n\n\n",
"msg_date": "Thu, 4 Mar 2021 18:04:03 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-04 12:48:59 -0800, Andres Freund wrote:\n> On 2021-03-04 21:36:23 +0100, Magnus Hagander wrote:\n> > > I think that's a good answer for pg_ctl - not so sure about postgres\n> > > itself, at least once it's up and running. I don't know what lead to all\n> > > of this autodetection stuff, but is there the possibility of blocking on\n> > > whatever stderr is set too as a service?\n> > >\n> > > Perhaps we could make the service detection more reliable by checking\n> > > whether stderr is actually something useful?\n> > \n> > So IIRC, and mind that this is like 15 years ago, there is something\n> > that looks like stderr, but the contents are thrown away. It probably\n> > exists specifically so that programs won't crash when run as a\n> > service...\n> \n> Yea, that'd make sense.\n> \n> I wish we had tests for the service stuff, but that's from long before\n> there were tap tests...\n\nAfter fighting with a windows VM for a bit (ugh), it turns out that yes,\nthere is stderr, but that fileno(stderr) returns -2, and\nGetStdHandle(STD_ERROR_HANDLE) returns NULL (not INVALID_HANDLE_VALUE).\n\nThe complexity however is that while that's true for pg_ctl within\npgwin32_ServiceMain:\nchecking what stderr=00007FF8687DFCB0 is (handle: 0, fileno=-2)\nbut not for postmaster or backends\nWARNING: 01000: checking what stderr=00007FF880F5FCB0 is (handle: 92, fileno=2)\n\nwhich makes sense in a way, because we don't tell CreateProcessAsUser()\nthat it should pass stdin/out/err down (which then seems to magically\nget access to the \"topmost\" console applications output - damn, this\nstuff is weird).\n\nYou'd earlier mentioned that other distributions may not use pg_ctl\nregister - but I assume they use pg_ctl runservice? Or do they actually\nre-implement all those magic incantations in pg_ctl.c?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Mar 2021 10:57:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-05 10:57:52 -0800, Andres Freund wrote:\n> On 2021-03-04 12:48:59 -0800, Andres Freund wrote:\n> > On 2021-03-04 21:36:23 +0100, Magnus Hagander wrote:\n> > > > I think that's a good answer for pg_ctl - not so sure about postgres\n> > > > itself, at least once it's up and running. I don't know what lead to all\n> > > > of this autodetection stuff, but is there the possibility of blocking on\n> > > > whatever stderr is set too as a service?\n> > > >\n> > > > Perhaps we could make the service detection more reliable by checking\n> > > > whether stderr is actually something useful?\n> > > \n> > > So IIRC, and mind that this is like 15 years ago, there is something\n> > > that looks like stderr, but the contents are thrown away. It probably\n> > > exists specifically so that programs won't crash when run as a\n> > > service...\n> > \n> > Yea, that'd make sense.\n> > \n> > I wish we had tests for the service stuff, but that's from long before\n> > there were tap tests...\n> \n> After fighting with a windows VM for a bit (ugh), it turns out that yes,\n> there is stderr, but that fileno(stderr) returns -2, and\n> GetStdHandle(STD_ERROR_HANDLE) returns NULL (not INVALID_HANDLE_VALUE).\n> \n> The complexity however is that while that's true for pg_ctl within\n> pgwin32_ServiceMain:\n> checking what stderr=00007FF8687DFCB0 is (handle: 0, fileno=-2)\n> but not for postmaster or backends\n> WARNING: 01000: checking what stderr=00007FF880F5FCB0 is (handle: 92, fileno=2)\n> \n> which makes sense in a way, because we don't tell CreateProcessAsUser()\n> that it should pass stdin/out/err down (which then seems to magically\n> get access to the \"topmost\" console applications output - damn, this\n> stuff is weird).\n\nThat part is not too hard to address - it seems we only need to do that\nin pg_ctl pgwin32_doRunAsService(). 
It seems that the\nstdin/stderr/stdout being set to invalid will then be passed down to\npostmaster children.\n\nhttps://docs.microsoft.com/en-us/windows/console/getstdhandle\n\"If an application does not have associated standard handles, such as a\nservice running on an interactive desktop, and has not redirected them,\nthe return value is NULL.\"\n\nThere does seem to be some difference between what services get as std*\n- GetStdHandle() returns NULL, and what explicitly passing down invalid\nhandles to postmaster does - GetStdHandle() returns\nINVALID_HANDLE_VALUE. But passing down NULL rather than\nINVALID_HANDLE_VALUE to postmaster seems to lead to postmaster\nre-opening console buffers.\n\nPatch attached.\n\nI'd like to commit something to address this issue to master soon - it\nallows us to run a lot more tests in cirrus-ci... But probably not\nbackpatch it [yet] - there've not really been field complaints, and who\nknows if there end up being some unintentional side-effects...\n\n\n> You'd earlier mentioned that other distributions may not use pg_ctl\n> register - but I assume they use pg_ctl runservice? Or do they actually\n> re-implement all those magic incantations in pg_ctl.c?\n\nIt seems that we, in addition to the above patch, should add a guc that\npg_ctl runservice passes down to postgres. And then rip out the call to\npgwin32_is_service() from the backend. That doesn't require other\ndistributions to use pg_ctl register, just pg_ctl runservice - which I\nthink they need to do anyway, unless they want to duplicate all the\nlogic around pgwin32_SetServiceStatus()?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 5 Mar 2021 12:55:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nMagnus, Michael, Anyone - I'd appreciate a look.\n\nOn 2021-03-05 12:55:37 -0800, Andres Freund wrote:\n> > After fighting with a windows VM for a bit (ugh), it turns out that yes,\n> > there is stderr, but that fileno(stderr) returns -2, and\n> > GetStdHandle(STD_ERROR_HANDLE) returns NULL (not INVALID_HANDLE_VALUE).\n> > \n> > The complexity however is that while that's true for pg_ctl within\n> > pgwin32_ServiceMain:\n> > checking what stderr=00007FF8687DFCB0 is (handle: 0, fileno=-2)\n> > but not for postmaster or backends\n> > WARNING: 01000: checking what stderr=00007FF880F5FCB0 is (handle: 92, fileno=2)\n> > \n> > which makes sense in a way, because we don't tell CreateProcessAsUser()\n> > that it should pass stdin/out/err down (which then seems to magically\n> > get access to the \"topmost\" console applications output - damn, this\n> > stuff is weird).\n> \n> That part is not too hard to address - it seems we only need to do that\n> in pg_ctl pgwin32_doRunAsService(). It seems that the\n> stdin/stderr/stdout being set to invalid will then be passed down to\n> postmaster children.\n> \n> https://docs.microsoft.com/en-us/windows/console/getstdhandle\n> \"If an application does not have associated standard handles, such as a\n> service running on an interactive desktop, and has not redirected them,\n> the return value is NULL.\"\n> \n> There does seem to be some difference between what services get as std*\n> - GetStdHandle() returns NULL, and what explicitly passing down invalid\n> handles to postmaster does - GetStdHandle() returns\n> INVALID_HANDLE_VALUE. But passing down NULL rather than\n> INVALID_HANDLE_VALUE to postmaster seems to lead to postmaster\n> re-opening console buffers.\n> \n> Patch attached.\n\n> I'd like to commit something to address this issue to master soon - it\n> allows us to run a lot more tests in cirrus-ci... 
But probably not\n> backpatch it [yet] - there've not really been field complaints, and who\n> knows if there end up being some unintentional side-effects...\n\nBecause it'd allow us to run more tests as part of cfbot and other CI\nefforts, I'd like to push this forward. So I'm planning to commit this\nto master soon-ish, unless somebody wants to take this over? I'm really\nnot a windows person...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 13 Aug 2021 06:02:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Em sex., 13 de ago. de 2021 às 10:03, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> Magnus, Michael, Anyone - I'd appreciate a look.\n>\n> On 2021-03-05 12:55:37 -0800, Andres Freund wrote:\n> > > After fighting with a windows VM for a bit (ugh), it turns out that\n> yes,\n> > > there is stderr, but that fileno(stderr) returns -2, and\n> > > GetStdHandle(STD_ERROR_HANDLE) returns NULL (not INVALID_HANDLE_VALUE).\n> > >\n> > > The complexity however is that while that's true for pg_ctl within\n> > > pgwin32_ServiceMain:\n> > > checking what stderr=00007FF8687DFCB0 is (handle: 0, fileno=-2)\n> > > but not for postmaster or backends\n> > > WARNING: 01000: checking what stderr=00007FF880F5FCB0 is (handle: 92,\n> fileno=2)\n> > >\n> > > which makes sense in a way, because we don't tell CreateProcessAsUser()\n> > > that it should pass stdin/out/err down (which then seems to magically\n> > > get access to the \"topmost\" console applications output - damn, this\n> > > stuff is weird).\n> >\n> > That part is not too hard to address - it seems we only need to do that\n> > in pg_ctl pgwin32_doRunAsService(). It seems that the\n> > stdin/stderr/stdout being set to invalid will then be passed down to\n> > postmaster children.\n> >\n> > https://docs.microsoft.com/en-us/windows/console/getstdhandle\n> > \"If an application does not have associated standard handles, such as a\n> > service running on an interactive desktop, and has not redirected them,\n> > the return value is NULL.\"\n> >\n> > There does seem to be some difference between what services get as std*\n> > - GetStdHandle() returns NULL, and what explicitly passing down invalid\n> > handles to postmaster does - GetStdHandle() returns\n> > INVALID_HANDLE_VALUE. 
But passing down NULL rather than\n> > INVALID_HANDLE_VALUE to postmaster seems to lead to postmaster\n> > re-opening console buffers.\n> >\n> > Patch attached.\n>\n> > I'd like to commit something to address this issue to master soon - it\n> > allows us to run a lot more tests in cirrus-ci... But probably not\n> > backpatch it [yet] - there've not really been field complaints, and who\n> > knows if there end up being some unintentional side-effects...\n>\n> Because it'd allow us to run more tests as part of cfbot and other CI\n> efforts, I'd like to push this forward. So I'm planning to commit this\n> to master soon-ish, unless somebody wants to take this over? I'm really\n> not a windows person...\n>\nHi Andres,\n\nI found this function on the web, from OpenSSL, but I haven't tested it.\nI think that there is one more way to test if a service is running\n(SECURITY_INTERACTIVE_RID).\n\nCan you test on a Windows VM?\nIf this works I can elaborate a bit.\n\nAttached.\n\nregards,\nRanier Vilela",
"msg_date": "Sun, 15 Aug 2021 11:25:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 3:03 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> Magnus, Michael, Anyone - I'd appreciate a look.\n>\n> On 2021-03-05 12:55:37 -0800, Andres Freund wrote:\n> > > After fighting with a windows VM for a bit (ugh), it turns out that yes,\n> > > there is stderr, but that fileno(stderr) returns -2, and\n> > > GetStdHandle(STD_ERROR_HANDLE) returns NULL (not INVALID_HANDLE_VALUE).\n> > >\n> > > The complexity however is that while that's true for pg_ctl within\n> > > pgwin32_ServiceMain:\n> > > checking what stderr=00007FF8687DFCB0 is (handle: 0, fileno=-2)\n> > > but not for postmaster or backends\n> > > WARNING: 01000: checking what stderr=00007FF880F5FCB0 is (handle: 92, fileno=2)\n> > >\n> > > which makes sense in a way, because we don't tell CreateProcessAsUser()\n> > > that it should pass stdin/out/err down (which then seems to magically\n> > > get access to the \"topmost\" console applications output - damn, this\n> > > stuff is weird).\n> >\n> > That part is not too hard to address - it seems we only need to do that\n> > in pg_ctl pgwin32_doRunAsService(). It seems that the\n> > stdin/stderr/stdout being set to invalid will then be passed down to\n> > postmaster children.\n> >\n> > https://docs.microsoft.com/en-us/windows/console/getstdhandle\n> > \"If an application does not have associated standard handles, such as a\n> > service running on an interactive desktop, and has not redirected them,\n> > the return value is NULL.\"\n> >\n> > There does seem to be some difference between what services get as std*\n> > - GetStdHandle() returns NULL, and what explicitly passing down invalid\n> > handles to postmaster does - GetStdHandle() returns\n> > INVALID_HANDLE_VALUE. 
But passing down NULL rather than\n> > INVALID_HANDLE_VALUE to postmaster seems to lead to postmaster\n> > re-opening console buffers.\n> >\n> > Patch attached.\n>\n> > I'd like to commit something to address this issue to master soon - it\n> > allows us to run a lot more tests in cirrus-ci... But probably not\n> > backpatch it [yet] - there've not really been field complaints, and who\n> > knows if there end up being some unintentional side-effects...\n>\n> Because it'd allow us to run more tests as part of cfbot and other CI\n> efforts, I'd like to push this forward. So I'm planning to commit this\n> to master soon-ish, unless somebody wants to take this over? I'm really\n> not a windows person...\n\nIt certainly sounds reasonable. It does make me wonder why we didn't\nuse that GetStdHandle in the first place -- mostly in that \"did we try\nthat and it didn't work\", but that was long enough ago that I really\ncan't remember, and I am unable to find any references in my mail\nhistory either. So it may very well just be that we missed it. But\ngiven the number of times we've had issues around this, it makes me\nwonder. It could of course also be something that didn't use to be\nreliable but is now -- the world of Windows has changed a lot since\nthat was written.\n\nIt wouldn't surprise me if it does break some *other* weird\ncornercase, but based on the docs page you linked to it doesn't look\nlike it would break any of the normal/standard usecases. But I'm also\nvery much not a Windows person these days, and most of my knowledge on\nthe API side is quite outdated by now -- so I can only base that on\nreading the same manual page you did...\n\nGaining better testability definitely seems worth it, so I think an\napproach of \"push to master and see what explodes\" is reasonable :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 16 Aug 2021 15:34:51 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-15 11:25:01 -0300, Ranier Vilela wrote:\n> I found this function on the web, from OpenSSL, but I haven't tested it.\n> I think that there is one more way to test if a service is running\n> (SECURITY_INTERACTIVE_RID).\n\nI don't think that really addresses the issue. If a service starts postgres\nsomewhere within, it won't have SECURITY_INTERACTIVE_RID, but postgres isn't\nquite running as a service nevertheless. I think that's the situation in the\nCI case that triggered me to look into this.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Tue, 7 Sep 2021 11:49:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-16 15:34:51 +0200, Magnus Hagander wrote:\n> It wouldn't surprise me if it does break some *other* weird\n> cornercase, but based on the docs page you linked to it doesn't look\n> like it would break any of the normal/standard usecases.\n\nYea, me neither...\n\nI do suspect that it'd have been better to have a --windows-service flag to\npostgres. But we can't easily change the past...\n\n\n> Gaining better testability definitely seems worth it, so I think an\n> approach of \"push to master and see what explodes\" is reasonable :)\n\nDone.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Sep 2021 12:03:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: CI/windows docker vs \"am a service\" autodetection on windows"
}
]
[
{
"msg_contents": "Hi,\n\nRight now it's harder than necessary to capture the log output from tap\ntests because the regression test files don't share a common\nfile ending with other types of logs. They're\n\t# Open the test log file, whose name depends on the test name.\n\t$test_logfile = basename($0);\n\t$test_logfile =~ s/\\.[^.]+$//;\n\t$test_logfile = \"$log_path/regress_log_$test_logfile\";\n\nThis was essentially introduced in 1ea06203b82: \"Improve logging of TAP tests.\"\n\nWould anybody object to replacing _logfile with .log? I realize that'd\npotentially cause some short-term pain on the buildfarm, but I\nthink it'd improve things longer term.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 4 Mar 2021 11:24:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Add .log file ending to tap test log files"
},
{
"msg_contents": "On 2021-Mar-04, Andres Freund wrote:\n\n> Right now it's harder than necessary to capture the log output from tap\n> tests because the the regression tests files don't end with a common\n> file ending with other types of logs. They're\n> \t# Open the test log file, whose name depends on the test name.\n> \t$test_logfile = basename($0);\n> \t$test_logfile =~ s/\\.[^.]+$//;\n> \t$test_logfile = \"$log_path/regress_log_$test_logfile\";\n>\n> This was essentially introduced in 1ea06203b82: \"Improve logging of TAP tests.\"\n\nYou're misreading this code (I did too): there's no \"_logfile\" suffix --\n$test_logfile is the name of a single variable, it's not $test followed\nby _logfile. So the name is \"regress_log_001_FOOBAR\" with the basename\nat the end. But I agree:\n\n> Would anybody object to replacing _logfile with .log? I realize that'd\n> potentially would cause some short-term pain on the buildfarm, but I\n> think it'd improve things longer term.\n\nLet's add a .log suffix. And also, I would propose a more extensive\nrenaming, if we're going to do it -- I dislike that the server log files\nstart with \"00x\" and the regress ones have the 00x bit in the middle of\nthe name. So how about we make this\n $log_path/$test_logfile.regress.log.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Los dioses no protegen a los insensatos. Éstos reciben protección de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n\n\n",
"msg_date": "Thu, 4 Mar 2021 18:28:57 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add .log file ending to tap test log files"
}
]
[
{
"msg_contents": "Hi,\n\nI'm starting a new thread for this patch that originated as a\nside-discussion in [1], to give it its own CF entry in the next cycle.\nThis is a WIP with an open question to research: what could actually\nbreak if we did this?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLdemy2gBm80kz20GTe6hNVwoErE8KwcJk6-U56oStjtg%40mail.gmail.com",
"msg_date": "Fri, 5 Mar 2021 11:02:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On 05/03/2021 00:02, Thomas Munro wrote:\n> Hi,\n> \n> I'm starting a new thread for this patch that originated as a\n> side-discussion in [1], to give it its own CF entry in the next cycle.\n> This is a WIP with an open question to research: what could actually\n> break if we did this?\n\nI don't see a problem.\n\nIt would indeed be nice to have some other mechanism to prevent the \nissue with wal_level=minimal, the tombstone files feel hacky and \ncomplicated. Maybe a new shared memory hash table to track the \nrelfilenodes of dropped tables.\n\n- Heikki\n\n\n",
"msg_date": "Thu, 10 Jun 2021 13:47:49 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 6:47 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> It would indeed be nice to have some other mechanism to prevent the\n> issue with wal_level=minimal, the tombstone files feel hacky and\n> complicated. Maybe a new shared memory hash table to track the\n> relfilenodes of dropped tables.\n\nJust to summarize the issue here as I understand it, if a relfilenode\nis used for two unrelated relations during the same checkpoint cycle\nwith wal_level=minimal, and if the WAL-skipping optimization is\napplied to the second of those but not to the first, then crash\nrecovery will lose our only copy of the new relation's data, because\nwe'll replay the removal of the old relfilenode but will not have\nlogged the new data. Furthermore, we've wondered about writing an\nend-of-recovery record in all cases rather than sometimes writing an\nend-of-recovery record and sometimes a checkpoint record. That would\nallow another version of this same problem, since a single checkpoint\ncycle could now span multiple server lifetimes. At present, we dodge\nall this by keeping the first segment of the main fork around as a\nzero-length file for the rest of the checkpoint cycle, which I think\nprevents the problem in both cases. Now, apparently that caused some\nproblem with the AIO patch set so Thomas is curious about getting rid\nof it, and Heikki concurs that it's a hack.\n\nI guess my concern about this patch is that it just seems to be\nreducing the number of cases where that hack is used without actually\ngetting rid of it. Rarely-taken code paths are more likely to have\nundiscovered bugs, and that seems particularly likely in this case,\nbecause this is a low-probability scenario to begin with. A lot of\nclusters probably never have an OID counter wraparound ever, and even\nin those that do, getting an OID collision with just the right timing\nfollowed by a crash before a checkpoint can intervene has got to be\nsuper-unlikely. 
Even as things are today, if this mechanism has subtle\nbugs, it seems entirely possible that they could have escaped notice\nup until now.\n\nSo I spent some time thinking about the question of getting rid of\ntombstone files altogether. I don't think that Heikki's idea of a\nshared memory hash table to track dropped relfilenodes can work. The\nhash table will have to be of some fixed size N, and whatever the\nvalue of N, the approach will break down if N+1 relfilenodes are\ndropped in the same checkpoint cycle.\n\nThe two most principled solutions to this problem that I can see are\n(1) remove wal_level=minimal and (2) use 64-bit relfilenodes. I have\nbeen reluctant to support #1 because it's hard for me to believe that\nthere aren't cases where being able to skip a whole lot of WAL-logging\ndoesn't work out to a nice performance win, but I realize opinions on\nthat topic vary. And I'm pretty sure that Andres, at least, will hate\n#2 because he's unhappy with the width of buffer tags already. So I\ndon't really have a good idea. I agree this tombstone system is a bit\nof a wart, but I'm not sure that this patch really makes anything any\nbetter, and I'm not really seeing another idea that seems better\neither.\n\nMaybe I am missing something...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 2 Aug 2021 16:03:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-02 16:03:31 -0400, Robert Haas wrote:\n> The two most principled solutions to this problem that I can see are\n> (1) remove wal_level=minimal and\n\nI'm personally not opposed to this. It's not practically relevant and makes a\nlot of stuff more complicated. We imo should rather focus on optimizing the\nthings wal_level=minimal accelerates a lot than adding complications for\nwal_level=minimal. Such optimizations would have practical relevance, and\nthere's plenty low hanging fruits.\n\n\n> (2) use 64-bit relfilenodes. I have\n> been reluctant to support #1 because it's hard for me to believe that\n> there aren't cases where being able to skip a whole lot of WAL-logging\n> doesn't work out to a nice performance win, but I realize opinions on\n> that topic vary. And I'm pretty sure that Andres, at least, will hate\n> #2 because he's unhappy with the width of buffer tags already.\n\nYep :/\n\nI guess there's a somewhat hacky way to get somewhere without actually\nincreasing the size. We could take 3 bytes from the fork number and use that\nto get to a 7 byte relfilenode portion. 7 bytes are probably enough for\neveryone.\n\nIt's not like we can use those bytes in a useful way, due to alignment\nrequirements. Declaring that the high 7 bytes are for the relNode portion and\nthe low byte for the fork would still allow efficient comparisons and doesn't\nseem too ugly.\n\n\n> So I don't really have a good idea. I agree this tombstone system is a\n> bit of a wart, but I'm not sure that this patch really makes anything\n> any better, and I'm not really seeing another idea that seems better\n> either.\n\n> Maybe I am missing something...\n\nWhat I proposed in the past was to have a new shared table that tracks\nrelfilenodes. I still think that's a decent solution for just the problem at\nhand. 
But it'd also potentially be the way to redesign relation forks and even\nslim down buffer tags:\n\nRight now a buffer tag is:\n- 4 byte tablespace oid\n- 4 byte database oid\n- 4 byte \"relfilenode oid\" (don't think we have a good name for this)\n- 4 byte fork number\n- 4 byte block number\n\nIf we had such a shared table we could put at least tablespace, fork number\ninto that table mapping them to an 8 byte \"new relfilenode\". That'd only make\nthe \"new relfilenode\" unique within a database, but that'd be sufficient for\nour purposes. It'd give us a buffertag consisting of the following:\n- 4 byte database oid\n- 8 byte \"relfilenode\"\n- 4 byte block number\n\nOf course, it'd add some complexity too, because a buffertag alone wouldn't be\nsufficient to read data (as you'd need the tablespace oid from elsewhere). But\nthat's probably ok, I think all relevant places would have that information.\n\n\nIt's probably possible to remove the database oid from the tag as well, but\nit'd make CREATE DATABASE trickier - we'd need to change the filenames of\ntables as we copy, to adjust them to the differing oid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Aug 2021 15:38:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 6:38 PM Andres Freund <andres@anarazel.de> wrote:\n> What I proposed in the past was to have a new shared table that tracks\n> relfilenodes. I still think that's a decent solution for just the problem at\n> hand.\n\nIt's not really clear to me what problem is at hand. The problems that\nthe tombstone system created for the async I/O stuff weren't really\nexplained properly, IMHO. And I don't think the current system is all\nthat ugly. it's not the most beautiful thing in the world but we have\nlots of way worse hacks. And, it's easy to understand, requires very\nlittle code, and has few moving parts that can fail. As hacks go it's\na quality hack, I would say.\n\n> But it'd also potentially be the way to redesign relation forks and even\n> slim down buffer tags:\n>\n> Right now a buffer tag is:\n> - 4 byte tablespace oid\n> - 4 byte database oid\n> - 4 byte \"relfilenode oid\" (don't think we have a good name for this)\n> - 4 byte fork number\n> - 4 byte block number\n>\n> If we had such a shared table we could put at least tablespace, fork number\n> into that table mapping them to an 8 byte \"new relfilenode\". That'd only make\n> the \"new relfilenode\" unique within a database, but that'd be sufficient for\n> our purposes. It'd give use a buffertag consisting out of the following:\n> - 4 byte database oid\n> - 8 byte \"relfilenode\"\n> - 4 byte block number\n\nYep. I think this is a good direction.\n\n> Of course, it'd add some complexity too, because a buffertag alone wouldn't be\n> sufficient to read data (as you'd need the tablespace oid from elsewhere). But\n> that's probably ok, I think all relevant places would have that information.\n\nI think the thing to look at would be the places that call\nrelpathperm() or relpathbackend(). 
I imagine this can be worked out,\nbut it might require some adjustment.\n\n> It's probably possible to remove the database oid from the tag as well, but\n> it'd make CREATE DATABASE tricker - we'd need to change the filenames of\n> tables as we copy, to adjust them to the differing oid.\n\nYeah, I'm not really sure that works out to a win. I tend to think\nthat we should be trying to make databases within the same cluster\nmore rather than less independent of each other. If we switch to using\na radix tree for the buffer mapping table as you have previously\nproposed, then presumably each backend can cache a pointer to the\nsecond level, after the database OID has been resolved. Then you have\nno need to compare database OIDs for every lookup. That might turn out\nto be better for performance than shoving everything into the buffer\ntag anyway, because then backends in different databases would be\naccessing distinct parts of the buffer mapping data structure instead\nof contending with one another.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Aug 2021 11:22:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 11:02 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> This is a WIP with an open question to research: what could actually\n> break if we did this?\n\nI thought this part of bgwriter.c might be a candidate:\n\n if (FirstCallSinceLastCheckpoint())\n {\n /*\n * After any checkpoint, close all smgr files. This is so we\n * won't hang onto smgr references to deleted files indefinitely.\n */\n smgrcloseall();\n }\n\nHmm, on closer inspection, isn't the lack of real interlocking with\ncheckpoints a bit suspect already? What stops bgwriter from writing\nto the previous relfilenode generation's fd, if a relfilenode is\nrecycled while BgBufferSync() is running? Not sinval, and not the\nabove code that only runs between BgBufferSync() invocations.\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:07:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 3:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> It's not really clear to me what problem is at hand. The problems that\n> the tombstone system created for the async I/O stuff weren't really\n> explained properly, IMHO. And I don't think the current system is all\n> that ugly. it's not the most beautiful thing in the world but we have\n> lots of way worse hacks. And, it's easy to understand, requires very\n> little code, and has few moving parts that can fail. As hacks go it's\n> a quality hack, I would say.\n\nIt's not really an AIO problem. It's just that while testing the AIO\nstuff across a lot of operating systems, we had tests failing on\nWindows because the extra worker processes you get if you use\nio_method=worker were holding cached descriptors and causing stuff\nlike DROP TABLESPACE to fail. AFAIK every problem we discovered in\nthat vein is a current live bug in all versions of PostgreSQL for\nWindows (it just takes other backends or the bgwriter to hold an fd at\nthe wrong moment). The solution I'm proposing to that general class\nof problem is https://commitfest.postgresql.org/34/2962/ .\n\nIn the course of thinking about that, it seemed natural to look into\nthe possibility of getting rid of the tombstones, so that at least\nUnix systems don't find themselves having to suffer through a\nCHECKPOINT just to drop a tablespace that happens to contain a\ntombstone.\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:29:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 4:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Hmm, on closer inspection, isn't the lack of real interlocking with\n> checkpoints a bit suspect already? What stops bgwriter from writing\n> to the previous relfilenode generation's fd, if a relfilenode is\n> recycled while BgBufferSync() is running? Not sinval, and not the\n> above code that only runs between BgBufferSync() invocations.\n\nI managed to produce a case where live data is written to an unlinked\nfile and lost, with a couple of tweaks to get the right timing and\nsimulate OID wraparound. See attached. If you run the following\ncommands repeatedly with shared_buffers=256kB and\nbgwriter_lru_multiplier=10, you should see a number lower than 10,000\nfrom the last query in some runs, depending on timing.\n\ncreate extension if not exists chaos;\ncreate extension if not exists pg_prewarm;\n\ndrop table if exists t1, t2;\ncheckpoint;\nvacuum pg_class;\n\nselect clobber_next_oid(200000);\ncreate table t1 as select 42 i from generate_series(1, 10000);\nselect pg_prewarm('t1'); -- fill buffer pool with t1\nupdate t1 set i = i; -- dirty t1 buffers so bgwriter writes some\nselect pg_sleep(2); -- give bgwriter some time\n\ndrop table t1;\ncheckpoint;\nvacuum pg_class;\n\nselect clobber_next_oid(200000);\ncreate table t2 as select 0 i from generate_series(1, 10000);\nselect pg_prewarm('t2'); -- fill buffer pool with t2\nupdate t2 set i = 1 where i = 0; -- dirty t2 buffers so bgwriter writes some\nselect pg_sleep(2); -- give bgwriter some time\n\nselect pg_prewarm('pg_attribute'); -- evict all clean t2 buffers\nselect sum(i) as t2_sum_should_be_10000 from t2; -- have any updates been lost?",
"msg_date": "Thu, 30 Sep 2021 23:32:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 11:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I managed to produce a case where live data is written to an unlinked\n> file and lost\n\nI guess this must have been broken since release 9.2 moved checkpoints\nout of here[1]. The connection between checkpoints, tombstone files\nand file descriptor cache invalidation in auxiliary (non-sinval)\nbackends was not documented as far as I can see (or at least not\nanywhere near the load-bearing parts).\n\nHow could it be fixed, simply and backpatchably? If BgSyncBuffer()\ndid if-FirstCallSinceLastCheckpoint()-then-smgrcloseall() after\nlocking each individual buffer and before flushing, then I think it\nmight logically have the correct interlocking against relfilenode\nwraparound, but that sounds a tad expensive :-( I guess it could be\nmade cheaper by using atomics for the checkpoint counter instead of\nspinlocks. Better ideas?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BU5nMLv2ah-HNHaQ%3D2rxhp_hDJ9jcf-LL2kW3sE4msfnUw9gA%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 16:21:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 4:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Sep 30, 2021 at 11:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I managed to produce a case where live data is written to an unlinked\n> > file and lost\n\nIn conclusion, there *is* something else that would break, so I'm\nwithdrawing this CF entry (#3030) for now. Also, that something else\nis already subtly broken, so I'll try to come up with a fix for that\nseparately.\n\n\n",
"msg_date": "Fri, 29 Oct 2021 16:57:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 6:38 PM Andres Freund <andres@anarazel.de> wrote:\n> I guess there's a somewhat hacky way to get somewhere without actually\n> increasing the size. We could take 3 bytes from the fork number and use that\n> to get to a 7 byte relfilenode portion. 7 bytes are probably enough for\n> everyone.\n>\n> It's not like we can use those bytes in a useful way, due to alignment\n> requirements. Declaring that the high 7 bytes are for the relNode portion and\n> the low byte for the fork would still allow efficient comparisons and doesn't\n> seem too ugly.\n\nI think this idea is worth more consideration. It seems like 2^56\nrelfilenodes ought to be enough for anyone, recalling that you can\nonly ever have 2^64 bytes of WAL. So if we do this, we can eliminate a\nbunch of code that is there to guard against relfilenodes being\nreused. In particular, we can remove the code that leaves a 0-length\ntombstone file around until the next checkpoint to guard against\nrelfilenode reuse. On Windows, we still need\nhttps://commitfest.postgresql.org/36/2962/ because of the problem that\nWindows won't remove files from the directory listing until they are\nboth unlinked and closed. But in general this seems like it would lead\nto cleaner code. For example, GetNewRelFileNode() needn't loop. If it\nallocate the smallest unsigned integer that the cluster (or database)\nhas never previously assigned, the file should definitely not exist on\ndisk, and if it does, an ERROR is appropriate, as the database is\ncorrupted. This does assume that allocations from this new 56-bit\nrelfilenode counter are properly WAL-logged.\n\nI think this would also solve a problem Dilip mentioned to me today:\nsuppose you make ALTER DATABASE SET TABLESPACE WAL-logged, as he's\nbeen trying to do. Then suppose you do \"ALTER DATABASE foo SET\nTABLESPACE used_recently_but_not_any_more\". 
You might get an error\ncomplaining that “some relations of database \\“%s\\” are already in\ntablespace \\“%s\\“” because there could be tombstone files in that\ndatabase. With this combination of changes, you could just use the\nbarrier mechanism from https://commitfest.postgresql.org/36/2962/ to\nwait for those files to disappear, because they've got to be\npreviously-unlinked files that Windows is still returning because\nthey're still open -- or else they could be a sign of a corrupted\ndatabase, but there are no other possibilities.\n\nI think, anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jan 2022 16:37:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 3:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 6:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > I guess there's a somewhat hacky way to get somewhere without actually\n> > increasing the size. We could take 3 bytes from the fork number and use that\n> > to get to a 7 byte relfilenode portion. 7 bytes are probably enough for\n> > everyone.\n> >\n> > It's not like we can use those bytes in a useful way, due to alignment\n> > requirements. Declaring that the high 7 bytes are for the relNode portion and\n> > the low byte for the fork would still allow efficient comparisons and doesn't\n> > seem too ugly.\n>\n> I think this idea is worth more consideration. It seems like 2^56\n> relfilenodes ought to be enough for anyone, recalling that you can\n> only ever have 2^64 bytes of WAL. So if we do this, we can eliminate a\n> bunch of code that is there to guard against relfilenodes being\n> reused. In particular, we can remove the code that leaves a 0-length\n> tombstone file around until the next checkpoint to guard against\n> relfilenode reuse.\n\n+1\n\n>\n> I think this would also solve a problem Dilip mentioned to me today:\n> suppose you make ALTER DATABASE SET TABLESPACE WAL-logged, as he's\n> been trying to do. Then suppose you do \"ALTER DATABASE foo SET\n> TABLESPACE used_recently_but_not_any_more\". You might get an error\n> complaining that “some relations of database \\“%s\\” are already in\n> tablespace \\“%s\\“” because there could be tombstone files in that\n> database. 
With this combination of changes, you could just use the\n> barrier mechanism from https://commitfest.postgresql.org/36/2962/ to\n> wait for those files to disappear, because they've got to be\n> previously-unliked files that Windows is still returning because\n> they're still opening -- or else they could be a sign of a corrupted\n> database, but there are no other possibilities.\n\nYes, this approach will solve the problem for the WAL-logged ALTER\nDATABASE we are facing.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jan 2022 13:12:29 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 1:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n> > I think this idea is worth more consideration. It seems like 2^56\n> > relfilenodes ought to be enough for anyone, recalling that you can\n> > only ever have 2^64 bytes of WAL. So if we do this, we can eliminate a\n> > bunch of code that is there to guard against relfilenodes being\n> > reused. In particular, we can remove the code that leaves a 0-length\n> > tombstone file around until the next checkpoint to guard against\n> > relfilenode reuse.\n>\n> +1\n>\n\nI IMHO a few top level point for implementing this idea would be as listed here,\n\n1) the \"relfilenode\" member inside the RelFileNode will be now 64\nbytes and remove the \"forkNum\" all together from the BufferTag. So\nnow whenever we want to use the relfilenode or fork number we can use\nthe respective mask and fetch their values.\n2) GetNewRelFileNode() will not loop for checking the file existence\nand retry with other relfilenode.\n3) Modify mdunlinkfork() so that we immediately perform the unlink\nrequest, make sure to register_forget_request() before unlink.\n4) In checkpointer, now we don't need any handling for pendingUnlinks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jan 2022 13:43:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 9:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Thu, Jan 6, 2022 at 1:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I think this idea is worth more consideration. It seems like 2^56\n> > > relfilenodes ought to be enough for anyone, recalling that you can\n> > > only ever have 2^64 bytes of WAL. So if we do this, we can eliminate a\n> > > bunch of code that is there to guard against relfilenodes being\n> > > reused. In particular, we can remove the code that leaves a 0-length\n> > > tombstone file around until the next checkpoint to guard against\n> > > relfilenode reuse.\n> >\n> > +1\n\n+1\n\n> I IMHO a few top level point for implementing this idea would be as listed here,\n>\n> 1) the \"relfilenode\" member inside the RelFileNode will be now 64\n> bytes and remove the \"forkNum\" all together from the BufferTag. So\n> now whenever we want to use the relfilenode or fork number we can use\n> the respective mask and fetch their values.\n> 2) GetNewRelFileNode() will not loop for checking the file existence\n> and retry with other relfilenode.\n> 3) Modify mdunlinkfork() so that we immediately perform the unlink\n> request, make sure to register_forget_request() before unlink.\n> 4) In checkpointer, now we don't need any handling for pendingUnlinks.\n\nAnother problem is that relfilenodes are normally allocated with\nGetNewOidWithIndex(), and initially match a relation's OID. We'd need\na new allocator, and they won't be able to match the OID in general\n(while we have 32 bit OIDs at least).\n\n\n",
"msg_date": "Thu, 6 Jan 2022 21:46:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 3:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Another problem is that relfilenodes are normally allocated with\n> GetNewOidWithIndex(), and initially match a relation's OID. We'd need\n> a new allocator, and they won't be able to match the OID in general\n> (while we have 32 bit OIDs at least).\n\nPersonally I'm not sad about that. Values that are the same in simple\ncases but diverge in more complex cases are kind of a trap for the\nunwary. There's no real reason to have them ever match. Yeah, in\ntheory, it makes it easier to tell which file matches which relation,\nbut in practice, you always have to double-check in case the table has\never been rewritten. It doesn't seem worth continuing to contort the\ncode for a property we can't guarantee anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Jan 2022 08:52:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On 2022-01-06 08:52:01 -0500, Robert Haas wrote:\n> On Thu, Jan 6, 2022 at 3:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Another problem is that relfilenodes are normally allocated with\n> > GetNewOidWithIndex(), and initially match a relation's OID. We'd need\n> > a new allocator, and they won't be able to match the OID in general\n> > (while we have 32 bit OIDs at least).\n> \n> Personally I'm not sad about that. Values that are the same in simple\n> cases but diverge in more complex cases are kind of a trap for the\n> unwary.\n\n+1\n\n\n",
"msg_date": "Thu, 6 Jan 2022 12:49:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 7:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jan 6, 2022 at 3:47 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > Another problem is that relfilenodes are normally allocated with\n> > GetNewOidWithIndex(), and initially match a relation's OID. We'd need\n> > a new allocator, and they won't be able to match the OID in general\n> > (while we have 32 bit OIDs at least).\n>\n> Personally I'm not sad about that. Values that are the same in simple\n> cases but diverge in more complex cases are kind of a trap for the\n> unwary. There's no real reason to have them ever match. Yeah, in\n> theory, it makes it easier to tell which file matches which relation,\n> but in practice, you always have to double-check in case the table has\n> ever been rewritten. It doesn't seem worth continuing to contort the\n> code for a property we can't guarantee anyway.\n>\n\nMake sense, I have started working on this idea, I will try to post the\nfirst version by early next week.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Jan 6, 2022 at 7:22 PM Robert Haas <robertmhaas@gmail.com> wrote:On Thu, Jan 6, 2022 at 3:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Another problem is that relfilenodes are normally allocated with\n> GetNewOidWithIndex(), and initially match a relation's OID. We'd need\n> a new allocator, and they won't be able to match the OID in general\n> (while we have 32 bit OIDs at least).\n\nPersonally I'm not sad about that. Values that are the same in simple\ncases but diverge in more complex cases are kind of a trap for the\nunwary. There's no real reason to have them ever match. Yeah, in\ntheory, it makes it easier to tell which file matches which relation,\nbut in practice, you always have to double-check in case the table has\never been rewritten. 
It doesn't seem worth continuing to contort the\ncode for a property we can't guarantee anyway.\nMake sense, I have started working on this idea, I will try to post the first version by early next week.-- Regards,Dilip KumarEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 Jan 2022 10:37:21 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 10:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jan 6, 2022 at 7:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>>\n>> On Thu, Jan 6, 2022 at 3:47 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> > Another problem is that relfilenodes are normally allocated with\n>> > GetNewOidWithIndex(), and initially match a relation's OID. We'd need\n>> > a new allocator, and they won't be able to match the OID in general\n>> > (while we have 32 bit OIDs at least).\n>>\n>> Personally I'm not sad about that. Values that are the same in simple\n>> cases but diverge in more complex cases are kind of a trap for the\n>> unwary. There's no real reason to have them ever match. Yeah, in\n>> theory, it makes it easier to tell which file matches which relation,\n>> but in practice, you always have to double-check in case the table has\n>> ever been rewritten. It doesn't seem worth continuing to contort the\n>> code for a property we can't guarantee anyway.\n>\n>\n> Make sense, I have started working on this idea, I will try to post the first version by early next week.\n\nHere is the first working patch, with that now we don't need to\nmaintain the TombStone file until the next checkpoint. This is still\na WIP patch with this I can see my problem related to ALTER DATABASE\nSET TABLESPACE WAL-logged problem is solved which Robert reported a\ncouple of mails above in the same thread.\n\nGeneral idea of the patch:\n- Change the RelFileNode.relNode to be 64bit wide, out of which 8 bits\nfor fork number and 56 bits for the relNode as shown below. [1]\n- GetNewRelFileNode() will just generate a new unique relfilenode and\ncheck the file existence and if it already exists then throw an error,\nso no loop. 
We also need to add the logic for preserving\nnextRelNode across restarts and also WAL-logging it, but that is similar\nto preserving nextOid.\n- mdunlinkfork() will immediately unlink the relfilenode, so we get rid of\nall the deferred-unlink code.\n- Now we don't need any post-checkpoint unlinking activity.\n\n[1]\n/*\n* RelNodeId:\n*\n* this is a storage type for RelNode. The reasoning behind using this is the same\n* as for BlockId, so refer to the comment atop BlockId.\n*/\ntypedef struct RelNodeId\n{\n uint32 rn_hi;\n uint32 rn_lo;\n} RelNodeId;\ntypedef struct RelFileNode\n{\n Oid spcNode; /* tablespace */\n Oid dbNode; /* database */\n RelNodeId relNode; /* relation */\n} RelFileNode;\n\nTODO:\n\nThere are a couple of TODOs and FIXMEs which I am planning to improve\nby next week. I am also planning to do testing where the relfilenode\nconsumes more than 32 bits; maybe for that we can set\nFirstNormalRelfileNode to a higher value for testing purposes. And,\nimprove comments.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 28 Jan 2022 20:10:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Fri, Jan 28, 2022 at 8:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 10:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n\n>\n> TODO:\n>\n> There are a couple of TODOs and FIXMEs which I am planning to improve\n> by next week. I am also planning to do the testing where relfilenode\n> consumes more than 32 bits, maybe for that we can set the\n> FirstNormalRelfileNode to higher value for the testing purpose. And,\n> Improve comments.\n>\n\nI have fixed most of TODO and FIXMEs but there are still a few which I\ncould not decide, the main one currently we do not have uint8 data\ntype only int8 is there so I have used int8 for storing relfilenode +\nforknumber. Although this is sufficient because I don't think we will\never get more than 128 fork numbers. But my question is should we\nthink for adding uint8 as new data type or infect make RelNode itself\nas new data type like we have Oid.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 31 Jan 2022 10:59:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Jan 31, 2022 at 12:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> the main one currently we do not have uint8 data\n> type only int8 is there so I have used int8 for storing relfilenode +\n> forknumber.\n\nI'm confused. We use int8 in tons of places, so I feel like it must exist.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Jan 2022 09:04:41 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Jan 31, 2022 at 9:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Jan 31, 2022 at 12:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > the main one currently we do not have uint8 data\n> > type only int8 is there so I have used int8 for storing relfilenode +\n> > forknumber.\n>\n> I'm confused. We use int8 in tons of places, so I feel like it must exist.\n\nRather, we use uint8 in tons of places, so I feel like it must exist.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Jan 2022 09:06:33 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Jan 31, 2022 at 7:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 9:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Mon, Jan 31, 2022 at 12:29 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > the main one currently we do not have uint8 data\n> > > type only int8 is there so I have used int8 for storing relfilenode +\n> > > forknumber.\n> >\n> > I'm confused. We use int8 in tons of places, so I feel like it must exist.\n>\n> Rather, we use uint8 in tons of places, so I feel like it must exist.\n\nHmm, at least pg_type doesn't have anything with a name like uint8.\n\npostgres[101702]=# select oid, typname from pg_type where typname like '%int8';\n oid | typname\n------+---------\n 20 | int8\n 1016 | _int8\n(2 rows)\n\npostgres[101702]=# select oid, typname from pg_type where typname like '%uint%';\n oid | typname\n-----+---------\n(0 rows)\n\nI agree that we are using 8 bytes unsigned int multiple places in code\nas uint64. But I don't see it as an exposed data type and not used as\npart of any exposed function. But we will have to use the relfilenode\nin the exposed c function e.g.\nbinary_upgrade_set_next_heap_relfilenode().\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Jan 2022 20:07:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Jan 31, 2022 at 9:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I agree that we are using 8 bytes unsigned int multiple places in code\n> as uint64. But I don't see it as an exposed data type and not used as\n> part of any exposed function. But we will have to use the relfilenode\n> in the exposed c function e.g.\n> binary_upgrade_set_next_heap_relfilenode().\n\nOh, I thought we were talking about the C data type uint8 i.e. an\n8-bit unsigned integer. Which in retrospect was a dumb thought because\nyou said you wanted to store the relfilenode AND the fork number\nthere, which only makes sense if you were talking about SQL data types\nrather than C data types. It is confusing that we have an SQL data\ntype called int8 and a C data type called int8 and they're not the\nsame.\n\nBut if you're talking about SQL data types, why? pg_class only stores\nthe relfilenode and not the fork number currently, and I don't see why\nthat would change. I think that the data type for the relfilenode\ncolumn would change to a 64-bit signed integer (i.e. bigint or int8)\nthat only ever uses the low-order 56 bits, and then when you need to\nstore a relfilenode and a fork number in the same 8-byte quantity\nyou'd do that using either a struct with bit fields or by something\nlike combined = ((uint64) signed_representation_of_relfilenode) |\n(((int) forknumber) << 56);\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Feb 2022 08:27:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Wed, Feb 2, 2022 at 6:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Jan 31, 2022 at 9:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I agree that we are using 8 bytes unsigned int multiple places in code\n> > as uint64. But I don't see it as an exposed data type and not used as\n> > part of any exposed function. But we will have to use the relfilenode\n> > in the exposed c function e.g.\n> > binary_upgrade_set_next_heap_relfilenode().\n>\n> Oh, I thought we were talking about the C data type uint8 i.e. an\n> 8-bit unsigned integer. Which in retrospect was a dumb thought because\n> you said you wanted to store the relfilenode AND the fork number\n> there, which only make sense if you were talking about SQL data types\n> rather than C data types. It is confusing that we have an SQL data\n> type called int8 and a C data type called int8 and they're not the\n> same.\n>\n> But if you're talking about SQL data types, why? pg_class only stores\n> the relfilenode and not the fork number currently, and I don't see why\n> that would change. I think that the data type for the relfilenode\n> column would change to a 64-bit signed integer (i.e. bigint or int8)\n> that only ever uses the low-order 56 bits, and then when you need to\n> store a relfilenode and a fork number in the same 8-byte quantity\n> you'd do that using either a struct with bit fields or by something\n> like combined = ((uint64) signed_representation_of_relfilenode) |\n> (((int) forknumber) << 56);\n\nYeah you're right. I think whenever we are using combined then we can\nuse uint64 C type and in pg_class we can keep it as int64 because that\nis only representing the relfilenode part. I think I was just\nconfused and tried to use the same data type everywhere whether it is\ncombined with fork number or not. Thanks for your input, I will\nchange this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Feb 2022 19:09:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Wed, Feb 2, 2022 at 7:09 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Feb 2, 2022 at 6:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nI have split the patch into multiple patches which are\nindependently committable and easy to review. I have explained the\npurpose and scope of each patch in the respective commit messages.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 7 Feb 2022 10:56:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 12:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have split the patch into multiple patches which are\n> independently committable and easy to review. I have explained the\n> purpose and scope of each patch in the respective commit messages.\n\nHmm. The parts of this I've looked at seem reasonably clean, but I\ndon't think I like the design choice. You're inventing\nRelFileNodeSetFork(), but at present the RelFileNode struct doesn't\ninclude a fork number. I feel like we should leave that alone, and\nonly change the definition of a BufferTag. What about adding accessors\nfor all of the BufferTag fields in 0001, and then in 0002 change it to\nlook something like this:\n\ntypedef struct BufferTag\n{\n Oid dbOid;\n Oid tablespaceOid;\n uint32 fileNode_low;\n uint32 fileNode_hi:24;\n uint32 forkNumber:8;\n BlockNumber blockNumber;\n} BufferTag;\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Feb 2022 11:11:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 9:42 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Feb 7, 2022 at 12:26 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have split the patch into multiple patches which are\n> > independently committable and easy to review. I have explained the\n> > purpose and scope of each patch in the respective commit messages.\n>\n> Hmm. The parts of this I've looked at seem reasonably clean, but I\n> don't think I like the design choice. You're inventing\n> RelFileNodeSetFork(), but at present the RelFileNode struct doesn't\n> include a fork number. I feel like we should leave that alone, and\n> only change the definition of a BufferTag. What about adding accessors\n> for all of the BufferTag fields in 0001, and then in 0002 change it to\n> look something like this:\n>\n> typedef struct BufferTag\n> {\n> Oid dbOid;\n> Oid tablespaceOid;\n> uint32 fileNode_low;\n> uint32 fileNode_hi:24;\n> uint32 forkNumber:8;\n> BlockNumber blockNumber;\n> } BufferTag;\n\nOkay, we can do that. But we cannot leave RelFileNode untouched: inside\nRelFileNode we will also have to change relNode into two 32-bit\nintegers, like below.\n\n> typedef struct RelFileNode\n> {\n> Oid spcNode;\n> Oid dbNode;\n> uint32 relNode_low;\n> uint32 relNode_hi;\n> } RelFileNode;\n\nFor RelFileNode also we need to use 2, 32-bit integers so that we do\nnot add extra alignment padding because there are a few more\nstructures that include RelFileNode e.g. xl_xact_relfilenodes,\nRelFileNodeBackend, and many other structures.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Feb 2022 22:01:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 11:31 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> For RelFileNode also we need to use 2, 32-bit integers so that we do\n> not add extra alignment padding because there are a few more\n> structures that include RelFileNode e.g. xl_xact_relfilenodes,\n> RelFileNodeBackend, and many other structures.\n\nAre you sure that kind of stuff is really important enough to justify\nthe code churn? I don't think RelFileNodeBackend is used widely enough\nor in sufficiently performance-critical places that we really need to\ncare about a few bytes of alignment padding. xl_xact_relfilenodes is\nmore concerning because that goes into the WAL format, but I don't\nknow that we use it often enough for an extra 4 bytes per record to\nreally matter, especially considering that this proposal also adds 4\nbytes *per relfilenode* which has to be a much bigger deal than a few\npadding bytes after 'nrels'. The reason why BufferTag matters a lot is\nbecause (1) we have an array of this struct that can easily contain a\nmillion or eight entries, so the alignment padding adds up a lot more\nand (2) access to that array is one of the most performance-critical\nparts of PostgreSQL.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Feb 2022 11:43:37 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 10:13 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Feb 7, 2022 at 11:31 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > For RelFileNode also we need to use 2, 32-bit integers so that we do\n> > not add extra alignment padding because there are a few more\n> > structures that include RelFileNode e.g. xl_xact_relfilenodes,\n> > RelFileNodeBackend, and many other structures.\n>\n> Are you sure that kind of stuff is really important enough to justify\n> the code churn? I don't think RelFileNodeBackend is used widely enough\n> or in sufficiently performance-critical places that we really need to\n> care about a few bytes of alignment padding. xl_xact_relfilenodes is\n> more concerning because that goes into the WAL format, but I don't\n> know that we use it often enough for an extra 4 bytes per record to\n> really matter, especially considering that this proposal also adds 4\n> bytes *per relfilenode* which has to be a much bigger deal than a few\n> padding bytes after 'nrels'. The reason why BufferTag matters a lot is\n> because (1) we have an array of this struct that can easily contain a\n> million or eight entries, so the alignment padding adds up a lot more\n> and (2) access to that array is one of the most performance-critical\n> parts of PostgreSQL.\n\nI agree with you that adding 4 extra bytes to these structures might\nnot be really critical. I will make the changes based on this idea\nand see how the changes look.\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 9 Feb 2022 15:55:20 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 1:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n 2) GetNewRelFileNode() will not loop for checking the file existence\n> and retry with other relfilenode.\n\nWhile working on this I realized that even if we make the relfilenode\n56 bits we cannot remove the loop inside GetNewRelFileNode() for\nchecking the file existence. Because it is always possible that the\nfile reaches the disk even before the WAL for advancing the next\nrelfilenode and if the system crashes in between that then we might\ngenerate a duplicate relfilenode, right?\n\nI think the second paragraph in XLogPutNextOid() function explains this\nissue and now even after we get the wider relfilenode we will have\nthis issue. Correct?\n\nI am also attaching the latest set of patches for reference; these\npatches fix the review comments given by Robert about moving the\ndbOid, tbsOid and RelNode directly into the buffer tag.\n\nOpen Issues- there are currently 2 open issues in the patch 1) Issue\nas discussed above about removing the loop, so currently in this patch\nthe loop is removed. 2) During upgrade from the previous version we\nneed to advance the nextrelfilenode to the current relfilenode we are\nsetting for the object in order to avoid the conflict.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 21 Feb 2022 13:21:31 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 1:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jan 6, 2022 at 1:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> 2) GetNewRelFileNode() will not loop for checking the file existence\n> > and retry with other relfilenode.\n>\n>\n> Open Issues- there are currently 2 open issues in the patch 1) Issue\n> as discussed above about removing the loop, so currently in this patch\n> the loop is removed. 2) During upgrade from the previous version we\n> need to advance the nextrelfilenode to the current relfilenode we are\n> setting for the object in order to avoid the conflict.\n\n\nIn this version I have fixed both of these issues. Thanks Robert for\nsuggesting the solution for both of these problems in our offlist\ndiscussion. Basically, for the first problem we can flush the xlog\nimmediately because we are actually logging the WAL every time after\nwe allocate 64 relfilenode so this should not have much impact on the\nperformance and I have added the same in the comments. And during\npg_upgrade, whenever we are assigning the relfilenode as part of the\nupgrade we will set that relfilenode + 1 as nextRelFileNode to be\nassigned so that we never generate the conflicting relfilenode.\n\nThe only part I do not like in the patch is that before this patch we\ncould directly access the buftag->rnode. But since now we are not\nhaving directly relfilenode as part of the buffertag and instead of\nthat we are keeping individual fields (i.e. dbOid, tbsOid and relNode)\nin the buffer tag. So if we have to directly get the relfilenode we\nneed to generate it. However those changes are very limited to just 1\nor 2 file so maybe not that bad.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Mar 2022 11:07:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, Feb 21, 2022 at 2:51 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> While working on this I realized that even if we make the relfilenode\n> 56 bits we cannot remove the loop inside GetNewRelFileNode() for\n> checking the file existence. Because it is always possible that the\n> file reaches the disk even before the WAL for advancing the next\n> relfilenode and if the system crashes in between that then we might\n> generate a duplicate relfilenode, right?\n\nI agree.\n\n> I think the second paragraph in XLogPutNextOid() function explains this\n> issue and now even after we get the wider relfilenode we will have\n> this issue. Correct?\n\nI think you are correct.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Mar 2022 14:54:07 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 12:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In this version I have fixed both of these issues. Thanks Robert for\n> suggesting the solution for both of these problems in our offlist\n> discussion. Basically, for the first problem we can flush the xlog\n> immediately because we are actually logging the WAL every time after\n> we allocate 64 relfilenode so this should not have much impact on the\n> performance and I have added the same in the comments. And during\n> pg_upgrade, whenever we are assigning the relfilenode as part of the\n> upgrade we will set that relfilenode + 1 as nextRelFileNode to be\n> assigned so that we never generate the conflicting relfilenode.\n\nAnyone else have an opinion on this?\n\n> The only part I do not like in the patch is that before this patch we\n> could directly access the buftag->rnode. But since now we are not\n> having directly relfilenode as part of the buffertag and instead of\n> that we are keeping individual fields (i.e. dbOid, tbsOid and relNode)\n> in the buffer tag. So if we have to directly get the relfilenode we\n> need to generate it. However those changes are very limited to just 1\n> or 2 file so maybe not that bad.\n\nYou're talking here about just needing to introduce BufTagGetFileNode\nand BufTagSetFileNode, or something else? I don't find those macros to\nbe problematic.\n\nBufTagSetFileNode could maybe assert that the OID isn't too big,\nthough. We should ereport() before we get to this point if we somehow\nrun out of values, but it might be nice to have a check here as a\nbackup.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Mar 2022 14:57:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> > The only part I do not like in the patch is that before this patch we\n> > could directly access the buftag->rnode. But since now we are not\n> > having directly relfilenode as part of the buffertag and instead of\n> > that we are keeping individual fields (i.e. dbOid, tbsOid and relNode)\n> > in the buffer tag. So if we have to directly get the relfilenode we\n> > need to generate it. However those changes are very limited to just 1\n> > or 2 file so maybe not that bad.\n>\n> You're talking here about just needing to introduce BufTagGetFileNode\n> and BufTagSetFileNode, or something else? I don't find those macros to\n> be problematic.\n\nYeah, I was talking about the BufTagGetFileNode macro only. The reason I\ndid not like it is that earlier we could directly use buftag->rnode,\nbut now whenever we want to use rnode we first need to use a\nseparate variable to prepare the rnode using the BufTagGetFileNode\nmacro. But these changes are very localized and in very few places, so\nI don't have much of a problem with them.\n\n>\n> BufTagSetFileNode could maybe assert that the OID isn't too big,\n> though. We should ereport() before we get to this point if we somehow\n> run out of values, but it might be nice to have a check here as a\n> backup.\n\nYeah, we could do that; I will do that in the next version. Thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Mar 2022 10:32:24 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 12:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> In this version I have fixed both of these issues.\n\nHere's a bit of review for these patches:\n\n- The whole relnode vs. relfilenode thing is really confusing. I\nrealize that there is some precedent for calling the number that\npertains to the file on disk \"relnode\" and that value when combined\nwith the database and tablespace OIDs \"relfilenode,\" but it's\ndefinitely not the most obvious thing, especially since\npg_class.relfilenode is a prominent case where we don't even adhere to\nthat convention. I'm kind of tempted to think that we should go the\nother way and rename the RelFileNode struct to something like\nRelFileLocator, and then maybe call the new data type RelFileNumber.\nAnd then we could work toward removing references to \"filenode\" and\n\"relfilenode\" in favor of either (rel)filelocator or (rel)filenumber.\nNow the question (even assuming other people like this general\ndirection) is how far do we go with it? Renaming pg_class.relfilenode\nitself wouldn't be the worst compatibility break we've ever had, but\nit would definitely cause some pain. I'd be inclined to leave the\nuser-visible catalog column alone and just push in this direction for\ninternal stuff.\n\n- What you're doing to pg_buffercache here is completely unacceptable.\nYou can't change the definition of an already-released version of the\nextension. Please study how such issues have been handled in the past.\n\n- It looks to me like you need to give significantly more thought to\nthe proper way of adjusting the relfilenode-related test cases in\nalter_table.out.\n\n- I think BufTagGetFileNode and BufTagSetFileNode should be\nintroduced in 0001 and then just update the definition in 0002 as\nrequired. 
Note that as things stand you end up with both\nBufTagGetFileNode and BufTagGetRelFileNode, which is an artifact of\nthe relnode/filenode/relfilenode confusion I mention above, and just\nto make matters worse, one returns a value while the other produces an\nout parameter. I think the renaming I'm talking about up above might\nhelp somewhat here, but it seems like it might also be good to change\nthe one that uses an out parameter by doing Get -> Copy, just to help\nthe reader get a clue a little more easily.\n\n- GetNewRelNode() needs to error out if we would wrap around, not wrap\naround. Probably similar to what happens if we exhaust 2^64 bytes of\nWAL.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Mar 2022 11:40:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "Hi Dilip,\n\nOn Fri, Mar 4, 2022 at 11:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Feb 21, 2022 at 1:21 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Jan 6, 2022 at 1:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > 2) GetNewRelFileNode() will not loop for checking the file existence\n> > > and retry with other relfilenode.\n> >\n> >\n> > Open Issues- there are currently 2 open issues in the patch 1) Issue\n> > as discussed above about removing the loop, so currently in this patch\n> > the loop is removed. 2) During upgrade from the previous version we\n> > need to advance the nextrelfilenode to the current relfilenode we are\n> > setting for the object in order to avoid the conflict.\n>\n>\n> In this version I have fixed both of these issues. Thanks Robert for\n> suggesting the solution for both of these problems in our offlist\n> discussion. Basically, for the first problem we can flush the xlog\n> immediately because we are actually logging the WAL every time after\n> we allocate 64 relfilenode so this should not have much impact on the\n> performance and I have added the same in the comments. And during\n> pg_upgrade, whenever we are assigning the relfilenode as part of the\n> upgrade we will set that relfilenode + 1 as nextRelFileNode to be\n> assigned so that we never generate the conflicting relfilenode.\n>\n> The only part I do not like in the patch is that before this patch we\n> could directly access the buftag->rnode. But since now we are not\n> having directly relfilenode as part of the buffertag and instead of\n> that we are keeping individual fields (i.e. dbOid, tbsOid and relNode)\n> in the buffer tag. So if we have to directly get the relfilenode we\n> need to generate it. 
However those changes are very limited to just 1\n> or 2 file so maybe not that bad.\n>\n\nThe v5 patch needs a rebase, and here are a few comments for 0002 that\nI found while reading it; hope that helps:\n\n+/* Number of RelFileNode to prefetch (preallocate) per XLOG write */\n+#define VAR_RFN_PREFETCH 8192\n+\n\nShould it be 64, as per comment in XLogPutNextRelFileNode for XLogFlush() ?\n---\n\n+ /*\n+ * Check for the wraparound for the relnode counter.\n+ *\n+ * XXX Actually the relnode is 56 bits wide so we don't need to worry about\n+ * the wraparound case.\n+ */\n+ if (ShmemVariableCache->nextRelNode > MAX_RELFILENODE)\n\nVery rare case, should use unlikely()?\n---\n\n+/*\n+ * Max value of the relfilnode. Relfilenode will be of 56bits wide for more\n+ * details refer comments atop BufferTag.\n+ */\n+#define MAX_RELFILENODE ((((uint64) 1) << 56) - 1)\n\nShould there be 57-bit shifts here? Instead, I think we should use\nINT64CONST(0xFFFFFFFFFFFFFF) to be consistent with PG_*_MAX\ndeclarations, thoughts?\n---\n\n+ /* If we run out of logged for use RelNode then we must log more */\n+ if (ShmemVariableCache->relnodecount == 0)\n\nrelnodecount might never go below zero, but just to be safer we should\ncheck <= 0 instead.\n---\n\nA few typos:\nSimmialr\nSimmilar\nagains\nidealy\n\nRegards,\nAmul\n\n\n",
"msg_date": "Thu, 12 May 2022 16:27:20 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Thu, May 12, 2022 at 4:27 PM Amul Sul <sulamul@gmail.com> wrote:\n>\nHi Amul,\n\nThanks for the review. Actually, based on some comments from Robert we\nhave planned to make some design changes. So I am planning to work on\nthat for the July commitfest. I will try to incorporate all your\nreview comments in the new version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 May 2022 11:05:55 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "I think you can get rid of SYNC_UNLINK_REQUEST, sync_unlinkfiletag,\nmdunlinkfiletag as these are all now unused.\n\nAre there any special hazards here if the plan in [1] goes ahead? If\nthe relfilenode allocation is logged and replayed then it should be\nfine to crash and recover multiple times in a row while creating and\ndropping tables, with wal_level=minimal, I think. It would be bad if\nthe allocator restarted from a value from the checkpoint, though.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmoYmw%3D%3DTOJ6EzYb_vcjyS09NkzrVKSyBKUUyo1zBEaJASA%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 16 May 2022 21:53:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Mon, May 16, 2022 at 3:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> I think you can get rid of SYNC_UNLINK_REQUEST, sync_unlinkfiletag,\n> mdunlinkfiletag as these are all now unused.\n\nCorrect.\n\n> Are there any special hazards here if the plan in [1] goes ahead?\n\nIMHO we should not have any problem. In fact, we need this for [1]\nright? Otherwise, there is a risk of reusing the same relfilenode\nwithin the same checkpoint cycle as discussed in [2].\n\n> [1] https://www.postgresql.org/message-id/flat/CA%2BTgmoYmw%3D%3DTOJ6EzYb_vcjyS09NkzrVKSyBKUUyo1zBEaJASA%40mail.gmail.com\n\n[2] https://www.postgresql.org/message-id/CA+TgmoZZDL_2E_zuahqpJ-WmkuxmUi8+g7=dLEny=18r-+c-iQ@mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 May 2022 16:27:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 10:11 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Mar 4, 2022 at 12:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > In this version I have fixed both of these issues.\n>\n> Here's a bit of review for these patches:\n>\n> - The whole relnode vs. relfilenode thing is really confusing. I\n> realize that there is some precedent for calling the number that\n> pertains to the file on disk \"relnode\" and that value when combined\n> with the database and tablespace OIDs \"relfilenode,\" but it's\n> definitely not the most obvious thing, especially since\n> pg_class.relfilenode is a prominent case where we don't even adhere to\n> that convention. I'm kind of tempted to think that we should go the\n> other way and rename the RelFileNode struct to something like\n> RelFileLocator, and then maybe call the new data type RelFileNumber.\n> And then we could work toward removing references to \"filenode\" and\n> \"relfilenode\" in favor of either (rel)filelocator or (rel)filenumber.\n> Now the question (even assuming other people like this general\n> direction) is how far do we go with it? Renaming pg_class.relfilenode\n> itself wouldn't be the worst compatibility break we've ever had, but\n> it would definitely cause some pain. 
I'd be inclined to leave the\n> user-visible catalog column alone and just push in this direction for\n> internal stuff.\n\nI have worked on this renaming stuff first and once we agree with that\nthen I will rebase the other patches on top of this and will also work\non the other review comments for those patches.\nSo basically in this patch\n- The \"RelFileNode\" structure to \"RelFileLocator\" and also renamed\nother internal member as below\ntypedef struct RelFileLocator\n{\n Oid spcOid; /* tablespace */\n Oid dbOid; /* database */\n Oid relNumber; /* relation */\n} RelFileLocator;\n- All variables and internal functions that use the name\nrelfilenode/rnode and refer to this structure are renamed to\nrelfilelocator/rlocator.\n- relNode/relfilenode, which refer to the actual file name on\ndisk, are renamed to relNumber/relfilenumber.\n- Based on the new terminology, I have renamed the file names as well, e.g.\nrelfilenode.h -> relfilelocator.h\nrelfilenodemap.h -> relfilenumbermap.h\n\nI haven't renamed the exposed catalog column and exposed functions;\nhere is the high-level list:\n- pg_class.relfilenode\n- pg_catalog.pg_relation_filenode()\n- All test case variables referring to pg_class.relfilenode.\n- exposed options for tools which refer to pg_class.relfilenode (e.g.\n-f, --filenode=FILENODE)\n- exposed functions\npg_catalog.binary_upgrade_set_next_heap_relfilenode() and friends\n- the pg_filenode.map file name; maybe we can rename this, but it is used\nby other tools so I left it alone.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Jun 2022 13:25:29 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make relfile tombstone files conditional on WAL level"
},
{
"msg_contents": "[ changing subject line so nobody misses what's under discussion ]\n\nFor a quick summary of the overall idea being discussed here and some\ndiscussion of the problems it solves, see\nhttp://postgr.es/m/CA+TgmobM5FN5x0u3tSpoNvk_TZPFCdbcHxsXCoY1ytn1dXROvg@mail.gmail.com\n\nFor discussion of the proposed renaming of non-user-visible references\nto relfilenode to either RelFileLocator or RelFileNumber as\npreparatory refactoring work for that change, see\nhttp://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com\n\nOn Thu, Jun 23, 2022 at 3:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have worked on this renaming stuff first and once we agree with that\n> then I will rebase the other patches on top of this and will also work\n> on the other review comments for those patches.\n> So basically in this patch\n> - The \"RelFileNode\" structure to \"RelFileLocator\" and also renamed\n> other internal member as below\n> typedef struct RelFileLocator\n> {\n> Oid spcOid; /* tablespace */\n> Oid dbOid; /* database */\n> Oid relNumber; /* relation */\n> } RelFileLocator;\n\nI like those structure member names fine, but I'd like to see this\npreliminary patch also introduce the RelFileNumber typedef as an alias\nfor Oid. Then the main patch can change it to be uint64.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Jun 2022 16:06:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 1:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> [ changing subject line so nobody misses what's under discussion ]\n>\n> For a quick summary of the overall idea being discussed here and some\n> discussion of the problems it solves, see\n> http://postgr.es/m/CA+TgmobM5FN5x0u3tSpoNvk_TZPFCdbcHxsXCoY1ytn1dXROvg@mail.gmail.com\n>\n> For discussion of the proposed renaming of non-user-visible references\n> to relfilenode to either RelFileLocator or RelFileNumber as\n> preparatory refactoring work for that change, see\n> http://postgr.es/m/CA+TgmoamOtXbVAQf9hWFzonUo6bhhjS6toZQd7HZ-pmojtAmag@mail.gmail.com\n>\n> On Thu, Jun 23, 2022 at 3:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have worked on this renaming stuff first and once we agree with that\n> > then I will rebase the other patches on top of this and will also work\n> > on the other review comments for those patches.\n> > So basically in this patch\n> > - The \"RelFileNode\" structure to \"RelFileLocator\" and also renamed\n> > other internal member as below\n> > typedef struct RelFileLocator\n> > {\n> > Oid spcOid; /* tablespace */\n> > Oid dbOid; /* database */\n> > Oid relNumber; /* relation */\n> > } RelFileLocator;\n>\n> I like those structure member names fine, but I'd like to see this\n> preliminary patch also introduce the RelFileNumber typedef as an alias\n> for Oid. Then the main patch can change it to be uint64.\n\nI have changed that. PFA, the updated patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 24 Jun 2022 16:38:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 7:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have changed that. PFA, the updated patch.\n\nApart from one minor nitpick (see below) I don't see a problem with\nthis in isolation. It seems like a pretty clean renaming. So I think\nwe need to move on to the question of how clean the rest of the patch\nseries looks with this as a base.\n\nA preliminary refactoring that was discussed in the past and was\noriginally in 0001 was to move the fields included in BufferTag via\nRelFileNode/Locator directly into the struct. I think maybe it doesn't\nmake sense to include that in 0001 as you have it here, but maybe that\ncould be 0002 with the main patch to follow as 0003, or something like\nthat. I wonder if we can get by with redefining BufferTag like this\nin 0002:\n\ntypedef struct buftag\n{\n Oid spcOid;\n Oid dbOid;\n RelFileNumber fileNumber;\n ForkNumber forkNum;\n} BufferTag;\n\nAnd then like this in 0003:\n\ntypedef struct buftag\n{\n Oid spcOid;\n Oid dbOid;\n RelFileNumber fileNumber:56;\n ForkNumber forkNum:8;\n} BufferTag;\n\n- * from catalog OIDs to filenode numbers. Each database has a map file for\n+ * from catalog OIDs to filenumber. Each database has a map file for\n\nshould be filenumbers\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Jun 2022 10:59:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nOn 2022-06-24 10:59:25 -0400, Robert Haas wrote:\n> A preliminary refactoring that was discussed in the past and was\n> originally in 0001 was to move the fields included in BufferTag via\n> RelFileNode/Locator directly into the struct. I think maybe it doesn't\n> make sense to include that in 0001 as you have it here, but maybe that\n> could be 0002 with the main patch to follow as 0003, or something like\n> that. I wonder if we can get by with redefining RelFileNode like this\n> in 0002:\n> \n> typedef struct buftag\n> {\n> Oid spcOid;\n> Oid dbOid;\n> RelFileNumber fileNumber;\n> ForkNumber forkNum;\n> } BufferTag;\n\nIf we \"inline\" RelFileNumber, it's probably worth reordering the members so that\nthe most distinguishing elements come first, to make it quicker to detect hash\ncollisions. It shows up in profiles today...\n\nI guess it should be blockNum, fileNumber, forkNumber, dbOid, spcOid? I think\nas long as blockNum, fileNumber are first, the rest doesn't matter much.\n\n\n> And then like this in 0003:\n> \n> typedef struct buftag\n> {\n> Oid spcOid;\n> Oid dbOid;\n> RelFileNumber fileNumber:56;\n> ForkNumber forkNum:8;\n> } BufferTag;\n\nProbably worth checking the generated code / the performance effects of using\nbitfields (vs manual maskery). I've seen some awful cases, but here it's at a\nbyte boundary, so it might be ok.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 24 Jun 2022 18:30:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 9:30 PM Andres Freund <andres@anarazel.de> wrote:\n> If we \"inline\" RelFileNumber, it's probably worth reorder the members so that\n> the most distinguishing elements come first, to make it quicker to detect hash\n> collisions. It shows up in profiles today...\n>\n> I guess it should be blockNum, fileNumber, forkNumber, dbOid, spcOid? I think\n> as long as blockNum, fileNumber are first, the rest doesn't matter much.\n\nHmm, I guess we could do that. Possibly as a separate, very small patch.\n\n> > And then like this in 0003:\n> >\n> > typedef struct buftag\n> > {\n> > Oid spcOid;\n> > Oid dbOid;\n> > RelFileNumber fileNumber:56;\n> > ForkNumber forkNum:8;\n> > } BufferTag;\n>\n> Probably worth checking the generated code / the performance effects of using\n> bitfields (vs manual maskery). I've seen some awful cases, but here it's at a\n> byte boundary, so it might be ok.\n\nOne advantage of using bitfields is that it might mean we don't need\nto introduce accessor macros. Now, if that's going to lead to terrible\nperformance I guess we should go ahead and add the accessor macros -\nDilip had those in an earlier patch anyway. But it'd be nice if it\nweren't necessary.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 25 Jun 2022 08:47:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, 25 Jun 2022 at 02:30, Andres Freund <andres@anarazel.de> wrote:\n\n> > And then like this in 0003:\n> >\n> > typedef struct buftag\n> > {\n> > Oid spcOid;\n> > Oid dbOid;\n> > RelFileNumber fileNumber:56;\n> > ForkNumber forkNum:8;\n> > } BufferTag;\n>\n> Probably worth checking the generated code / the performance effects of using\n> bitfields (vs manual maskery). I've seen some awful cases, but here it's at a\n> byte boundary, so it might be ok.\n\nAnother approach would be to condense spcOid and dbOid into a single\n4-byte Oid-like number, since in most cases they are associated with\neach other, and there are not often many of them anyway. So this new\nnumber would indicate both the database and the tablespace. I know\nthat we want to be able to make file changes without doing catalog\nlookups, but since the number of combinations is usually 1, and even\nthen low, it can be cached easily in a smgr array and included in the\ncheckpoint record (or nearby) for ease of use.\n\ntypedef struct buftag\n{\n Oid db_spcOid;\n ForkNumber forkNum; /* uint32 */\n RelFileNumber fileNumber; /* uint64 */\n} BufferTag;\n\nThat way we could just have a simple 64-bit RelFileNumber, without\nrestriction, and probably some spare bytes on the ForkNumber, if we\nneeded them later.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 28 Jun 2022 12:45:16 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 7:45 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Another approach would be to condense spcOid and dbOid into a single\n> 4-byte Oid-like number, since in most cases they are associated with\n> each other, and not often many of them anyway. So this new number\n> would indicate both the database and the tablespace. I know that we\n> want to be able to make file changes without doing catalog lookups,\n> but since the number of combinations is usually 1, but even then, low,\n> it can be cached easily in a smgr array and included in the checkpoint\n> record (or nearby) for ease of use.\n>\n> typedef struct buftag\n> {\n> Oid db_spcOid;\n> ForkNumber uint32;\n> RelFileNumber uint64;\n> } BufferTag;\n\nI've thought about this before too, because it does seem like the DB\nOID and tablespace OID are a poor use of bit space. You might not even\nneed to keep the db_spcOid value in any persistent place, because it\ncould just be an alias for buffer mapping lookups that might change on\nevery restart. That does have the problem that you now need a\nsecondary hash table - in theory of unbounded size - to store mappings\nfrom <dboid,tsoid> to db_spcOid, and that seems complicated and hard\nto get right. It might be possible, though. Alternatively, you could\nimagine a durable mapping that also affects the on-disk structure, but\nI don't quite see how to make that work: for example, pg_basebackup\nwants to produce a tar file for each tablespace directory, and if the\npathnames no longer contain the tablespace OID but only the db_spcOid,\nthen that doesn't work any more.\n\nBut the primary problem we're trying to solve here is that right now\nwe sometimes reuse the same filename for a whole new file, and that\nresults in bugs that only manifest themselves in obscure\ncircumstances, e.g. 
see 4eb2176318d0561846c1f9fb3c68bede799d640f.\nThere are residual failure modes even now related to the \"tombstone\"\nfiles that are created when you drop a relation: remove everything but\nthe first file from the main fork but then keep that file (only)\naround until after the next checkpoint. OID wraparound is another\nannoyance that has influenced the design of quite a bit of code over\nthe years and where we probably still have bugs. If we don't reuse\nrelfilenodes, we can avoid a lot of that pain. Combining the DB OID\nand TS OID fields doesn't solve that problem.\n\n> That way we could just have a simple 64-bit RelFileNumber, without\n> restriction, and probably some spare bytes on the ForkNumber, if we\n> needed them later.\n\nIn my personal opinion, the ForkNumber system is an ugly wart which\nhas nothing to recommend it except that the VM and FSM forks are\nawesome. But if we could have those things without needing forks, I\nthink that would be way better. Forks add code complexity in tons of\nplaces, and it's barely possible to scale it to the 4 forks we have\nalready, let alone any larger number. Furthermore, there are really\nnegative performance effects from creating 3 files per small relation\nrather than 1, and we sure can't afford to have that number get any\nbigger. I'd rather kill the ForkNumber system with fire than expand it\nfurther, but even if we do expand it, we're not close to being able to\ncope with more than 256 forks per relation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Jun 2022 11:25:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 11:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But the primary problem we're trying to solve here is that right now\n> we sometimes reuse the same filename for a whole new file, and that\n> results in bugs that only manifest themselves in obscure\n> circumstances, e.g. see 4eb2176318d0561846c1f9fb3c68bede799d640f.\n> There are residual failure modes even now related to the \"tombstone\"\n> files that are created when you drop a relation: remove everything but\n> the first file from the main fork but then keep that file (only)\n> around until after the next checkpoint. OID wraparound is another\n> annoyance that has influenced the design of quite a bit of code over\n> the years and where we probably still have bugs. If we don't reuse\n> relfilenodes, we can avoid a lot of that pain. Combining the DB OID\n> and TS OID fields doesn't solve that problem.\n\nOh wait, I'm being stupid. You were going to combine those fields but\nthen also widen the relfilenode, so that would solve this problem\nafter all. Oops, I'm dumb.\n\nI still think this is a lot more complicated though, to the point\nwhere I'm not sure we can really make it work at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Jun 2022 13:01:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, 28 Jun 2022 at 13:45, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> but since the number of combinations is usually 1, but even then, low,\n> it can be cached easily in a smgr array and included in the checkpoint\n> record (or nearby) for ease of use.\n\nI was reading the thread to keep up with storage-related prototypes\nand patches, and this specifically doesn't sound quite right to me. I\ndo not know what values you considered to be 'low' or what 'can be\ncached easily', so here's some field data:\n\nI have seen PostgreSQL clusters that utilized the relative isolation\nof separate databases within the same cluster (instance / postmaster)\nto provide higher guarantees of data access isolation while still\nbeing able to share a resource pool, which resulted in several\nclusters containing upwards of 100 databases.\n\nI will be the first to admit that it is quite unlikely to be common\npractise, but this workload increases the number of dbOid+spcOid\ncombinations to 100s (even while using only a single tablespace),\nwhich in my opinion requires some more thought than just handwaving it\ninto an smgr array and/or checkpoint records.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 28 Jun 2022 20:18:46 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jun 24, 2022 at 8:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jun 24, 2022 at 7:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have changed that. PFA, the updated patch.\n>\n> Apart from one minor nitpick (see below) I don't see a problem with\n> this in isolation. It seems like a pretty clean renaming. So I think\n> we need to move onto the question of how clean the rest of the patch\n> series looks with this as a base.\n>\n\nPFA, the remaining set of patches. The indentation might still need\nsome fixes, but let's first see how the overall idea looks; then we\ncan work on it. I have fixed all the open review comments from the\nprevious thread except this comment from Robert.\n\n>- It looks to me like you need to give significantly more thought to\n> the proper way of adjusting the relfilenode-related test cases in\n> alter_table.out.\n\nIt seems to me that this test case is just testing whether the\ntable/child table are rewritten or not after the alter table. And for\nthat it compares the oid with the relfilenode; now that is not\npossible, so I think it's quite reasonable to just compare the current\nrelfilenode with the old relfilenode, and if they are the same, the\ntable is not rewritten. So I am not sure why the original test case\nhad two cases 'own' and 'orig'. 
With respect to this test case, they both\nhave the same meaning; in fact, comparing the old relfilenode with the\ncurrent relfilenode is a better way of testing than comparing the oid\nwith the relfilenode.\n\ndiff --git a/src/test/regress/expected/alter_table.out\nb/src/test/regress/expected/alter_table.out\nindex 5ede56d..80af97e 100644\n--- a/src/test/regress/expected/alter_table.out\n+++ b/src/test/regress/expected/alter_table.out\n@@ -2164,7 +2164,6 @@ select relname,\n c.oid = oldoid as orig_oid,\n case relfilenode\n when 0 then 'none'\n- when c.oid then 'own'\n when oldfilenode then 'orig'\n else 'OTHER'\n end as storage,\n@@ -2175,10 +2174,10 @@ select relname,\n relname | orig_oid | storage | desc\n ------------------------------+----------+---------+---------------\n at_partitioned | t | none |\n- at_partitioned_0 | t | own |\n- at_partitioned_0_id_name_key | t | own | child 0 index\n- at_partitioned_1 | t | own |\n- at_partitioned_1_id_name_key | t | own | child 1 index\n+ at_partitioned_0 | t | orig |\n+ at_partitioned_0_id_name_key | t | orig | child 0 index\n+ at_partitioned_1 | t | orig |\n+ at_partitioned_1_id_name_key | t | orig | child 1 index\n at_partitioned_id_name_key | t | none | parent index\n (6 rows)\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 29 Jun 2022 14:45:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, 28 Jun 2022 at 19:18, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n\n> I will be the first to admit that it is quite unlikely to be common\n> practise, but this workload increases the number of dbOid+spcOid\n> combinations to 100s (even while using only a single tablespace),\n\nWhich should still fit nicely in 32 bits then. Why does that present a\nproblem to this idea?\n\nThe reason to mention this now is that it would give more space than\nthe 56-bit limit being suggested here. I am not opposed to the current\npatch, just finding ways to remove some objections mentioned by\nothers, if those became blockers.\n\n> which in my opinion requires some more thought than just handwaving it\n> into an smgr array and/or checkpoint records.\n\nThe idea is that we would store the mapping as an array, with the\nvalue in the RelFileNode as the offset in the array. The array would\nbe mostly static, so would cache nicely.\n\nFor convenience, I imagine that the mapping could be included in WAL\nin or near the checkpoint record, to ensure that the mapping was\navailable in all backups.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 29 Jun 2022 13:41:17 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, 29 Jun 2022 at 14:41, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 28 Jun 2022 at 19:18, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>\n> > I will be the first to admit that it is quite unlikely to be common\n> > practise, but this workload increases the number of dbOid+spcOid\n> > combinations to 100s (even while using only a single tablespace),\n>\n> Which should still fit nicely in 32bits then. Why does that present a\n> problem to this idea?\n\nIt doesn't, or at least not the bitspace part. I think it is indeed\nquite unlikely anyone will try to build as many tablespaces as the 100\nmillion tables project, which utilized 1000 tablespaces to get around\nfile system limitations [0].\n\nThe potential problem is 'where to store such mapping efficiently'.\nEspecially considering that this mapping might (and likely: will)\nchange across restarts and when database churn (create + drop\ndatabase) happens in e.g. testing workloads.\n\n> The reason to mention this now is that it would give more space than\n> 56bit limit being suggested here. I am not opposed to the current\n> patch, just finding ways to remove some objections mentioned by\n> others, if those became blockers.\n>\n> > which in my opinion requires some more thought than just handwaving it\n> > into an smgr array and/or checkpoint records.\n>\n> The idea is that we would store the mapping as an array, with the\n> value in the RelFileNode as the offset in the array. The array would\n> be mostly static, so would cache nicely.\n\nThat part is not quite clear to me. Any cluster may have anywhere\nbetween 3 and hundreds or thousands of entries in that mapping. 
Do you\nsuggest dynamically growing that (presumably shared, considering the\naddressing is shared) array, or having a runtime parameter limiting\nthe number of those entries (similar to max_connections)?\n\n> For convenience, I imagine that the mapping could be included in WAL\n> in or near the checkpoint record, to ensure that the mapping was\n> available in all backups.\n\nWhy would we need this mapping in backups, considering that it seems\nto be transient state that is lost on restart? Won't we still use full\ndbOid and spcOid in anything we communicate or store on disk (file\nnames, WAL, pg_class rows, etc.), or did I misunderstand your\nproposal?\n\nKind regards,\n\nMatthias van de Meent\n\n\n[0] https://www.pgcon.org/2013/schedule/attachments/283_Billion_Tables_Project-PgCon2013.pdf\n\n\n",
"msg_date": "Thu, 30 Jun 2022 00:12:46 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 12:41 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> The reason to mention this now is that it would give more space than\n> 56bit limit being suggested here.\n\nIsn't 2^56 enough, though? Remembering that cluster time runs out\nwhen we've generated 2^64 bytes of WAL, if you want to run out of 56\nbit relfile numbers before the end of time you'll need to find a way\nto allocate them in less than 2^8 bytes of WAL. That's technically\npossible, since SMgr CREATE records are only 42 bytes long, so you\ncould craft some C code to do nothing but create (and leak)\nrelfilenodes, but real usage is always accompanied by catalogue\ninsertions to connect the new relfilenode to a database object,\nwithout which they are utterly useless. So in real life, it takes\nmany hundreds or typically thousands of bytes, much more than 256.\n\n\n",
"msg_date": "Thu, 30 Jun 2022 14:42:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 5:15 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n>\n> On Sat, 25 Jun 2022 at 02:30, Andres Freund <andres@anarazel.de> wrote:\n>\n> > > And then like this in 0003:\n> > >\n> > > typedef struct buftag\n> > > {\n> > > Oid spcOid;\n> > > Oid dbOid;\n> > > RelFileNumber fileNumber:56;\n> > > ForkNumber forkNum:8;\n> > > } BufferTag;\n> >\n> > Probably worth checking the generated code / the performance effects of using\n> > bitfields (vs manual maskery). I've seen some awful cases, but here it's at a\n> > byte boundary, so it might be ok.\n>\n> Another approach would be to condense spcOid and dbOid into a single\n> 4-byte Oid-like number, since in most cases they are associated with\n> each other, and not often many of them anyway. So this new number\n> would indicate both the database and the tablespace. I know that we\n> want to be able to make file changes without doing catalog lookups,\n> but since the number of combinations is usually 1, but even then, low,\n> it can be cached easily in a smgr array and included in the checkpoint\n> record (or nearby) for ease of use.\n>\n> typedef struct buftag\n> {\n> Oid db_spcOid;\n> ForkNumber uint32;\n> RelFileNumber uint64;\n> } BufferTag;\n>\n> That way we could just have a simple 64-bit RelFileNumber, without\n> restriction, and probably some spare bytes on the ForkNumber, if we\n> needed them later.\n\nYeah, this is possible, but I am not seeing a clear advantage. Of\ncourse we can widen the RelFileNumber to 64 bits instead of 56, but\nwith the added complexity of storing the mapping. I am not sure it is\nreally worth it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jun 2022 12:05:59 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, 30 Jun 2022 at 03:43, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Thu, Jun 30, 2022 at 12:41 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > The reason to mention this now is that it would give more space than\n> > 56bit limit being suggested here.\n>\n> Isn't 2^56 enough, though?\n\nFor me, yes.\n\nTo the above comment, I followed with:\n\n> I am not opposed to the current\n> patch, just finding ways to remove some objections mentioned by\n> others, if those became blockers.\n\nSo it seems we can continue with the patch.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 30 Jun 2022 09:08:23 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 5:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >- It looks to me like you need to give significantly more thought to\n> > the proper way of adjusting the relfilenode-related test cases in\n> > alter_table.out.\n>\n> It seems to me that this test case is just testing whether the\n> table/child table are rewritten or not after the alter table. And for\n> that it is comparing the oid with the relfilenode, now that is not\n> possible so I think it's quite reasonable to just compare the current\n> relfilenode with the old relfilenode and if they are same the table is\n> not rewritten. So I am not sure why the original test case had two\n> cases 'own' and 'orig'. With respect to this test case they both have\n> the same meaning, in fact comparing old relfilenode with current\n> relfilenode is better way of testing than comparing the oid with\n> relfilenode.\n\nI think you're right. However, I don't really like OTHER showing up in\nthe output, because that looks like a string that was chosen to be\nslightly alarming, especially given that it's in ALL CAPS. How about\nif we change 'ORIG' to 'new'?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jun 2022 13:27:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jun 29, 2022 at 5:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> PFA, the remaining set of patches. It might need to fix some\n> indentation but lets first see how is the overall idea then we can\n> work on it\n\nSo just playing around with this patch set, and also looking at the\ncode a bit, here are a few random observations:\n\n- The patch assigns relfilenumbers starting with 1. I don't see any\nspecific problem with that, but I wonder if it would be a good idea to\nstart with a random larger value just in case we ever need some fixed\nvalues for some purpose or other. Maybe we should start with 100000 or\nsomething?\n\n- If I use ALTER TABLE .. SET TABLESPACE to move a table around, then\nthe relfilenode changes each time, but if I use ALTER DATABASE .. SET\nTABLESPACE to move a database around, the relfilenodes don't change.\nSo, what this guarantees is that if the same filename is used twice,\nit will be for the same relation and not some unrelated relation.\nThat's enough to avoid the hazard described in the comments for\nmdunlink(), because that scenario intrinsically involves confusion\ncaused by two relations using the same filename after an OID\nwraparound. And it also means that if we pursue the idea of using an\nend-of-recovery record in all cases, we don't need to start creating\ntombstones during crash recovery. The forced checkpoint at the end of\ncrash recovery means we don't currently need to do that, but if we\nchange that, then the same hazard would exist there as we already have\nin normal running, and this fixes it. However, I don't find it\nentirely obvious that there are no hazards of any kind stemming from\nrepeated use of ALTER DATABASE .. SET TABLESPACE resulting in\nfilenames getting reused. 
On the other hand, avoiding filename reuse\ncompletely would be more work, not closely related to what the rest of\nthe patch set does, probably somewhat controversial in terms of what\nit would have to do, and I'm not sure that we really need it. It does\nseem like it would be quite a bit easier to reason about, though,\nbecause the current guarantee is suspiciously similar to \"we don't do\nX, except when we do.\" This is not really so much a review comment for\nDilip as a request for input from others ... thoughts?\n\n- Again, not a review comment for this patch specifically, but I'm\nwondering if we could use this as infrastructure for a tool to clean\norphaned files out of the data directory. Suppose we create a file for\na new relation and then crash, leaving a potentially large file on\ndisk that will never be removed. Well, if the relfilenumber as it\nexists on disk is not in pg_class and old enough that a transaction\ninserting into pg_class can't still be running, then it must be safe\nto remove that file. Maybe that's safe even today, but it's a little\nhard to reason about it in the face of a possible OID wraparound that\nmight result in reusing the same numbers over again. It feels like\nthis makes it easier to identify which files are old stuff that can\nnever again be touched.\n\n- I might be missing something here, but this isn't actually making\nthe relfilenode 56 bits, is it? The reason to do that is to make the\nBufferTag smaller, so I expected to see that BufferTag either used\nbitfields like RelFileNumber relNumber:56 and ForkNumber forkNum:8, or\nelse that it just declared a single field for both as uint64 and used\naccessor macros or static inlines to separate them out. But it doesn't\nseem to do either of those things, which seems like it can't be right.\nOn a related note, I think it would be better to declare RelFileNumber\nas an unsigned type even though we have no use for the high bit; we\nhave, equally, no use for negative values. 
It's easier to reason about\nbit-shifting operations with unsigned types.\n\n- I also think that the cross-version compatibility stuff in\npg_buffercache isn't quite right. It does values[1] =\nObjectIdGetDatum(fctx->record[i].relfilenumber). But I think what it\nought to do is dependent on the output type. If the output type is\nint8, then it ought to do values[1] = Int64GetDatum((int64)\nfctx->record[i].relfilenumber), and if it's OID, then it ought to do\nvalues[1] = ObjectIdGetDatum((Oid) fctx->record[i].relfilenumber)).\nThe macro that you use needs to be based on the output SQL type, not\nthe C data type.\n\n- I think it might be a good idea to allocate RelFileNumbers in much\nsmaller batches than we do OIDs. 8192 feels wasteful to me. It\nshouldn't practically matter, because if we have 56 bits of bit space\nand so even if we repeatedly allocate 2^13 RelFileNumbers and then\ncrash, we can still crash 2^41 times before we completely run out of\nnumbers, and 2 trillion crashes ought to be enough for anyone. But I\nsee little benefit from being so profligate. You can allocate an OID\nas an identifier for a catalog tuple or a TOAST chunk, but a\nRelFileNumber requires a filesystem operation, so the amount of work\nthat is needed to use up 8192 RelFileNumbers is a lot bigger than the\namount of work required to use up 8192 OIDs. If we dropped this down\nto 128, or 64, or 256, would anything bad happen?\n\n- Do we really want GetNewRelFileNumber() to call access() just for a\ncan't-happen scenario? Can't we catch this problem later when we\nactually go to create the files on disk?\n\n- The patch updates the comments in XLogPrefetcherNextBlock to talk\nabout relfilenumbers being reused rather than relfilenodes being\nreused, which is fine except that we're sorta kinda not doing that any\nmore as noted above. 
I don't really know what these comments ought to\nsay instead but perhaps more than a mechanical update is in order.\nThis applies, even more, to the comments above mdunlink(). Apart from\nupdating the existing comments, I think that the patch needs a good\nexplanation of the new scheme someplace, and what it does and doesn't\nguarantee, which relates to the point above about making sure we know\nexactly what we're guaranteeing and why. I don't know where exactly\nthis text should be positioned yet, or what it should say, but it\nneeds to go someplace. This is a fairly significant change and needs\nto be talked about somewhere.\n\n- I think there's still a bit of a terminology problem here. With the\npatch set, we use RelFileNumber to refer to a single, 56-bit integer\nand RelFileLocator to refer to that integer combined with the DB and\nTS OIDs. But sometimes in the comments we want to talk about the\nlogical sequence of files that is identified by a RelFileLocator, and\nthat's not quite the same as either of those things. For example, in\ntableam.h we currently say \"This callback needs to create a new\nrelation filenode for `rel`\" and how should that be changed in this\nnew naming? We're not creating a new RelFileNumber - those would need\nto be allocated, not created, as all the numbers in the universe exist\nalready. Neither are we creating a new locator; that sounds like it\nmeans assembling it from pieces. What we're doing is creating the\nfirst of what may end up being a series of similarly-named files on\ndisk. I'm not exactly sure how we can refer to that in a way that is\nclear, but it's a problem that arises here and there throughout the\npatch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jun 2022 15:24:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jun 30, 2022 at 10:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 29, 2022 at 5:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >- It looks to me like you need to give significantly more thought to\n> > > the proper way of adjusting the relfilenode-related test cases in\n> > > alter_table.out.\n> >\n> > It seems to me that this test case is just testing whether the\n> > table/child table are rewritten or not after the alter table. And for\n> > that it is comparing the oid with the relfilenode, now that is not\n> > possible so I think it's quite reasonable to just compare the current\n> > relfilenode with the old relfilenode and if they are same the table is\n> > not rewritten. So I am not sure why the original test case had two\n> > cases 'own' and 'orig'. With respect to this test case they both have\n> > the same meaning, in fact comparing old relfilenode with current\n> > relfilenode is better way of testing than comparing the oid with\n> > relfilenode.\n>\n> I think you're right. However, I don't really like OTHER showing up in\n> the output, because that looks like a string that was chosen to be\n> slightly alarming, especially given that it's in ALL CAPS. How about\n> if we change 'ORIG' to 'new'?\n\nI think you meant, rename 'OTHER' to 'new', yeah that makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Jul 2022 11:41:37 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 12:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jun 29, 2022 at 5:15 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > PFA, the remaining set of patches. It might need to fix some\n> > indentation but lets first see how is the overall idea then we can\n> > work on it\n>\n> So just playing around with this patch set, and also looking at the\n> code a bit, here are a few random observations:\n>\n> - The patch assigns relfilenumbers starting with 1. I don't see any\n> specific problem with that, but I wonder if it would be a good idea to\n> start with a random larger value just in case we ever need some fixed\n> values for some purpose or other. Maybe we should start with 100000 or\n> something?\n\nYeah we can do that, I have changed to 100000.\n\n> - If I use ALTER TABLE .. SET TABLESPACE to move a table around, then\n> the relfilenode changes each time, but if I use ALTER DATABASE .. SET\n> TABLESPACE to move a database around, the relfilenodes don't change.\n> So, what this guarantees is that if the same filename is used twice,\n> it will be for the same relation and not some unrelated relation.\n> That's enough to avoid the hazard described in the comments for\n> mdunlink(), because that scenario intrinsically involves confusion\n> caused by two relations using the same filename after an OID\n> wraparound. And it also means that if we pursue the idea of using an\n> end-of-recovery record in all cases, we don't need to start creating\n> tombstones during crash recovery. The forced checkpoint at the end of\n> crash recovery means we don't currently need to do that, but if we\n> change that, then the same hazard would exist there as we already have\n> in normal running, and this fixes it. However, I don't find it\n> entirely obvious that there are no hazards of any kind stemming from\n> repeated use of ALTER DATABASE .. SET TABLESPACE resulting in\n> filenames getting reused. 
On the other hand avoiding filename reuse\n> completely would be more work, not closely related to what the rest of\n> the patch set does, probably somewhat controversial in terms of what\n> it would have to do, and I'm not sure that we really need it. It does\n> seem like it would be quite a bit easier to reason about, though,\n> because the current guarantee is suspiciously similar to \"we don't do\n> X, except when we do.\" This is not really so much a review comment for\n> Dilip as a request for input from others ... thoughts?\n\nYeah that can be done, but maybe as a separate patch. One option is\nthat when we will support the WAL method for the ALTER TABLE .. SET\nTABLESPACE like we did for CREATE DATABASE, as part of that we will\ngenerate the new relfilenumber.\n\n> - Again, not a review comment for this patch specifically, but I'm\n> wondering if we could use this as infrastructure for a tool to clean\n> orphaned files out of the data directory. Suppose we create a file for\n> a new relation and then crash, leaving a potentially large file on\n> disk that will never be removed. Well, if the relfilenumber as it\n> exists on disk is not in pg_class and old enough that a transaction\n> inserting into pg_class can't still be running, then it must be safe\n> to remove that file. Maybe that's safe even today, but it's a little\n> hard to reason about it in the face of a possible OID wraparound that\n> might result in reusing the same numbers over again. It feels like\n> this makes easier to identify which files are old stuff that can never\n> again be touched.\n\nCorrect.\n\n> - I might be missing something here, but this isn't actually making\n> the relfilenode 56 bits, is it? 
The reason to do that is to make the\n> BufferTag smaller, so I expected to see that BufferTag either used\n> bitfields like RelFileNumber relNumber:56 and ForkNumber forkNum:8, or\n> else that it just declared a single field for both as uint64 and used\n> accessor macros or static inlines to separate them out. But it doesn't\n> seem to do either of those things, which seems like it can't be right.\n> On a related note, I think it would be better to declare RelFileNumber\n> as an unsigned type even though we have no use for the high bit; we\n> have, equally, no use for negative values. It's easier to reason about\n> bit-shifting operations with unsigned types.\n\nOops, I somehow missed merging that change into the patch. I have\nchanged it as below and adjusted the macros:\n\ntypedef struct buftag\n{\n    Oid         spcOid;          /* tablespace oid. */\n    Oid         dbOid;           /* database oid. */\n    uint32      relNumber_low;   /* relfilenumber 32 lower bits */\n    uint32      relNumber_hi:24; /* relfilenumber 24 high bits */\n    uint32      forkNum:8;       /* fork number */\n    BlockNumber blockNum;        /* blknum relative to begin of reln */\n} BufferTag;\n\nI think we need to split the field like this to keep the BufferTag\n4-byte aligned, otherwise the size of the structure would increase.\n\n> - I also think that the cross-version compatibility stuff in\n> pg_buffercache isn't quite right. It does values[1] =\n> ObjectIdGetDatum(fctx->record[i].relfilenumber). But I think what it\n> ought to do is dependent on the output type. If the output type is\n> int8, then it ought to do values[1] = Int64GetDatum((int64)\n> fctx->record[i].relfilenumber), and if it's OID, then it ought to do\n> values[1] = ObjectIdGetDatum((Oid) fctx->record[i].relfilenumber)).\n> The macro that you use needs to be based on the output SQL type, not\n> the C data type.\n\nFixed.\n\n> - I think it might be a good idea to allocate RelFileNumbers in much\n> smaller batches than we do OIDs. 8192 feels wasteful to me. 
It\n> shouldn't practically matter, because if we have 56 bits of bit space\n> and so even if we repeatedly allocate 2^13 RelFileNumbers and then\n> crash, we can still crash 2^41 times before we completely run out of\n> numbers, and 2 trillion crashes ought to be enough for anyone. But I\n> see little benefit from being so profligate. You can allocate an OID\n> as an identifier for a catalog tuple or a TOAST chunk, but a\n> RelFileNumber requires a filesystem operation, so the amount of work\n> that is needed to use up 8192 RelFileNumbers is a lot bigger than the\n> amount of work required to use up 8192 OIDs. If we dropped this down\n> to 128, or 64, or 256, would anything bad happen?\n\nThis makes sense so I have changed to 64.\n\n> - Do we really want GetNewRelFileNumber() to call access() just for a\n> can't-happen scenario? Can't we catch this problem later when we\n> actually go to create the files on disk?\n\nYeah we don't need to, actually we can completely get rid of\nGetNewRelFileNumber() function and we can directly call\nGenerateNewRelFileNumber() and in fact we can rename\nGenerateNewRelFileNumber() to GetNewRelFileNumber(). So I have done\nthese changes.\n\n> - The patch updates the comments in XLogPrefetcherNextBlock to talk\n> about relfilenumbers being reused rather than relfilenodes being\n> reused, which is fine except that we're sorta kinda not doing that any\n> more as noted above. I don't really know what these comments ought to\n> say instead but perhaps more than a mechanical update is in order.\n\nChanged\n\n> This applies, even more, to the comments above mdunlink(). Apart from\n> updating the existing comments, I think that the patch needs a good\n> explanation of the new scheme someplace, and what it does and doesn't\n> guarantee, which relates to the point above about making sure we know\n> exactly what we're guaranteeing and why. 
I don't know where exactly\n> this text should be positioned yet, or what it should say, but it\n> needs to go someplace. This is a fairly significant change and needs\n> to be talked about somewhere.\n\nFor now, in v4_0004**, I have removed the comment explaining why we\nneed to keep the Tombstone file and added a note on why we do not need\nto keep those files from PG16 onwards.\n\n> - I think there's still a bit of a terminology problem here. With the\n> patch set, we use RelFileNumber to refer to a single, 56-bit integer\n> and RelFileLocator to refer to that integer combined with the DB and\n> TS OIDs. But sometimes in the comments we want to talk about the\n> logical sequence of files that is identified by a RelFileLocator, and\n> that's not quite the same as either of those things. For example, in\n> tableam.h we currently say \"This callback needs to create a new\n> relation filenode for `rel`\" and how should that be changed in this\n> new naming? We're not creating a new RelFileNumber - those would need\n> to be allocated, not created, as all the numbers in the universe exist\n> already. Neither are we creating a new locator; that sounds like it\n> means assembling it from pieces. What we're doing is creating the\n> first of what may end up being a series of similarly-named files on\n> disk. I'm not exactly sure how we can refer to that in a way that is\n> clear, but it's a problem that arises here and there throughout the\n> patch.\n\nI think the comment can say\n\"This callback needs to create a new relnumber file for 'rel'\"?\n\nI have not modified this yet; I will check other places where we have\nsuch terminology issues.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 1 Jul 2022 16:12:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nI'm not feeling inspired by \"locator\", tbh. But I don't really have a great\nalternative, so ...\n\n\nOn 2022-07-01 16:12:01 +0530, Dilip Kumar wrote:\n> From f07ca9ef19e64922c6ee410707e93773d1a01d7c Mon Sep 17 00:00:00 2001\n> From: dilip kumar <dilipbalaut@localhost.localdomain>\n> Date: Sat, 25 Jun 2022 10:43:12 +0530\n> Subject: [PATCH v4 2/4] Preliminary refactoring for supporting larger\n> relfilenumber\n\nI don't think we have abbreviated buffer as 'buff' in many places? I think we\nshould either spell buffer out or use 'buf'. This is in regard to BuffTag etc.\n\n\n\n> Subject: [PATCH v4 3/4] Use 56 bits for relfilenumber to avoid wraparound\n\n> /*\n> + * GenerateNewRelFileNumber\n> + *\n> + * Similar to GetNewObjectId but instead of new Oid it generates new\n> + * relfilenumber.\n> + */\n> +RelFileNumber\n> +GetNewRelFileNumber(void)\n> +{\n> +\tRelFileNumber\t\tresult;\n> +\n> +\t/* Safety check, we should never get this far in a HS standby */\n\nNormally we don't capitalize the first character of a comment that's not a\nfull sentence (i.e. 
ending with a punctuation mark).\n\n> +\tif (RecoveryInProgress())\n> +\t\telog(ERROR, \"cannot assign RelFileNumber during recovery\");\n> +\n> +\tLWLockAcquire(RelFileNumberGenLock, LW_EXCLUSIVE);\n> +\n> +\t/* Check for the wraparound for the relfilenumber counter */\n> +\tif (unlikely (ShmemVariableCache->nextRelFileNumber > MAX_RELFILENUMBER))\n> +\t\telog(ERROR, \"relfilenumber is out of bound\");\n> +\n> +\t/* If we run out of logged for use RelFileNumber then we must log more */\n\n\"logged for use\" - looks like you reformulated the sentence incompletely.\n\n\n> +\tif (ShmemVariableCache->relnumbercount == 0)\n> +\t{\n> +\t\tXLogPutNextRelFileNumber(ShmemVariableCache->nextRelFileNumber +\n> +\t\t\t\t\t\t\t\t VAR_RFN_PREFETCH);\n\nI know this is just copied, but I find \"XLogPut\" as a prefix pretty unhelpful.\n\n\n> diff --git a/src/include/catalog/pg_class.h b/src/include/catalog/pg_class.h\n> index e1f4eef..1cf039c 100644\n> --- a/src/include/catalog/pg_class.h\n> +++ b/src/include/catalog/pg_class.h\n> @@ -31,6 +31,10 @@\n> */\n> CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,RelationRelation_Rowtype_Id) BKI_SCHEMA_MACRO\n> {\n> +\t/* identifier of physical storage file */\n> +\t/* relfilenode == 0 means it is a \"mapped\" relation, see relmapper.c */\n> +\tint64\t\trelfilenode BKI_DEFAULT(0);\n> +\n> \t/* oid */\n> \tOid\t\t\toid;\n> \n> @@ -52,10 +56,6 @@ CATALOG(pg_class,1259,RelationRelationId) BKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat\n> \t/* access method; 0 if not a table / index */\n> \tOid\t\t\trelam BKI_DEFAULT(heap) BKI_LOOKUP_OPT(pg_am);\n> \n> -\t/* identifier of physical storage file */\n> -\t/* relfilenode == 0 means it is a \"mapped\" relation, see relmapper.c */\n> -\tOid\t\t\trelfilenode BKI_DEFAULT(0);\n> -\n> \t/* identifier of table space for relation (0 means default for database) */\n> \tOid\t\t\treltablespace BKI_DEFAULT(0) BKI_LOOKUP_OPT(pg_tablespace);\n>\n\nWhat's the story behind moving relfilenode 
to the front? Alignment\nconsideration? Seems odd to move the relfilenode before the oid. If there's an\nalignment issue, can't you just swap it with reltablespace or such to resolve\nit?\n\n\n\n> From f6e8e0e7412198b02671e67d1859a7448fe83f38 Mon Sep 17 00:00:00 2001\n> From: dilip kumar <dilipbalaut@localhost.localdomain>\n> Date: Wed, 29 Jun 2022 13:24:32 +0530\n> Subject: [PATCH v4 4/4] Don't delay removing Tombstone file until next\n> checkpoint\n> \n> Currently, we can not remove the unused relfilenode until the\n> next checkpoint because if we remove them immediately then\n> there is a risk of reusing the same relfilenode for two\n> different relations during single checkpoint due to Oid\n> wraparound.\n\nWell, not quite \"currently\", because at this point we've fixed that in a prior\ncommit ;)\n\n\n> Now as part of the previous patch set we have made relfilenode\n> 56 bit wider and removed the risk of wraparound so now we don't\n> need to wait till the next checkpoint for removing the unused\n> relation file and we can clean them up on commit.\n\nHm. Wasn't there also some issue around crash-restarts benefiting from having\nthose files around until later? I think what I'm remembering is what is\nreferenced in this comment:\n\n\n> - * For regular relations, we don't unlink the first segment file of the rel,\n> - * but just truncate it to zero length, and record a request to unlink it after\n> - * the next checkpoint. Additional segments can be unlinked immediately,\n> - * however. Leaving the empty file in place prevents that relfilenumber\n> - * from being reused. The scenario this protects us from is:\n> - * 1. We delete a relation (and commit, and actually remove its file).\n> - * 2. We create a new relation, which by chance gets the same relfilenumber as\n> - *\t the just-deleted one (OIDs must've wrapped around for that to happen).\n> - * 3. 
We crash before another checkpoint occurs.\n> - * During replay, we would delete the file and then recreate it, which is fine\n> - * if the contents of the file were repopulated by subsequent WAL entries.\n> - * But if we didn't WAL-log insertions, but instead relied on fsyncing the\n> - * file after populating it (as we do at wal_level=minimal), the contents of\n> - * the file would be lost forever. By leaving the empty file until after the\n> - * next checkpoint, we prevent reassignment of the relfilenumber until it's\n> - * safe, because relfilenumber assignment skips over any existing file.\n\nThis isn't related to oid wraparound, just crashes. It's possible that the\nXLogFlush() in XLogPutNextRelFileNumber() prevents such a scenario, but if so\nit still ought to be explained here, I think.\n\n\n\n> + * Note that now we can immediately unlink the first segment of the regular\n> + * relation as well because the relfilenumber is 56 bits wide since PG 16. So\n> + * we don't have to worry about relfilenumber getting reused for some unrelated\n> + * relation file.\n\nI'm doubtful it's a good idea to start dropping at the first segment. I'm\nfairly certain that there's smgrexists() checks in some places, and they'll\nnow stop working, even if there are later segments that remained, e.g. because\nof an error in the middle of removing later segments.\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Jul 2022 21:08:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 9:38 AM Andres Freund <andres@anarazel.de> wrote:\n\nThanks for the review,\n\n> I'm not feeling inspired by \"locator\", tbh. But I don't really have a great\n> alternative, so ...\n>\n>\n> On 2022-07-01 16:12:01 +0530, Dilip Kumar wrote:\n> > From f07ca9ef19e64922c6ee410707e93773d1a01d7c Mon Sep 17 00:00:00 2001\n> > From: dilip kumar <dilipbalaut@localhost.localdomain>\n> > Date: Sat, 25 Jun 2022 10:43:12 +0530\n> > Subject: [PATCH v4 2/4] Preliminary refactoring for supporting larger\n> > relfilenumber\n>\n> I don't think we have abbreviated buffer as 'buff' in many places? I think we\n> should either spell buffer out or use 'buf'. This is in regard to BuffTag etc.\n\nOkay, I will change it to 'buf'\n\n> > Subject: [PATCH v4 3/4] Use 56 bits for relfilenumber to avoid wraparound\n\n> Normally we don't capitalize the first character of a comment that's not a\n> full sentence (i.e. ending with a punctuation mark).\n\nOkay.\n\n> \"logged for use\" - looks like you reformulated the sentence incompletely.\n\nRight, I will fix it.\n\n> > + if (ShmemVariableCache->relnumbercount == 0)\n> > + {\n> > + XLogPutNextRelFileNumber(ShmemVariableCache->nextRelFileNumber +\n> > + VAR_RFN_PREFETCH);\n>\n> I know this is just copied, but I find \"XLogPut\" as a prefix pretty unhelpful.\n\nMaybe we can change to LogNextRelFileNumber()?\n\n> What's the story behind moving relfilenode to the front? Alignment\n> consideration? Seems odd to move the relfilenode before the oid. If there's an\n> alignment issue, can't you just swap it with reltablespace or such to resolve\n> it?\n\nBecause of a test case added in this commit\n(79b716cfb7a1be2a61ebb4418099db1258f35e30). 
I did not like moving\nrelfilenode before oid either, but that commit expects such columns to be\naligned and to come before any NameData column, per these comments:\n\n===\n+--\n+-- Keep such columns before the first NameData column of the\n+-- catalog, since packagers can override NAMEDATALEN to an odd number.\n+--\n===\n\n>\n> > From f6e8e0e7412198b02671e67d1859a7448fe83f38 Mon Sep 17 00:00:00 2001\n> > From: dilip kumar <dilipbalaut@localhost.localdomain>\n> > Date: Wed, 29 Jun 2022 13:24:32 +0530\n> > Subject: [PATCH v4 4/4] Don't delay removing Tombstone file until next\n> > checkpoint\n> >\n> > Currently, we can not remove the unused relfilenode until the\n> > next checkpoint because if we remove them immediately then\n> > there is a risk of reusing the same relfilenode for two\n> > different relations during single checkpoint due to Oid\n> > wraparound.\n>\n> Well, not quite \"currently\", because at this point we've fixed that in a prior\n> commit ;)\n\nRight, I will change it, but I'm not sure whether we want to commit 0003\nand 0004 as independent patches or as a single patch.\n\n> > Now as part of the previous patch set we have made relfilenode\n> > 56 bit wider and removed the risk of wraparound so now we don't\n> > need to wait till the next checkpoint for removing the unused\n> > relation file and we can clean them up on commit.\n>\n> Hm. Wasn't there also some issue around crash-restarts benefiting from having\n> those files around until later? I think what I'm remembering is what is\n> referenced in this comment:\n\nI think due to wraparound if relfilenode gets reused by another\nrelation in the same checkpoint then there was an issue in crash\nrecovery with wal level minimal. But the root of the issue is a\nwraparound, right?\n\n>\n> > - * For regular relations, we don't unlink the first segment file of the rel,\n> > - * but just truncate it to zero length, and record a request to unlink it after\n> > - * the next checkpoint. 
Additional segments can be unlinked immediately,\n> > - * however. Leaving the empty file in place prevents that relfilenumber\n> > - * from being reused. The scenario this protects us from is:\n> > - * 1. We delete a relation (and commit, and actually remove its file).\n> > - * 2. We create a new relation, which by chance gets the same relfilenumber as\n> > - * the just-deleted one (OIDs must've wrapped around for that to happen).\n> > - * 3. We crash before another checkpoint occurs.\n> > - * During replay, we would delete the file and then recreate it, which is fine\n> > - * if the contents of the file were repopulated by subsequent WAL entries.\n> > - * But if we didn't WAL-log insertions, but instead relied on fsyncing the\n> > - * file after populating it (as we do at wal_level=minimal), the contents of\n> > - * the file would be lost forever. By leaving the empty file until after the\n> > - * next checkpoint, we prevent reassignment of the relfilenumber until it's\n> > - * safe, because relfilenumber assignment skips over any existing file.\n>\n> This isn't related to oid wraparound, just crashes. It's possible that the\n> XLogFlush() in XLogPutNextRelFileNumber() prevents such a scenario, but if so\n> it still ought to be explained here, I think.\n\nI think the root cause of the problem is oid reuse which is due to\nrelfilenode wraparound, and the problem is created if there is a crash\nafter that. 
Now we have removed the wraparound, so there won't be any\nmore reuse of relfilenodes and there is no problem during crash\nrecovery.\n\nIn XLogPutNextRelFileNumber() we need XLogFlush() just to ensure that\nafter crash recovery we do not reuse a relfilenode, because now\nwe are not checking for an existing disk file as we have completely\nremoved the wraparound.\n\nSo, in short, the problem this comment was explaining is that if a\nrelfilenode got reused in the same checkpoint due to wraparound, then\ncrash recovery would lose the contents of the new relation which had\nreused the relfilenode at wal_level minimal. But, by adding XLogFlush()\nin XLogPutNextRelFileNumber(), we are ensuring that after crash recovery\nwe do not reuse the same relfilenode, because this WAL goes to disk\nbefore we create the relation file on disk.\n\n>\n> > + * Note that now we can immediately unlink the first segment of the regular\n> > + * relation as well because the relfilenumber is 56 bits wide since PG 16. So\n> > + * we don't have to worry about relfilenumber getting reused for some unrelated\n> > + * relation file.\n>\n> I'm doubtful it's a good idea to start dropping at the first segment. I'm\n> fairly certain that there's smgrexists() checks in some places, and they'll\n> now stop working, even if there are later segments that remained, e.g. because\n> of an error in the middle of removing later segments.\n\nOkay, so you mean to say that we can first drop the remaining segment\nand at last we drop the segment 0 right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 2 Jul 2022 14:23:08 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 4:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I'm doubtful it's a good idea to start dropping at the first segment. I'm\n> > fairly certain that there's smgrexists() checks in some places, and they'll\n> > now stop working, even if there are later segments that remained, e.g. because\n> > of an error in the middle of removing later segments.\n>\n> Okay, so you mean to say that we can first drop the remaining segment\n> and at last we drop the segment 0 right?\n\nI think we need to do it in descending order, starting with the\nhighest-numbered segment and working down. md.c isn't smart about gaps\nin the sequence of files, so it's better if we don't create any gaps.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 2 Jul 2022 08:27:55 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-02 14:23:08 +0530, Dilip Kumar wrote:\n> > > + if (ShmemVariableCache->relnumbercount == 0)\n> > > + {\n> > > + XLogPutNextRelFileNumber(ShmemVariableCache->nextRelFileNumber +\n> > > + VAR_RFN_PREFETCH);\n> >\n> > I know this is just copied, but I find \"XLogPut\" as a prefix pretty unhelpful.\n> \n> Maybe we can change to LogNextRelFileNumber()?\n\nMuch better.\n\n\nHm. Now that I think about it, isn't the XlogFlush() in\nXLogPutNextRelFileNumber() problematic performance wise? Yes, we'll spread the\ncost across a number of GetNewRelFileNumber() calls, but still, an additional\nf[data]sync for every 64 relfilenodes assigned isn't cheap - today there's\nzero fsyncs when creating a sequence or table inside a transaction (there are\nsome for indexes, but there's patches to fix that).\n\nNot that I really see an obvious alternative.\n\nI guess we could try to invent a flush-log-before-write type logic for\nrelfilenodes somehow? So that the first block actually written to a file needs\nto ensure the WAL record that created the relation is flushed. But getting\nthat to work reliably seems nontrivial.\n\n\nOne thing that would be good is to add an assertion to a few places ensuring\nthat relfilenodes aren't above ->nextRelFileNumber, most importantly somewhere\nin the recovery path.\n\n\nWhy did you choose a quite small value for VAR_RFN_PREFETCH? VAR_OID_PREFETCH\nis 8192, but you chose 64 for VAR_RFN_PREFETCH?\n\nI'd spell out RFN in VAR_RFN_PREFETCH btw, it took me a bit to expand RFN to\nrelfilenode.\n\n\n> > What's the story behind moving relfilenode to the front? Alignment\n> > consideration? Seems odd to move the relfilenode before the oid. If there's an\n> > alignment issue, can't you just swap it with reltablespace or such to resolve\n> > it?\n> \n> Because of a test case added in this commit\n> (79b716cfb7a1be2a61ebb4418099db1258f35e30). 
I did not like moving\n> relfilenode before oid either, but that commit expects such columns to be\n> aligned and to come before any NameData column, per these comments:\n> \n> ===\n> +--\n> +-- Keep such columns before the first NameData column of the\n> +-- catalog, since packagers can override NAMEDATALEN to an odd number.\n> +--\n> ===\n\nThis is embarrassing. Trying to keep alignment match between C and catalog\nalignment on AIX, without actually making the system understand the alignment\nrules, is a remarkably shortsighted approach.\n\nI started a separate thread about it, since it's not really relevant to this thread:\nhttps://postgr.es/m/20220702183354.a6uhja35wta7agew%40alap3.anarazel.de\n\nMaybe we could at least make the field order to be something like\n oid, relam, relfilenode, relname\n\nthat should be ok alignment wise, keep oid first, and seems to make sense from\nan \"importance\" POV? Can't really interpret later fields without knowing relam\netc.\n\n\n\n> > > Now as part of the previous patch set we have made relfilenode\n> > > 56 bit wider and removed the risk of wraparound so now we don't\n> > > need to wait till the next checkpoint for removing the unused\n> > > relation file and we can clean them up on commit.\n> >\n> > Hm. Wasn't there also some issue around crash-restarts benefiting from having\n> > those files around until later? I think what I'm remembering is what is\n> > referenced in this comment:\n> \n> I think due to wraparound if relfilenode gets reused by another\n> relation in the same checkpoint then there was an issue in crash\n> recovery with wal level minimal. But the root of the issue is a\n> wraparound, right?\n\nI'm not convinced the tombstones were required solely in the oid wraparound\ncase before, despite the comment saying so, with wal_level=minimal. 
I gotta do\nsome non-work stuff for a bit, so I need to stop pondering this now :)\n\nI think it might be a good idea to have a few weeks in which we do *not*\nremove the tombstones, but have assertion checks against such files existing\nwhen we don't expect them to. I.e. commit 1-3, add the asserts, then commit 4\na bit later.\n\n\n> > I'm doubtful it's a good idea to start dropping at the first segment. I'm\n> > fairly certain that there's smgrexists() checks in some places, and they'll\n> > now stop working, even if there are later segments that remained, e.g. because\n> > of an error in the middle of removing later segments.\n> \n> Okay, so you mean to say that we can first drop the remaining segment\n> and at last we drop the segment 0 right?\n\nI'd use the approach Robert suggested and delete from the end, going down.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 2 Jul 2022 12:29:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sun, Jul 3, 2022 at 12:59 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hm. Now that I think about it, isn't the XlogFlush() in\n> XLogPutNextRelFileNumber() problematic performance wise? Yes, we'll spread the\n> cost across a number of GetNewRelFileNumber() calls, but still, an additional\n> f[data]sync for every 64 relfilenodes assigned isn't cheap - today there's\n> zero fsyncs when creating a sequence or table inside a transaction (there are\n> some for indexes, but there's patches to fix that).\n>\n> Not that I really see an obvious alternative.\n\nI think to see the impact we need a workload which frequently creates\nthe relfilenode, maybe we can test where parallel to pgbench we are\nfrequently creating the relation/indexes and check how much\nperformance hit we see. And if we see the impact then increasing\nVAR_RFN_PREFETCH value can help in resolving that.\n\n> I guess we could try to invent a flush-log-before-write type logic for\n> relfilenodes somehow? So that the first block actually written to a file needs\n> to ensure the WAL record that created the relation is flushed. But getting\n> that to work reliably seems nontrivial.\n\n>\n> One thing that would be good is to add an assertion to a few places ensuring\n> that relfilenodes aren't above ->nextRelFileNumber, most importantly somewhere\n> in the recovery path.\n\nYes, it makes sense.\n\n> Why did you choose a quite small value for VAR_RFN_PREFETCH? 
VAR_OID_PREFETCH\n> is 8192, but you chose 64 for VAR_RFN_PREFETCH?\n\nEarlier it was 8192, then there was a comment from Robert that we can\nuse Oid for many other things like an identifier for a catalog tuple\nor a TOAST chunk, but a RelFileNumber requires a filesystem operation,\nso the amount of work that is needed to use up 8192 RelFileNumbers is\na lot bigger than the amount of work required to use up 8192 OIDs.\n\nI think that makes sense so I reduced it to 64, but now I tend to\nthink that we also need to consider the point that after consuming\nVAR_RFN_PREFETCH we are going to do XlogFlush(), so it's better that\nwe keep it high. And as Robert told upthread, even with keeping it\n8192 we can still crash 2^41 (2 trillion) times before we completely\nrun out of the number. So I think we can easily keep it up to 8192\nand I don't think that we really need to worry much about the\nperformance impact by XlogFlush()?\n\n> I'd spell out RFN in VAR_RFN_PREFETCH btw, it took me a bit to expand RFN to\n> relfilenode.\n\nOkay.\n\n> This is embarrassing. Trying to keep alignment match between C and catalog\n> alignment on AIX, without actually making the system understand the alignment\n> rules, is a remarkably shortsighted approach.\n>\n> I started a separate thread about it, since it's not really relevant to this thread:\n> https://postgr.es/m/20220702183354.a6uhja35wta7agew%40alap3.anarazel.de\n>\n> Maybe we could at least make the field order to be something like\n> oid, relam, relfilenode, relname\n\nYeah that we can do.\n\n> that should be ok alignment wise, keep oid first, and seems to make sense from\n> an \"importance\" POV? Can't really interpret later fields without knowing relam\n> etc.\n\nRight.\n\n> > I think due to wraparound if relfilenode gets reused by another\n> > relation in the same checkpoint then there was an issue in crash\n> > recovery with wal level minimal. 
But the root of the issue is a\n> > wraparound, right?\n>\n> I'm not convinced the tombstones were required solely in the oid wraparound\n> case before, despite the comment saying so, with wal_level=minimal. I gotta do\n> some non-work stuff for a bit, so I need to stop pondering this now :)\n>\n> I think it might be a good idea to have a few weeks in which we do *not*\n> remove the tombstones, but have assertion checks against such files existing\n> when we don't expect them to. I.e. commit 1-3, add the asserts, then commit 4\n> a bit later.\n\nI think this is a good idea.\n\n> > Okay, so you mean to say that we can first drop the remaining segment\n> > and at last we drop the segment 0 right?\n>\n> I'd use the approach Robert suggested and delete from the end, going down.\n\nYeah, I got it, thanks.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 3 Jul 2022 16:02:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 2, 2022 at 3:29 PM Andres Freund <andres@anarazel.de> wrote:\n> Why did you choose a quite small value for VAR_RFN_PREFETCH? VAR_OID_PREFETCH\n> is 8192, but you chose 64 for VAR_RFN_PREFETCH?\n\nAs Dilip mentioned, I suggested a lower value. If that's too low, we\ncan go higher, but I think there is value in not making this\nexcessively large. Somebody somewhere is going to have a database\nthat's crash-restarting like mad, and I don't want that person to run\nthrough an insane number of relfilenodes for no reason. I don't think\nthere are going to be a lot of people creating thousands upon\nthousands of relations in a short period of time, and I'm not sure\nthat it's a big deal if those who do end up having to wait for a few\nextra xlog flushes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 3 Jul 2022 10:32:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sun, Jul 3, 2022 at 8:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Jul 2, 2022 at 3:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > Why did you choose a quite small value for VAR_RFN_PREFETCH? VAR_OID_PREFETCH\n> > is 8192, but you chose 64 for VAR_RFN_PREFETCH?\n>\n> As Dilip mentioned, I suggested a lower value. If that's too low, we\n> can go higher, but I think there is value in not making this\n> excessively large. Somebody somewhere is going to have a database\n> that's crash-restarting like mad, and I don't want that person to run\n> through an insane number of relfilenodes for no reason. I don't think\n> there are going to be a lot of people creating thousands upon\n> thousands of relations in a short period of time, and I'm not sure\n> that it's a big deal if those who do end up having to wait for a few\n> extra xlog flushes.\n\nHere is the updated version of the patch.\n\nPatch 0001-0003 are the same with review comments fixes given by\nAndres, 0004 as an extra assert patch suggested by Andres, this can be\nmerged with 0003. 
Basically, during recovery we add asserts checking\n\"relfilenumbers aren't above ->nextRelFileNumber,\" and also the assert\nfor checking that after we allocate a new relfile number the file\nshould not already exist on the disk so that once we are sure that\nthis assertion is not hitting then maybe we are safe for removing the\nTombStone files immediately what we were doing in 0005.\n\nIn 0005 I fixed the file delete order so now we are deleting in\ndescending order, for that first we need to count the number of\nsegments by doing stat() on each file and after that we need to go\nahead and unlink in the descending order.\n\nThe VAR_RELFILENUMBER_PREFETCH is still 64 as we have not yet\nconcluded on that, and as discussed I will test some performance to\nsee whether we have some obvious impact with different values of this.\nMaybe I will start with some very small numbers so that we have some\nimpact.\n\nI thought about this comment from Robert\n> that's not quite the same as either of those things. For example, in\n> tableam.h we currently say \"This callback needs to create a new\n> relation filenode for `rel`\" and how should that be changed in this\n> new naming? We're not creating a new RelFileNumber - those would need\n> to be allocated, not created, as all the numbers in the universe exist\n> already. Neither are we creating a new locator; that sounds like it\n> means assembling it from pieces.\n\nI think that \"This callback needs to create a new relation storage\nfor `rel`\" looks better.\n\nI have again reviewed 0001 and 0003 and found some discrepancies in\nusage of relfilenumber vs relfilelocator and fixed those, also some\nplaces InvalidOid were use instead of InvalidRelFileNumber.\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 5 Jul 2022 14:02:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 1, 2022 at 6:42 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > - I might be missing something here, but this isn't actually making\n> > the relfilenode 56 bits, is it? The reason to do that is to make the\n> > BufferTag smaller, so I expected to see that BufferTag either used\n> > bitfields like RelFileNumber relNumber:56 and ForkNumber forkNum:8, or\n> > else that it just declared a single field for both as uint64 and used\n> > accessor macros or static inlines to separate them out. But it doesn't\n> > seem to do either of those things, which seems like it can't be right.\n> > On a related note, I think it would be better to declare RelFileNumber\n> > as an unsigned type even though we have no use for the high bit; we\n> > have, equally, no use for negative values. It's easier to reason about\n> > bit-shifting operations with unsigned types.\n>\n> Opps, somehow missed to merge that change in the patch. Changed that\n> like below and adjusted the macros.\n> typedef struct buftag\n> {\n> Oid spcOid; /* tablespace oid. */\n> Oid dbOid; /* database oid. */\n> uint32 relNumber_low; /* relfilenumber 32 lower bits */\n> uint32 relNumber_hi:24; /* relfilenumber 24 high bits */\n> uint32 forkNum:8; /* fork number */\n> BlockNumber blockNum; /* blknum relative to begin of reln */\n> } BufferTag;\n>\n> I think we need to break like this to keep the BufferTag 4 byte\n> aligned otherwise the size of the structure will be increased.\n\nWell, I guess you're right. That's a bummer. In that case I'm a little\nunsure whether it's worth using bit fields at all. Maybe we should\njust write uint32 something[2] and use macros after that.\n\nAnother approach could be to accept the padding and define a constant\nSizeOfBufferTag and use that as the hash table element size, like we\ndo for the sizes of xlog records.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 16:43:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 4:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I thought about this comment from Robert\n> > that's not quite the same as either of those things. For example, in\n> > tableam.h we currently say \"This callback needs to create a new\n> > relation filenode for `rel`\" and how should that be changed in this\n> > new naming? We're not creating a new RelFileNumber - those would need\n> > to be allocated, not created, as all the numbers in the universe exist\n> > already. Neither are we creating a new locator; that sounds like it\n> > means assembling it from pieces.\n>\n> I think that \"This callback needs to create a new relation storage\n> for `rel`\" looks better.\n\nI like the idea, but it would sound better to say \"create new relation\nstorage\" rather than \"create a new relation storage.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Jul 2022 17:02:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 2:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 5, 2022 at 4:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I thought about this comment from Robert\n> > > that's not quite the same as either of those things. For example, in\n> > > tableam.h we currently say \"This callback needs to create a new\n> > > relation filenode for `rel`\" and how should that be changed in this\n> > > new naming? We're not creating a new RelFileNumber - those would need\n> > > to be allocated, not created, as all the numbers in the universe exist\n> > > already. Neither are we creating a new locator; that sounds like it\n> > > means assembling it from pieces.\n> >\n> > I think that \"This callback needs to create a new relation storage\n> > for `rel`\" looks better.\n>\n> I like the idea, but it would sound better to say \"create new relation\n> storage\" rather than \"create a new relation storage.\"\n\nOkay, changed that and changed a few more occurrences in 0001 which\nwere on similar lines. I also tested the performance of pg_bench\nwhere concurrently I am running the script which creates/drops\nrelation but I do not see any regression with fairly small values of\nVAR_RELNUMBER_PREFETCH, the smallest value I tried was 8. That\ndoesn't mean I am suggesting this small value but I think we can keep\nthe value something like 512 or 1024 without worrying much about the\nperformance, so changed to 512 in the latest patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 6 Jul 2022 17:24:43 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 7:55 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Okay, changed that and changed a few more occurrences in 0001 which\n> were on similar lines. I also tested the performance of pg_bench\n> where concurrently I am running the script which creates/drops\n> relation but I do not see any regression with fairly small values of\n> VAR_RELNUMBER_PREFETCH, the smallest value I tried was 8. That\n> doesn't mean I am suggesting this small value but I think we can keep\n> the value something like 512 or 1024 without worrying much about the\n> performance, so changed to 512 in the latest patch.\n\nOK, I have committed 0001 now with a few changes. pgindent did not\nagree with some of your whitespace changes, and I also cleaned up a\nfew long lines. I replaced one instance of InvalidOid with\nInvalidRelFileNumber also, and changed a word in a comment.\n\nI think 0002 and 0003 need more work yet; I'll try to write a review\nof those soon.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Jul 2022 11:57:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 6, 2022 at 11:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think 0002 and 0003 need more work yet; I'll try to write a review\n> of those soon.\n\nRegarding 0002:\n\nI don't particularly like the names BufTagCopyRelFileLocator and\nBufTagRelFileLocatorEquals. My suggestion is to rename\nBufTagRelFileLocatorEquals to BufTagMatchesRelFileLocator, because it\ndoesn't really make sense to me to talk about equality between values\nof different data types. Instead of BufTagCopyRelFileLocator I would\nprefer BufTagGetRelFileLocator. That would make it more similar to\nBufTagGetFileNumber and BufTagSetFileNumber, which I think would be a\ngood thing.\n\nOther than that I think 0002 seems fine.\n\nRegarding 0003:\n\n /*\n * Don't try to prefetch\nanything in this database until\n- * it has been created, or we\nmight confuse the blocks of\n- * different generations, if a\ndatabase OID or\n- * relfilenumber is reused.\nIt's also more efficient than\n+ * it has been created,\nbecause it's more efficient than\n * discovering that relations\ndon't exist on disk yet with\n * ENOENT errors.\n */\n\nI'm worried that this might not be correct. The comment changes here\n(and I think also in some other plces) imply that we've eliminated\nrelfilenode ruse, but I think that's not true. createdb() and movedb()\ndon't seem to be modified, so I think it's possible to just copy a\ntemplate database over without change, which means that relfilenumbers\nand even relfilelocators could be reused. So I feel like maybe this\nand similar places shouldn't be modified in this way. Am I\nmisunderstanding?\n\n /*\n- * Relfilenumbers are not unique in databases across\ntablespaces, so we need\n- * to allocate a new one in the new tablespace.\n+ * Generate a new relfilenumber. Although relfilenumber are\nunique within a\n+ * cluster, we are unable to use the old relfilenumber since unused\n+ * relfilenumber are not unlinked until commit. 
So if within a\n+ * transaction, if we set the old tablespace again, we will get conflicting\n+ * relfilenumber file.\n */\n- newrelfilenumber = GetNewRelFileNumber(newTableSpace, NULL,\n- rel->rd_rel->relpersistence);\n+ newrelfilenumber = GetNewRelFileNumber();\n\nI can't clearly understand this comment. Is it saying that the code\nwhich follows is broken and needs to be fixed by a future patch before\nthings are OK again? If so, that's not good.\n\n- * callers should be GetNewOidWithIndex() and GetNewRelFileNumber() in\n- * catalog/catalog.c.\n+ * callers should be GetNewOidWithIndex() in catalog/catalog.c.\n\nIf there is only one, it should say \"caller\", not \"callers\".\n\n Orphan files are harmless --- at worst they waste a bit of disk space ---\n-because we check for on-disk collisions when allocating new relfilenumber\n-OIDs. So cleaning up isn't really necessary.\n+because relfilenumber is 56 bit wide so logically there should not be any\n+collisions. So cleaning up isn't really necessary.\n\nI don't agree that orphaned files are harmless, but changing that is\nbeyond the scope of this patch. I think that the way you've ended the\nsentence isn't sufficiently clear and correct even if we accept the\nprinciple that orphaned files are harmless. What I think we should\nsay instead is \"because the relfilenode counter is monotonically\nincreasing. The maximum value is 2^56-1, and there is no provision for\nwraparound.\"\n\n+ /*\n+ * Check if we set the new relfilenumber then do we run out of the logged\n+ * relnumber, if so then we need to WAL log again. 
Otherwise, just adjust\n+ * the relnumbercount.\n+ */\n+ relnumbercount = relnumber - ShmemVariableCache->nextRelFileNumber;\n+ if (ShmemVariableCache->relnumbercount <= relnumbercount)\n+ {\n+ LogNextRelFileNumber(relnumber + VAR_RELNUMBER_PREFETCH);\n+ ShmemVariableCache->relnumbercount = VAR_RELNUMBER_PREFETCH;\n+ }\n+ else\n+ ShmemVariableCache->relnumbercount -= relnumbercount;\n\nWould it be clearer, here and elsewhere, if VariableCacheData tracked\nnextRelFileNumber and nextUnloggedRelFileNumber instead of\nnextRelFileNumber and relnumbercount? I'm not 100% sure, but the idea\nseems worth considering.\n\n+ * Flush xlog record to disk before returning. To protect against file\n+ * system changes reaching the disk before the XLOG_NEXT_RELFILENUMBER log.\n\nThe way this is worded, you would need it to be just one sentence,\nlike \"Flush xlog record to disk before returning to protect\nagainst...\". Or else add \"this is,\" like \"This is to protect\nagainst...\"\n\nBut I'm thinking maybe we could reword it a little more, perhaps\nsomething like this: \"Flush xlog record to disk before returning. We\nwant to be sure that the in-memory nextRelFileNumber value is always\nlarger than any relfilenumber that is already in use on disk. To\nmaintain that invariant, we must make sure that the record we just\nlogged reaches the disk before any new files are created.\"\n\nThis isn't a full review, I think, but I'm kind of out of time and\nenergy for today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Jul 2022 17:24:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 2:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\nThanks for committing the 0001.\n\n> On Wed, Jul 6, 2022 at 11:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I think 0002 and 0003 need more work yet; I'll try to write a review\n> > of those soon.\n>\n> Regarding 0002:\n>\n> I don't particularly like the names BufTagCopyRelFileLocator and\n> BufTagRelFileLocatorEquals. My suggestion is to rename\n> BufTagRelFileLocatorEquals to BufTagMatchesRelFileLocator, because it\n> doesn't really make sense to me to talk about equality between values\n> of different data types. Instead of BufTagCopyRelFileLocator I would\n> prefer BufTagGetRelFileLocator. That would make it more similar to\n> BufTagGetFileNumber and BufTagSetFileNumber, which I think would be a\n> good thing.\n>\n> Other than that I think 0002 seems fine.\n\nChanged as suggested. Although I feel BufTagCopyRelFileLocator is\nactually copying the relfilelocator from buffer tag to an input\nvariable, I am fine with BufTagGetRelFileLocator so that it is similar\nto the other names.\n\nChanged some other macro names as below because field name they are\ngetting/setting is relNumber\nBufTagSetFileNumber -> BufTagSetRelNumber\nBufTagGetFileNumber -> BufTagGetRelNumber\n\n> Regarding 0003:\n\n> I'm worried that this might not be correct. The comment changes here\n> (and I think also in some other plces) imply that we've eliminated\n> relfilenode ruse, but I think that's not true. createdb() and movedb()\n> don't seem to be modified, so I think it's possible to just copy a\n> template database over without change, which means that relfilenumbers\n> and even relfilelocators could be reused. So I feel like maybe this\n> and similar places shouldn't be modified in this way. 
Am I\n> misunderstanding?\n\nI think you are right, so I changed it.\n\n> /*\n> - * Relfilenumbers are not unique in databases across\n> tablespaces, so we need\n> - * to allocate a new one in the new tablespace.\n> + * Generate a new relfilenumber. Although relfilenumber are\n> unique within a\n> + * cluster, we are unable to use the old relfilenumber since unused\n> + * relfilenumber are not unlinked until commit. So if within a\n> + * transaction, if we set the old tablespace again, we will\n> get conflicting\n> + * relfilenumber file.\n> */\n> - newrelfilenumber = GetNewRelFileNumber(newTableSpace, NULL,\n> -\n> rel->rd_rel->relpersistence);\n> + newrelfilenumber = GetNewRelFileNumber();\n>\n> I can't clearly understand this comment. Is it saying that the code\n> which follows is broken and needs to be fixed by a future patch before\n> things are OK again? If so, that's not good.\n\nNo it is not broken in this patch. Basically, before our patch the\nreason for allocating the new relfilenumber was that if we create the\nfile with oldrelfilenumber in new tablespace then it is possible that\nin the new tablespace file with same name exist because relfilenumber\nwas unique in databases across tablespaces so there could be conflict.\nBut now that is not the case but still we can not reuse the old\nrelfilenumber because from the old tablespace the old relfilenumber\nfile is not removed until the next checkpoint so if we move the table\nback to the old tablespace again then there could be conflict. And\neven after we get the final patch of removing the tombstone file on\ncommit then also we can not reuse the old relfilenumber because within\na transaction we can switch between the tablespaces multiple times and\nthe relfilenumber file from the old tablespace will be removed only on\ncommit. 
This is what I am trying to explain in the comment.\n\nNow I have modified the comment slightly, such that in 0002 I am\nsaying files are not removed until the next checkpoint and in 0004 I\nam modifying that and saying not removed until commit.\n\n> - * callers should be GetNewOidWithIndex() and GetNewRelFileNumber() in\n> - * catalog/catalog.c.\n> + * callers should be GetNewOidWithIndex() in catalog/catalog.c.\n>\n> If there is only one, it should say \"caller\", not \"callers\".\n>\n> Orphan files are harmless --- at worst they waste a bit of disk space ---\n> -because we check for on-disk collisions when allocating new relfilenumber\n> -OIDs. So cleaning up isn't really necessary.\n> +because relfilenumber is 56 bit wide so logically there should not be any\n> +collisions. So cleaning up isn't really necessary.\n>\n> I don't agree that orphaned files are harmless, but changing that is\n> beyond the scope of this patch. I think that the way you've ended the\n> sentence isn't sufficiently clear and correct even if we accept the\n> principle that orphaned files are harmless. What I think we should\n> stay instead is \"because the relfilenode counter is monotonically\n> increasing. The maximum value is 2^56-1, and there is no provision for\n> wraparound.\"\n\nDone\n\n> + /*\n> + * Check if we set the new relfilenumber then do we run out of\n> the logged\n> + * relnumber, if so then we need to WAL log again. 
Otherwise,\n> just adjust\n> + * the relnumbercount.\n> + */\n> + relnumbercount = relnumber - ShmemVariableCache->nextRelFileNumber;\n> + if (ShmemVariableCache->relnumbercount <= relnumbercount)\n> + {\n> + LogNextRelFileNumber(relnumber + VAR_RELNUMBER_PREFETCH);\n> + ShmemVariableCache->relnumbercount = VAR_RELNUMBER_PREFETCH;\n> + }\n> + else\n> + ShmemVariableCache->relnumbercount -= relnumbercount;\n>\n> Would it be clearer, here and elsewhere, if VariableCacheData tracked\n> nextRelFileNumber and nextUnloggedRelFileNumber instead of\n> nextRelFileNumber and relnumbercount? I'm not 100% sure, but the idea\n> seems worth considering.\n\nI think it is in line with oidCount, what do you think?\n\n>\n> + * Flush xlog record to disk before returning. To protect against file\n> + * system changes reaching the disk before the\n> XLOG_NEXT_RELFILENUMBER log.\n>\n> The way this is worded, you would need it to be just one sentence,\n> like \"Flush xlog record to disk before returning to protect\n> against...\". Or else add \"this is,\" like \"This is to protect\n> against...\"\n>\n> But I'm thinking maybe we could reword it a little more, perhaps\n> something like this: \"Flush xlog record to disk before returning. We\n> want to be sure that the in-memory nextRelFileNumber value is always\n> larger than any relfilenumber that is already in use on disk. To\n> maintain that invariant, we must make sure that the record we just\n> logged reaches the disk before any new files are created.\"\n\nDone\n\n> This isn't a full review, I think, but I'm kind of out of time and\n> energy for today.\n\nI have updated some other comments as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 7 Jul 2022 17:14:48 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Trying to compile with 0001 and 0002 applied and -Wall -Werror in use, I get:\n\nbuf_init.c:119:4: error: implicit truncation from 'int' to bit-field\nchanges value from -1 to 255 [-Werror,-Wbitfield-constant-conversion]\n CLEAR_BUFFERTAG(buf->tag);\n ^~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../src/include/storage/buf_internals.h:122:14: note: expanded\nfrom macro 'CLEAR_BUFFERTAG'\n (a).forkNum = InvalidForkNumber, \\\n ^ ~~~~~~~~~~~~~~~~~\n1 error generated.\n\nMore review comments:\n\nIn pg_buffercache_pages_internal(), I suggest that we add an error\ncheck. If fctx->record[i].relfilenumber is greater than the largest\nvalue that can be represented as an OID, then let's do something like:\n\nERROR: relfilenode is too large to be represented as an OID\nHINT: Upgrade the extension using ALTER EXTENSION pg_buffercache UPDATE\n\nThat way, instead of confusing people by giving them an incorrect\nanswer, we'll push them toward a step that they may have overlooked.\n\nIn src/backend/access/transam/README, I think the sentence \"So\ncleaning up isn't really necessary.\" isn't too helpful. I suggest\nreplacing it with \"Thus, on-disk collisions aren't possible.\"\n\n> I think it is in line with oidCount, what do you think?\n\nOh it definitely is, and maybe it's OK the way you have it. But the\nOID stuff has wraparound to worry about, and this doesn't; and this\nhas the SetNextRelFileNumber and that doesn't; so it is not\nnecessarily the case that the design which is best for that case is\nalso best for this case.\n\nI believe that the persistence model for SetNextRelFileNumber needs\nmore thought. Right now I believe it's relying on the fact that, after\nwe try to restore the dump, we'll try to perform a clean shutdown of\nthe server before doing anything important, and that will persist the\nfinal value, whatever it ends up being. However, there's no comment\nexplaining that theory of operation, and it seems pretty fragile\nanyway. 
What if things don't go as planned? Suppose the power goes out\nhalfway through restoring the dump, and the user for some reason then\ngives up on running pg_upgrade and just tries to do random things with\nthat server? Then I think there will be trouble, because nothing has\nupdated the nextrelfilenumber value and yet there are potentially new\nfiles on disk. Maybe that's a stretch since I think other things might\nalso break if you do that, but I'm also not sure that's the only\nscenario to worry about, especially if you factor in the possibility\nof future code changes, like changes to the timing of when we shut\ndown and restart the server during pg_upgrade, or other uses of\nbinary-upgrade mode, or whatever. I don't know. Perhaps it's not\nactually broken but I'm inclined to think it should be logging its\nchanges.\n\nA related thought is that I don't think this patch has as many\ncross-checks as it could have. For instance, suppose that when we\nreplay a WAL record that creates relation storage, we cross-check that\nthe value is less than the counter. I think you have a check in there\nsomeplace that will error out if there is an actual collision --\nalthough I can't find it at the moment, and possibly we want to add\nsome comments there even if it's in existing code -- but this kind of\nthing would detect bugs that could lead to collisions even if no\ncollision actually occurs, e.g. because a duplicate relfilenumber is\nused but in a different database or tablespace. It might be worth\nspending some time thinking about other possible cross-checks too.\nWe're trying to create a system where the relfilenumber counter is\nalways ahead of all the relfilenumbers used on disk, but the coupling\nbetween the relfilenumber-advancement machinery and the\nmake-files-on-disk machinery is pretty loose, and so there is a risk\nthat bugs could escape detection. 
Whatever we can do to increase the\nprobability of noticing when things have gone wrong, and/or to notice\nit quicker, will be good.\n\n+ if (!IsBinaryUpgrade)\n+ elog(ERROR, \"the RelFileNumber can be set only during\nbinary upgrade\");\n\nI think you should remove the word \"the\". Primary error messages are\nwritten telegram-style and \"the\" is usually omitted, especially at the\nbeginning of the message.\n\n+ * This should not impact the performance, since we are not WAL logging\n+ * it for every allocation, but only after allocating 512 RelFileNumber.\n\nI think this claim is overly bold, and it would be better if the\ncurrent value of the constant weren't encoded in the comment. I'm not\nsure we really need this part of the comment at all, but if we do,\nmaybe it should be reworded to something like: This is potentially a\nsomewhat expensive operation, but fortunately we only need to do it\nfor every VAR_RELNUMBER_PREFETCH new relfilenodes. Or maybe it's\nbetter to put this explanation in GetNewRelFileNumber instead, e.g.\n\"If we run out of logged RelFileNumbers, then we must log more, and\nalso wait for the xlog record to be flushed to disk. This is somewhat\nexpensive, but hopefully VAR_RELNUMBER_PREFETCH is large enough that\nthis doesn't slow things down too much.\"\n\nOne thing that isn't great about this whole scheme is that it can lead\nto lock pile-ups. Once somebody is waiting for an\nXLOG_NEXT_RELFILENUMBER record to reach the disk, any other backend\nthat tries to get a new relfilenumber is going to block waiting for\nRelFileNumberGenLock. I wonder whether this effect is observable in\npractice: suppose we just create relations in a tight loop from inside\na stored procedure, and do that simultaneously in multiple backends?\nWhat does the wait event distribution look like? Can we observe a lot\nof RelFileNumberGenLock events or not really? 
I guess if we reduce\nVAR_RELNUMBER_PREFETCH enough we can probably create a problem, but\nhow small a value is needed?\n\nOne thing we could think about doing here is try to stagger the xlog\nand the flush. When we've used VAR_RELNUMBER_PREFETCH/2\nrelfilenumbers, log a record reserving VAR_RELNUMBER_PREFETCH from\nwhere we are now, and remember the LSN. When we've used up our entire\nprevious allocation, XLogFlush() that record before allowing the\nadditional values to be used. The bookkeeping would be a bit more\ncomplicated than currently, but I don't think it would be too bad. I'm\nnot sure how much it would actually help, though, or whether we need\nit. If new relfilenumbers are being used up really quickly, then maybe\nthe record won't get flushed into the background before we run out of\navailable numbers anyway, and if they aren't, then maybe it doesn't\nmatter. On the other hand, even one transaction commit between when\nthe record is logged and when we run out of the previous allocation is\nenough to force a flush, at least with synchronous_commit=on, so maybe\nthe chances of being able to piggyback on an existing flush are not so\nbad after all. I'm not sure.\n\n+ * Generate a new relfilenumber. We can not reuse the old relfilenumber\n+ * because the unused relfilenumber files are not unlinked\nuntil the next\n+ * checkpoint. So if move the relation to the old tablespace again, we\n+ * will get the conflicting relfilenumber file.\n\nThis is much clearer now but the grammar has some issues, e.g. \"the\nunused relfilenumber\" should be just \"unused relfilenumber\" and \"So if\nmove\" is not right either. I suggest: We cannot reuse the old\nrelfilenumber because of the possibility that that relation will be\nmoved back to the original tablespace before the next checkpoint. 
At\nthat point, the first segment of the main fork won't have been\nunlinked yet, and an attempt to create new relation storage with that\nsame relfilenumber will fail.\"\n\nIn theory I suppose there's another way we could solve this problem:\nkeep using the same relfilenumber, and if the scenario described here\noccurs, just reuse the old file. The reason why we can't do that today\nis because we could be running with wal_level=minimal and replace a\nrelation with one whose contents aren't logged. If WAL replay then\nreplays the drop, we're in trouble. But if the only time we reuse a\nrelfilenumber for new relation storage is when relations are moved\naround, then I think that scenario can't happen. However, I think\nassigning a new relfilenumber is probably better, because it gets us\ncloser to a world in which relfilenumbers are never reused at all. It\ndoesn't get us all the way there because of createdb() and movedb(),\nbut it gets us closer and I prefer that.\n\n+ * XXX although this all was true when the relfilenumbers were 32 bits wide but\n+ * now the relfilenumbers are 56 bits wide so we don't have risk of\n+ * relfilenumber being reused so in future we can immediately unlink the first\n+ * segment as well. Although we can reuse the relfilenumber during createdb()\n+ * using file copy method or during movedb() but the above scenario is only\n+ * applicable when we create a new relation.\n\nHere is an edited version:\n\nXXX. Although all of this was true when relfilenumbers were 32 bits wide, they\nare now 56 bits wide and do not wrap around, so in the future we can change\nthe code to immediately unlink the first segment of the relation along\nwith all the\nothers. 
We still do reuse relfilenumbers when createdb() is performed using the\nfile-copy method or during movedb(), but the scenario described above can only\nhappen when creating a new relation.\n\nI think that pg_filenode_relation,\nbinary_upgrade_set_next_heap_relfilenode, and other functions that are\nnow going to be accepting a RelFileNode using the SQL int8 datatype\nshould bounds-check the argument. It could be <0 or >2^56, and I\nbelieve it'd be best to throw an error for that straight off. The\nthree functions in pg_upgrade_support.c could share a static\nsubroutine for this, to avoid duplicating code.\n\nThis bounds-checking issue also applies to the -f argument to pg_checksums.\n\nI notice that the patch makes no changes to relmapper.c, and I think\nthat's a problem. Notice in particular:\n\n#define MAX_MAPPINGS 62 /* 62 * 8 + 16 = 512 */\n\nI believe that making RelFileNumber into a 64-bit value will cause the\n8 in the calculation above to change to 16, defeating the intention\nthat the size of the file ought to be the smallest imaginable size of\na disk sector. It does seem like it would have been smart to include a\nStaticAssertStmt in this file someplace that checks that the data\nstructure has the expected size, and now might be a good time, perhaps\nin a separate patch, to add one. If we do nothing fancy here, the\nmaximum number of mappings will have to be reduced from 62 to 31,\nwhich is a problem because global/pg_filenode.map currently has 48\nentries. We could try to arrange to squeeze padding out of the\nRelMapping struct, which would let us use just 12 bytes per mapping,\nwhich would increase the limit to 41, but that's still less than we're\nusing already, never mind leaving room for future growth.\n\nI don't know what to do about this exactly. 
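A StaticAssertStmt of the kind suggested could look roughly like this. This is a simplified sketch: the struct layout and the 16 bytes of magic/count/CRC/pad overhead are assumed from the comment's arithmetic, not copied from relmapper.c.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t RelFileNumber;     /* widened to 64 bits on disk, 56 used */

/* Simplified stand-in for relmapper.c's per-entry struct. */
typedef struct RelMapping
{
    uint32_t      mapoid;           /* OID of a mapped catalog relation */
    RelFileNumber mapfilenumber;    /* its current relfilenumber */
} RelMapping;

#define MAX_MAPPINGS 31             /* 31 * 16 + 16 = 512 */

/* C11 compile-time check: the whole map file must still fit in the
 * smallest assumed disk sector, so a torn write cannot split it. */
_Static_assert(MAX_MAPPINGS * sizeof(RelMapping) + 16 <= 512,
               "pg_filenode.map must fit in a 512-byte sector");
```

Note how the padding inside RelMapping (4-byte OID aligned up against an 8-byte relfilenumber) is what pushes the entry to 16 bytes and halves the mapping budget.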
I believe it's been\npreviously suggested that the actual minimum sector size on reasonably\nmodern hardware is never as small as 512 bytes, so maybe the file size\ncan just be increased to 1kB or something. If that idea is judged\nunsafe, I can think of two other possible approaches offhand. One is\nthat we could move away from the idea of storing the OIDs in the file\nalong with the RelFileNodes, and instead store the offset for a given\nRelFileNode at a fixed offset in the file. That would require either\nhard-wiring offset tables into the code someplace, or generating them\nas part of the build process, with separate tables for shared and\ndatabase-local relation map files. The other is that we could have\nmultiple 512-byte sectors and try to arrange for each relation to be\nin the same sector with the indexes of that relation, since the\ncomments in relmapper.c say this:\n\n * aborts. An important factor here is that the indexes and toast table of\n * a mapped catalog must also be mapped, so that the rewrites/relocations of\n * all these files commit in a single map file update rather than being tied\n * to transaction commit.\n\nThis suggests that atomicity is required across a table and its\nindexes, but not that it's needed across arbitrary sets of entries in the\nfile.\n\nWhatever we do, we shouldn't forget to bump RELMAPPER_FILEMAGIC.\n\n--- a/src/include/catalog/pg_class.h\n+++ b/src/include/catalog/pg_class.h\n@@ -34,6 +34,13 @@ CATALOG(pg_class,1259,RelationRelationId)\nBKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat\n /* oid */\n Oid oid;\n\n+ /* access method; 0 if not a table / index */\n+ Oid relam BKI_DEFAULT(heap) BKI_LOOKUP_OPT(pg_am);\n+\n+ /* identifier of physical storage file */\n+ /* relfilenode == 0 means it is a \"mapped\" relation, see relmapper.c */\n+ int64 relfilenode BKI_DEFAULT(0);\n+\n /* class name */\n\n NameData relname;\n\n@@ -49,13 +56,6 @@ CATALOG(pg_class,1259,RelationRelationId)\nBKI_BOOTSTRAP BKI_ROWTYPE_OID(83,Relat\n /* class owner
*/\n Oid relowner BKI_DEFAULT(POSTGRES)\nBKI_LOOKUP(pg_authid);\n\n- /* access method; 0 if not a table / index */\n- Oid relam BKI_DEFAULT(heap) BKI_LOOKUP_OPT(pg_am);\n-\n- /* identifier of physical storage file */\n- /* relfilenode == 0 means it is a \"mapped\" relation, see relmapper.c */\n- Oid relfilenode BKI_DEFAULT(0);\n-\n /* identifier of table space for relation (0 means default for\ndatabase) */\n Oid reltablespace BKI_DEFAULT(0)\nBKI_LOOKUP_OPT(pg_tablespace);\n\nAs Andres said elsewhere, this stinks. Not sure what the resolution of\nthe discussion over on the \"AIX support\" thread is going to be yet,\nbut hopefully not this.\n\n+ uint32 relNumber_low; /* relfilenumber 32 lower bits */\n+ uint32 relNumber_hi:24; /* relfilenumber 24 high bits */\n+ uint32 forkNum:8; /* fork number */\n\nI still think we'd be better off with something like uint32\nrelForkDetails[2]. The bitfields would be nice if they meant that we\ndidn't have to do bit-shifting and masking operations ourselves, but\nwith the field split this way, we do anyway. So what's the point in\nmixing the approaches?\n\n * relNumber identifies the specific relation. relNumber corresponds to\n * pg_class.relfilenode (NOT pg_class.oid, because we need to be able\n * to assign new physical files to relations in some situations).\n- * Notice that relNumber is only unique within a database in a particular\n- * tablespace.\n+ * Notice that relNumber is unique within a cluster.\n\nI think this paragraph would benefit from more revision. I think that\nwe should just nuke the parenthesized part altogether, since we'll now\nnever use pg_class.oid as relNumber, and to suggest otherwise is just\nconfusing. As for the last sentence, \"Notice that relNumber is unique\nwithin a cluster.\" isn't wrong, but I think we could be more precise\nand informative. Perhaps: \"relNumber values are assigned by\nGetNewRelFileNumber(), which will only ever assign the same value once\nduring the lifetime of a cluster. 
However, since CREATE DATABASE\nduplicates the relfilenumbers of the template database, the values are\nin practice only unique within a database, not globally.\"\n\nThat's all I've got for now.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Jul 2022 13:26:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 7, 2022 at 10:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nI have accepted all the suggestion, find my inline replies where we\nneed more thoughts.\n\n> buf_init.c:119:4: error: implicit truncation from 'int' to bit-field\n> changes value from -1 to 255 [-Werror,-Wbitfield-constant-conversion]\n> CLEAR_BUFFERTAG(buf->tag);\n> ^~~~~~~~~~~~~~~~~~~~~~~~~\n> ../../../../src/include/storage/buf_internals.h:122:14: note: expanded\n> from macro 'CLEAR_BUFFERTAG'\n> (a).forkNum = InvalidForkNumber, \\\n> ^ ~~~~~~~~~~~~~~~~~\n> 1 error generated.\n\nHmm so now we are using an unsigned int field so IMHO we can make\nInvalidForkNumber to 255 instead of -1?\n\n\n> > I think it is in line with oidCount, what do you think?\n>\n> Oh it definitely is, and maybe it's OK the way you have it. But the\n> OID stuff has wraparound to worry about, and this doesn't; and this\n> has the SetNextRelFileNumber and that doesn't; so it is not\n> necessarily the case that the design which is best for that case is\n> also best for this case.\n\nYeah right, but now with the latest changes for piggybacking the\nXlogFlush I think it is cleaner to have the count.\n\n> I believe that the persistence model for SetNextRelFileNumber needs\n> more thought. Right now I believe it's relying on the fact that, after\n> we try to restore the dump, we'll try to perform a clean shutdown of\n> the server before doing anything important, and that will persist the\n> final value, whatever it ends up being. However, there's no comment\n> explaining that theory of operation, and it seems pretty fragile\n> anyway. What if things don't go as planned? Suppose the power goes out\n> halfway through restoring the dump, and the user for some reason then\n> gives up on running pg_upgrade and just tries to do random things with\n> that server? Then I think there will be trouble, because nothing has\n> updated the nextrelfilenumber value and yet there are potentially new\n> files on disk. 
Maybe that's a stretch since I think other things might\n> also break if you do that, but I'm also not sure that's the only\n> scenario to worry about, especially if you factor in the possibility\n> of future code changes, like changes to the timing of when we shut\n> down and restart the server during pg_upgrade, or other uses of\n> binary-upgrade mode, or whatever. I don't know. Perhaps it's not\n> actually broken but I'm inclined to think it should be logging its\n> changes.\n\nBut we are already logging this if we are setting the relfilenumber\nwhich is out of the already logged range, am I missing something?\nCheck this change.\n+ relnumbercount = relnumber - ShmemVariableCache->nextRelFileNumber;\n+ if (ShmemVariableCache->relnumbercount <= relnumbercount)\n+ {\n+ LogNextRelFileNumber(relnumber + VAR_RELNUMBER_PREFETCH, NULL);\n+ ShmemVariableCache->relnumbercount = VAR_RELNUMBER_PREFETCH;\n+ }\n+ else\n+ ShmemVariableCache->relnumbercount -= relnumbercount;\n\n> A related thought is that I don't think this patch has as many\n> cross-checks as it could have. For instance, suppose that when we\n> replay a WAL record that creates relation storage, we cross-check that\n> the value is less than the counter. I think you have a check in there\n> someplace that will error out if there is an actual collision --\n> although I can't find it at the moment, and possibly we want to add\n> some comments there even if it's in existing code -- but this kind of\n> thing would detect bugs that could lead to collisions even if no\n> collision actually occurs, e.g. because a duplicate relfilenumber is\n> used but in a different database or tablespace. 
It might be worth\n> spending some time thinking about other possible cross-checks too.\n> We're trying to create a system where the relfilenumber counter is\n> always ahead of all the relfilenumbers used on disk, but the coupling\n> between the relfilenumber-advancement machinery and the\n> make-files-on-disk machinery is pretty loose, and so there is a risk\n> that bugs could escape detection. Whatever we can do to increase the\n> probability of noticing when things have gone wrong, and/or to notice\n> it quicker, will be good.\n\nI had those changes in v7-0003, now I have merged with 0002. This has\nassert check while replaying the WAL for smgr create and smgr\ntruncate, and while during normal path when allocating the new\nrelfilenumber we are asserting for any existing file.\n\n> One thing that isn't great about this whole scheme is that it can lead\n> to lock pile-ups. Once somebody is waiting for an\n> XLOG_NEXT_RELFILENUMBER record to reach the disk, any other backend\n> that tries to get a new relfilenumber is going to block waiting for\n> RelFileNumberGenLock. I wonder whether this effect is observable in\n> practice: suppose we just create relations in a tight loop from inside\n> a stored procedure, and do that simultaneously in multiple backends?\n> What does the wait event distribution look like? Can we observe a lot\n> of RelFileNumberGenLock events or not really? I guess if we reduce\n> VAR_RELNUMBER_PREFETCH enough we can probably create a problem, but\n> how small a value is needed?\n\nI have done some performance tests, with very small values I can see a\nlot of wait events for RelFileNumberGen but with bigger numbers like\n256 or 512 it is not really bad. See results at the end of the\nmail[1]\n\n> One thing we could think about doing here is try to stagger the xlog\n> and the flush. When we've used VAR_RELNUMBER_PREFETCH/2\n> relfilenumbers, log a record reserving VAR_RELNUMBER_PREFETCH from\n> where we are now, and remember the LSN. 
When we've used up our entire\n> previous allocation, XLogFlush() that record before allowing the\n> additional values to be used. The bookkeeping would be a bit more\n> complicated than currently, but I don't think it would be too bad. I'm\n> not sure how much it would actually help, though, or whether we need\n> it. If new relfilenumbers are being used up really quickly, then maybe\n> the record won't get flushed into the background before we run out of\n> available numbers anyway, and if they aren't, then maybe it doesn't\n> matter. On the other hand, even one transaction commit between when\n> the record is logged and when we run out of the previous allocation is\n> enough to force a flush, at least with synchronous_commit=on, so maybe\n> the chances of being able to piggyback on an existing flush are not so\n> bad after all. I'm not sure.\n\nI have done these changes during GetNewRelFileNumber() this required\nto track the last logged record pointer as well but I think this looks\nclean. With this I can see some reduction in RelFileNumberGen wait\nevent[1]\n\n> In theory I suppose there's another way we could solve this problem:\n> keep using the same relfilenumber, and if the scenario described here\n> occurs, just reuse the old file. The reason why we can't do that today\n> is because we could be running with wal_level=minimal and replace a\n> relation with one whose contents aren't logged. If WAL replay then\n> replays the drop, we're in trouble. But if the only time we reuse a\n> relfilenumber for new relation storage is when relations are moved\n> around, then I think that scenario can't happen. However, I think\n> assigning a new relfilenumber is probably better, because it gets us\n> closer to a world in which relfilenumbers are never reused at all. 
It\n> doesn't get us all the way there because of createdb() and movedb(),\n> but it gets us closer and I prefer that.\n\nI agree with you.\n\n> I notice that the patch makes no changes to relmapper.c, and I think\n> that's a problem. Notice in particular:\n>\n> #define MAX_MAPPINGS 62 /* 62 * 8 + 16 = 512 */\n>\n> I believe that making RelFileNumber into a 64-bit value will cause the\n> 8 in the calculation above to change to 16, defeating the intention\n> that the size of the file ought to be the smallest imaginable size of\n> a disk sector. It does seem like it would have been smart to include a\n> StaticAssertStmt in this file someplace that checks that the data\n> structure has the expected size, and now might be a good time, perhaps\n> in a separate patch, to add one. If we do nothing fancy here, the\n> maximum number of mappings will have to be reduced from 62 to 31,\n> which is a problem because global/pg_filenode.map currently has 48\n> entries. We could try to arrange to squeeze padding out of the\n> RelMapping struct, which would let us use just 12 bytes per mapping,\n> which would increase the limit to 41, but that's still less than we're\n> using already, never mind leaving room for future growth.\n>\n> I don't know what to do about this exactly. I believe it's been\n> previously suggested that the actual minimum sector size on reasonably\n> modern hardware is never as small as 512 bytes, so maybe the file size\n> can just be increased to 1kB or something. If that idea is judged\n> unsafe, I can think of two other possible approaches offhand. One is\n> that we could move away from the idea of storing the OIDs in the file\n> along with the RelFileNodes, and instead store the offset for a given\n> RelFileNode at a fixed offset in the file. That would require either\n> hard-wiring offset tables into the code someplace, or generating them\n> as part of the build process, with separate tables for shared and\n> database-local relation map files. 
The other is that we could have\n> multiple 512-byte sectors and try to arrange for each relation to be\n> in the same sector with the indexes of that relation, since the\n> comments in relmapper.c say this:\n>\n> * aborts. An important factor here is that the indexes and toast table of\n> * a mapped catalog must also be mapped, so that the rewrites/relocations of\n> * all these files commit in a single map file update rather than being tied\n> * to transaction commit.\n>\n> This suggests that atomicity is required across a table and its\n> indexes, but not that it's needed across arbitrary sets of entries in the\n> file.\n>\n> Whatever we do, we shouldn't forget to bump RELMAPPER_FILEMAGIC.\n\nI am not sure what is the best solution here, but I agree that most\nmodern hardware will have a bigger sector size than 512, so we can\njust change the file size to 1024.\n\nThe current value of RELMAPPER_FILEMAGIC is 0x592717; I am not sure\nhow this version ID is decided: is it some random magic number, or\nbased on some logic?\n\n>\n> + uint32 relNumber_low; /* relfilenumber 32 lower bits */\n> + uint32 relNumber_hi:24; /* relfilenumber 24 high bits */\n> + uint32 forkNum:8; /* fork number */\n>\n> I still think we'd be better off with something like uint32\n> relForkDetails[2]. The bitfields would be nice if they meant that we\n> didn't have to do bit-shifting and masking operations ourselves, but\n> with the field split this way, we do anyway. So what's the point in\n> mixing the approaches?\n\nActually with this we were able to access the forkNum directly, but I\nalso think changing it to relForkDetails[2] is cleaner, so I have done that.
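A self-contained sketch of that packing (helper names hypothetical; the layout follows the quoted fields: the high word carries the fork number in its top 8 bits above the upper 24 relfilenumber bits, the low word carries the remaining 32 bits):

```c
#include <assert.h>
#include <stdint.h>

#define BUFFERTAG_RELNUMBER_BITS 24   /* relfilenumber bits in the high word */
#define BUFFERTAG_RELNUMBER_HI_MASK ((1U << BUFFERTAG_RELNUMBER_BITS) - 1)

typedef struct BufferTagSketch
{
    /* [0] = fork(8) | relnumber hi(24); [1] = relnumber low(32) */
    uint32_t relForkDetails[2];
} BufferTagSketch;

static void
buftag_set_relnum_fork(BufferTagSketch *tag, uint64_t relNumber, unsigned forkNum)
{
    tag->relForkDetails[0] = ((uint32_t) forkNum << BUFFERTAG_RELNUMBER_BITS) |
        (uint32_t) ((relNumber >> 32) & BUFFERTAG_RELNUMBER_HI_MASK);
    tag->relForkDetails[1] = (uint32_t) relNumber;
}

static uint64_t
buftag_get_relnumber(const BufferTagSketch *tag)
{
    return ((uint64_t) (tag->relForkDetails[0] & BUFFERTAG_RELNUMBER_HI_MASK) << 32) |
        tag->relForkDetails[1];
}

static unsigned
buftag_get_forknum(const BufferTagSketch *tag)
{
    return tag->relForkDetails[0] >> BUFFERTAG_RELNUMBER_BITS;
}
```

Round-tripping any 56-bit relfilenumber with an 8-bit fork number is lossless here, and an unsigned InvalidForkNumber of 255 fits the field naturally.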
And\nas part of the related changes in 0001 I have removed the direct\naccess to the forkNum.\n\n[1] Wait event details\n\nProcedure:\nCREATE OR REPLACE FUNCTION create_table(count int) RETURNS void AS $$\nDECLARE\n relname varchar;\n pid int;\n i int;\nBEGIN\n SELECT pg_backend_pid() INTO pid;\n relname := 'test_' || pid;\n FOR i IN 1..count LOOP\n EXECUTE format('CREATE TABLE %s(a int)', relname);\n\n EXECUTE format('DROP TABLE %s', relname);\n END LOOP;\nEND;\n\nTarget test: Executed \"select create_table(100);\" query from pgbench\nwith 32 concurrent backends.\n\nVAR_RELNUMBER_PREFETCH = 8\n\n 905 LWLock | LockManager\n 346 LWLock | RelFileNumberGen\n 192\n 190 Activity | WalWriterMain\n\nVAR_RELNUMBER_PREFETCH=128\n 1187 LWLock | LockManager\n 247 LWLock | RelFileNumberGen\n 139 Activity | CheckpointerMain\n\nVAR_RELNUMBER_PREFETCH=256\n\n 1029 LWLock | LockManager\n 158 LWLock | BufferContent\n 134 Activity | CheckpointerMain\n 134 Activity | AutoVacuumMain\n 133 Activity | BgWriterMain\n 132 Activity | WalWriterMain\n 130 Activity | LogicalLauncherMain\n 123 LWLock | RelFileNumberGen\n\nVAR_RELNUMBER_PREFETCH=512\n\n 1174 LWLock | LockManager\n 136 Activity | CheckpointerMain\n 136 Activity | BgWriterMain\n 136 Activity | AutoVacuumMain\n 134 Activity | WalWriterMain\n 134 Activity | LogicalLauncherMain\n 99 LWLock | BufferContent\n 35 LWLock | RelFileNumberGen\n\nVAR_RELNUMBER_PREFETCH=2048\n 1070 LWLock | LockManager\n 160 LWLock | BufferContent\n 156 Activity | CheckpointerMain\n 156\n 155 Activity | BgWriterMain\n 154 Activity | AutoVacuumMain\n 153 Activity | WalWriterMain\n 149 Activity | LogicalLauncherMain\n 31 LWLock | RelFileNumberGen\n 28 Timeout | VacuumDelay\n\n\nVAR_RELNUMBER_PREFETCH=4096\nNote, no wait event for RelFileNumberGen at value 4096\n\nNew patch with piggybacking XLogFlush()\n\nVAR_RELNUMBER_PREFETCH = 8\n\n 1105 LWLock | LockManager\n 143 LWLock | BufferContent\n 140 Activity | CheckpointerMain\n 140 Activity | BgWriterMain\n 139 
Activity | WalWriterMain\n 138 Activity | AutoVacuumMain\n 137 Activity | LogicalLauncherMain\n 115 LWLock | RelFileNumberGen\n\nVAR_RELNUMBER_PREFETCH = 256\n 1130 LWLock | LockManager\n 141 Activity | CheckpointerMain\n 139 Activity | BgWriterMain\n 137 Activity | AutoVacuumMain\n 136 Activity | LogicalLauncherMain\n 135 Activity | WalWriterMain\n 69 LWLock | BufferContent\n 31 LWLock | RelFileNumberGen\n\nVAR_RELNUMBER_PREFETCH = 1024\nNote: no wait event for RelFileNumberGen at value 1024\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Jul 2022 17:09:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 7:39 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > buf_init.c:119:4: error: implicit truncation from 'int' to bit-field\n> > changes value from -1 to 255 [-Werror,-Wbitfield-constant-conversion]\n> > CLEAR_BUFFERTAG(buf->tag);\n> > ^~~~~~~~~~~~~~~~~~~~~~~~~\n> > ../../../../src/include/storage/buf_internals.h:122:14: note: expanded\n> > from macro 'CLEAR_BUFFERTAG'\n> > (a).forkNum = InvalidForkNumber, \\\n> > ^ ~~~~~~~~~~~~~~~~~\n> > 1 error generated.\n>\n> Hmm so now we are using an unsigned int field so IMHO we can make\n> InvalidForkNumber to 255 instead of -1?\n\nIf we're going to do that I think we had better do it as a separate,\npreparatory patch.\n\nIt also makes me wonder why we're using macros rather than static\ninline functions in buf_internals.h. I wonder whether we could do\nsomething like this, for example, and keep InvalidForkNumber as -1:\n\nstatic inline ForkNumber\nBufTagGetForkNum(BufferTag *tagPtr)\n{\n int8 ret;\n\n StaticAssertStmt(MAX_FORKNUM <= INT8_MAX);\n ret = (int8) ((tagPtr->relForkDetails[0] >> BUFFERTAG_RELNUMBER_BITS);\n return (ForkNumber) ret;\n}\n\nEven if we don't use that particular trick, I think we've generally\nbeen moving toward using static inline functions rather than macros,\nbecause it provides better type-safety and the code is often easier to\nread. Maybe we should also approach it that way here. Or even commit a\npreparatory patch replacing the existing macros with inline functions.\nOr maybe it's best to leave it alone, not sure.\n\nIt feels like some of the changes to buf_internals.h in 0002 could be\nmoved into 0001. If we're going to introduce a combined method to set\nthe relnumber and fork, I think we could do that in 0001 rather than\nmaking 0001 introduce a macro to set just the relfilenumber and then\nhaving 0002 change it around again.\n\nBUFFERTAG_RELNUMBER_BITS feels like a lie. 
It's defined to be 24, but\nbased on the name you'd expect it to be 56.\n\n> But we are already logging this if we are setting the relfilenumber\n> which is out of the already logged range, am I missing something?\n> Check this change.\n> + relnumbercount = relnumber - ShmemVariableCache->nextRelFileNumber;\n> + if (ShmemVariableCache->relnumbercount <= relnumbercount)\n> + {\n> + LogNextRelFileNumber(relnumber + VAR_RELNUMBER_PREFETCH, NULL);\n> + ShmemVariableCache->relnumbercount = VAR_RELNUMBER_PREFETCH;\n> + }\n> + else\n> + ShmemVariableCache->relnumbercount -= relnumbercount;\n\nOh, I guess I missed that.\n\n> I had those changes in v7-0003, now I have merged with 0002. This has\n> assert check while replaying the WAL for smgr create and smgr\n> truncate, and while during normal path when allocating the new\n> relfilenumber we are asserting for any existing file.\n\nI think a test-and-elog might be better. Most users won't be running\nassert-enabled builds, but this seems worth checking regardless.\n\n> I have done some performance tests, with very small values I can see a\n> lot of wait events for RelFileNumberGen but with bigger numbers like\n> 256 or 512 it is not really bad. See results at the end of the\n> mail[1]\n\nIt's a little hard to interpret these results because you don't say\nhow often you were checking the wait events, or how often the\noperation took to complete. I suppose we can guess the relative time\nscale from the number of Activity events: if there were 190\nWalWriterMain events observed, then the time to complete the operation\nis probably 190 times how often you were checking the wait events, but\nwas that every second or every half second or every tenth of a second?\n\n> I have done these changes during GetNewRelFileNumber() this required\n> to track the last logged record pointer as well but I think this looks\n> clean. 
With this I can see some reduction in RelFileNumberGen wait\n> event[1]\n\nI find the code you wrote here a little bit magical. I believe it\ndepends heavily on choosing to issue the new WAL record when we've\nexhausted exactly 50% of the available space. I suggest having two\nconstants, one of which is the number of relfilenumber values per WAL\nrecord, and the other of which is the threshold for issuing a new WAL\nrecord. Maybe something like RFN_VALUES_PER_XLOG and\nRFN_NEW_XLOG_THRESHOLD, or something. And then work code that works\ncorrectly for any value of RFN_NEW_XLOG_THRESHOLD between 0 (don't log\nnew RFNs until old allocation is completely exhausted) and\nRFN_VALUES_PER_XLOG - 1 (log new RFNs after using just 1 item from the\nprevious allocation). That way, if in the future someone decides to\nchange the constant values, they can do that and the code still works.\n\n> I am not sure what is the best solution here, but I agree that most of\n> the modern hardware will have bigger sector size than 512 so we can\n> just change file size of 1024.\n\nI went looking for previous discussion of this topic. Here's Heikki\ndoubting whether even 512 is too big:\n\nhttp://postgr.es/m/f03d9166-ad12-2a3c-f605-c1873ee86ae4@iki.fi\n\nHere's Thomas saying that he thinks it's probably mostly 4kB these\ndays, except when it isn't:\n\nhttp://postgr.es/m/CAEepm=1e91zMk-vZszCOGDtKd=DhMLQjgENRSxcbSEhxuEPpfA@mail.gmail.com\n\nHere's Tom with another idea how to reduce space usage:\n\nhttp://postgr.es/m/7235.1566626302@sss.pgh.pa.us\n\nIt doesn't look to me like there's a consensus that some bigger number is safe.\n\n> The current value of RELMAPPER_FILEMAGIC is 0x592717, I am not sure\n> how this version ID is decide is this some random magic number or\n> based on some logic?\n\nHmm, maybe we're not supposed to bump this value after all. I guess\nmaybe it's intended strictly as a magic number, rather than as a\nversion indicator. 
At least, we've never changed it up until now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:19:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-07 13:26:29 -0400, Robert Haas wrote:\n> We're trying to create a system where the relfilenumber counter is\n> always ahead of all the relfilenumbers used on disk, but the coupling\n> between the relfilenumber-advancement machinery and the\n> make-files-on-disk machinery is pretty loose, and so there is a risk\n> that bugs could escape detection. Whatever we can do to increase the\n> probability of noticing when things have gone wrong, and/or to notice\n> it quicker, will be good.\n\nISTM that we should redefine pg_class_tblspc_relfilenode_index to only cover\nrelfilenode - afaics there's no real connection to the tablespace\nanymore. That'd a) reduce the size of the index b) guarantee uniqueness across\ntablespaces.\n\nI don't know where we could fit a sanity check that connects to all databases\nand detects duplicates across all the pg_class instances. Perhaps pg_amcheck?\n\n\nIt may be worth changing RelidByRelfilenumber() / its infrastructure to not\nuse reltablespace anymore.\n\n\n> One thing we could think about doing here is try to stagger the xlog\n> and the flush. When we've used VAR_RELNUMBER_PREFETCH/2\n> relfilenumbers, log a record reserving VAR_RELNUMBER_PREFETCH from\n> where we are now, and remember the LSN. When we've used up our entire\n> previous allocation, XLogFlush() that record before allowing the\n> additional values to be used. The bookkeeping would be a bit more\n> complicated than currently, but I don't think it would be too bad. I'm\n> not sure how much it would actually help, though, or whether we need\n> it.\n\nI think that's a very good idea. My concern around doing an XLogFlush() is\nthat it could lead to a lot of tiny f[data]syncs(), because not much else\nneeds to be written out. 
But the scheme you describe would likely lead the\nXLogFlush() flushing plenty other WAL writes out, addressing that.\n\n\n> If new relfilenumbers are being used up really quickly, then maybe\n> the record won't get flushed into the background before we run out of\n> available numbers anyway, and if they aren't, then maybe it doesn't\n> matter. On the other hand, even one transaction commit between when\n> the record is logged and when we run out of the previous allocation is\n> enough to force a flush, at least with synchronous_commit=on, so maybe\n> the chances of being able to piggyback on an existing flush are not so\n> bad after all. I'm not sure.\n\nEven if the record isn't yet flushed out by the time we need to, the\ndeferred-ness means that there's a good chance more useful records can also be\nflushed out at the same time...\n\n\n> I notice that the patch makes no changes to relmapper.c, and I think\n> that's a problem. Notice in particular:\n> \n> #define MAX_MAPPINGS 62 /* 62 * 8 + 16 = 512 */\n> \n> I believe that making RelFileNumber into a 64-bit value will cause the\n> 8 in the calculation above to change to 16, defeating the intention\n> that the size of the file ought to be the smallest imaginable size of\n> a disk sector. It does seem like it would have been smart to include a\n> StaticAssertStmt in this file someplace that checks that the data\n> structure has the expected size, and now might be a good time, perhaps\n> in a separate patch, to add one.\n\n+1\n\nPerhaps MAX_MAPPINGS should be at least partially computed instead of doing\nthe math in a comment? 
sizeof(RelMapping) could directly be used, and we could\ndefine SIZEOF_RELMAPFILE_START with a StaticAssert() enforcing it to be equal\nto offsetof(RelMapFile, mappings), if we move crc & pad to *before* mappings -\nafaics that should be entirely doable.\n\n\n> If we do nothing fancy here, the maximum number of mappings will have to be\n> reduced from 62 to 31, which is a problem because global/pg_filenode.map\n> currently has 48 entries. We could try to arrange to squeeze padding out of\n> the RelMapping struct, which would let us use just 12 bytes per mapping,\n> which would increase the limit to 41, but that's still less than we're using\n> already, never mind leaving room for future growth.\n\nUgh.\n\n\n> I don't know what to do about this exactly. I believe it's been\n> previously suggested that the actual minimum sector size on reasonably\n> modern hardware is never as small as 512 bytes, so maybe the file size\n> can just be increased to 1kB or something.\n\nI'm not so sure that's a good idea - while the hardware sector size likely\nisn't 512 on much storage anymore, it's still the size that most storage\nprotocols use. Which then means you need to be confident that you not just\nrely on storage atomicity, but also that nothing in the\n filesystem <-> block layer <-> driver\nstack somehow causes a single larger write to be split up into two.\n\nAnd if you use a filesystem with a smaller filesystem block size, there might\nnot even be a choice for the write to be split into two writes. E.g. XFS still\nsupports 512 byte blocks (although I think it's deprecating block size < 1024).\n\n\nMaybe the easiest fix here would be to replace the file atomically. Then we\ndon't need this <= 512 byte stuff. These are done rarely enough that I don't\nthink the overhead of creating a separate file, fsyncing that, renaming,\nfsyncing, would be a problem?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 11 Jul 2022 11:57:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> ISTM that we should redefine pg_class_tblspc_relfilenode_index to only cover\n> relfilenode - afaics there's no real connection to the tablespace\n> anymore. That'd a) reduce the size of the index b) guarantee uniqueness across\n> tablespaces.\n\nSounds like a good idea.\n\n> I don't know where we could fit a sanity check that connects to all databases\n> and detects duplicates across all the pg_class instances. Perhaps pg_amcheck?\n\nUnless we're going to change the way CREATE DATABASE works, uniqueness\nacross databases is not guaranteed.\n\n> I think that's a very good idea. My concern around doing an XLogFlush() is\n> that it could lead to a lot of tiny f[data]syncs(), because not much else\n> needs to be written out. But the scheme you describe would likely lead the\n> XLogFlush() flushing plenty other WAL writes out, addressing that.\n\nOh, interesting. I hadn't considered that angle.\n\n> Maybe the easiest fix here would be to replace the file atomically. Then we\n> don't need this <= 512 byte stuff. These are done rarely enough that I don't\n> think the overhead of creating a separate file, fsyncing that, renaming,\n> fsyncing, would be a problem?\n\nAnything we can reasonably do to reduce the number of places where\nwe're relying on things being <= 512 bytes seems like a step in the\nright direction to me. It's very difficult to know whether such code\nis correct, or what the probability is that crossing the 512-byte\nboundary would break anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 11 Jul 2022 15:08:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-11 15:08:57 -0400, Robert Haas wrote:\n> On Mon, Jul 11, 2022 at 2:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't know where we could fit a sanity check that connects to all databases\n> > and detects duplicates across all the pg_class instances. Perhaps pg_amcheck?\n> \n> Unless we're going to change the way CREATE DATABASE works, uniqueness\n> across databases is not guaranteed.\n\nYou could likely address that by not flagging conflicts iff oid also matches?\nNot sure if worth it, but ...\n\n\n> > Maybe the easiest fix here would be to replace the file atomically. Then we\n> > don't need this <= 512 byte stuff. These are done rarely enough that I don't\n> > think the overhead of creating a separate file, fsyncing that, renaming,\n> > fsyncing, would be a problem?\n> \n> Anything we can reasonably do to reduce the number of places where\n> we're relying on things being <= 512 bytes seems like a step in the\n> right direction to me. It's very difficult to know whether such code\n> is correct, or what the probability is that crossing the 512-byte\n> boundary would break anything.\n\nSeems pretty simple to do. Have write_relmapper_file() write to a .tmp file\nfirst (likely adding O_TRUNC to flags), use durable_rename() to rename it into\nplace. The tempfile should probably be written out before the XLogInsert(),\nthe durable_rename() after, although I think it'd also be correct to more\nclosely approximate the current sequence.\n\nIt's a lot more problematic to do this for the control file, because we can\nend up updating that at a high frequency on standbys, due to minRecoveryPoint.\n\nI have wondered about maintaining that in a dedicated file instead, and\nperhaps even doing so on a primary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 11 Jul 2022 12:34:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 3:34 PM Andres Freund <andres@anarazel.de> wrote:\n> Seems pretty simple to do. Have write_relmapper_file() write to a .tmp file\n> first (likely adding O_TRUNC to flags), use durable_rename() to rename it into\n> place. The tempfile should probably be written out before the XLogInsert(),\n> the durable_rename() after, although I think it'd also be correct to more\n> closely approximate the current sequence.\n\nSomething like this?\n\nI chose not to use durable_rename() here, because that allowed me to\ndo more of the work before starting the critical section, and it's\nprobably slightly more efficient this way, too. That could be changed,\nthough, if you really want to stick with durable_rename().\n\nI haven't done anything about actually making the file variable-length\nhere, either, which I think is what we would want to do. If this seems\nmore or less all right, I can work on that next.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 11 Jul 2022 16:11:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On 2022-07-11 16:11:53 -0400, Robert Haas wrote:\n> On Mon, Jul 11, 2022 at 3:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > Seems pretty simple to do. Have write_relmapper_file() write to a .tmp file\n> > first (likely adding O_TRUNC to flags), use durable_rename() to rename it into\n> > place. The tempfile should probably be written out before the XLogInsert(),\n> > the durable_rename() after, although I think it'd also be correct to more\n> > closely approximate the current sequence.\n> \n> Something like this?\n\nYea. I've not looked carefully, but on a quick skim it looks good.\n\n\n> I chose not to use durable_rename() here, because that allowed me to\n> do more of the work before starting the critical section, and it's\n> probably slightly more efficient this way, too. That could be changed,\n> though, if you really want to stick with durable_rename().\n\nI guess I'm not enthused in duplicating the necessary knowledge in evermore\nplaces. We've forgotten one of the magic incantations in the past, and needing\nto find all the places that need to be patched is a bit bothersome.\n\nPerhaps we could add extract helpers out of durable_rename()?\n\nOTOH, I don't really see what we gain by keeping things out of the critical\nsection? It does seem good to have the temp-file creation/truncation and write\nseparately, but after that I don't think it's worth much to avoid a\nPANIC. What legitimate issue does it avoid?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 11 Jul 2022 16:21:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 7:22 PM Andres Freund <andres@anarazel.de> wrote:\n> I guess I'm not enthused in duplicating the necessary knowledge in evermore\n> places. We've forgotten one of the magic incantations in the past, and needing\n> to find all the places that need to be patched is a bit bothersome.\n>\n> Perhaps we could add extract helpers out of durable_rename()?\n>\n> OTOH, I don't really see what we gain by keeping things out of the critical\n> section? It does seem good to have the temp-file creation/truncation and write\n> separately, but after that I don't think it's worth much to avoid a\n> PANIC. What legitimate issue does it avoid?\n\nOK, so then I think we should just use durable_rename(). Here's a\npatch that does it that way. I briefly considered the idea of\nextracting helpers, but it doesn't seem worthwhile to me. There's not\nthat much code in durable_rename() in the first place.\n\nIn this version, I also removed the struct padding, changed the limit\non the number of entries to a nice round 64, and made some comment\nupdates. I considered trying to go further and actually make the file\nvariable-size, so that we never again need to worry about the limit on\nthe number of entries, but I don't actually think that's a good idea.\nIt would require substantially more changes to the code in this file,\nand that means there's more risk of introducing bugs, and I don't see\nthat there's much value anyway, because if we ever do hit the current\nlimit, we can just raise the limit.\n\nIf we were going to split up durable_rename(), the only intelligible\nsplit I can see would be to have a second version of the function, or\na flag to the existing function, that caters to the situation where\nthe old file is already known to have been fsync()'d.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 12 Jul 2022 09:51:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 11, 2022 at 9:49 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n\n> It also makes me wonder why we're using macros rather than static\n> inline functions in buf_internals.h. I wonder whether we could do\n> something like this, for example, and keep InvalidForkNumber as -1:\n>\n> static inline ForkNumber\n> BufTagGetForkNum(BufferTag *tagPtr)\n> {\n> int8 ret;\n>\n> StaticAssertStmt(MAX_FORKNUM <= INT8_MAX);\n> ret = (int8) ((tagPtr->relForkDetails[0] >> BUFFERTAG_RELNUMBER_BITS);\n> return (ForkNumber) ret;\n> }\n>\n> Even if we don't use that particular trick, I think we've generally\n> been moving toward using static inline functions rather than macros,\n> because it provides better type-safety and the code is often easier to\n> read. Maybe we should also approach it that way here. Or even commit a\n> preparatory patch replacing the existing macros with inline functions.\n> Or maybe it's best to leave it alone, not sure.\n\nI think it make sense to convert existing macros as well, I have\nattached a patch for the same,\n>\n> > I had those changes in v7-0003, now I have merged with 0002. This has\n> > assert check while replaying the WAL for smgr create and smgr\n> > truncate, and while during normal path when allocating the new\n> > relfilenumber we are asserting for any existing file.\n>\n> I think a test-and-elog might be better. Most users won't be running\n> assert-enabled builds, but this seems worth checking regardless.\n\nIMHO the recovery time asserts we can convert to elog but one which we\nare doing after each GetNewRelFileNumber is better to keep as an\nassert as we are doing the file access so it can be costly?\n\n> > I have done some performance tests, with very small values I can see a\n> > lot of wait events for RelFileNumberGen but with bigger numbers like\n> > 256 or 512 it is not really bad. 
See results at the end of the\n> > mail[1]\n>\n> It's a little hard to interpret these results because you don't say\n> how often you were checking the wait events, or how often the\n> operation took to complete. I suppose we can guess the relative time\n> scale from the number of Activity events: if there were 190\n> WalWriterMain events observed, then the time to complete the operation\n> is probably 190 times how often you were checking the wait events, but\n> was that every second or every half second or every tenth of a second?\n\nI am executing it every 0.5 sec using the below script in psql:\n\\t\nselect wait_event_type, wait_event from pg_stat_activity where pid !=\npg_backend_pid()\n\\watch 0.5\n\nAnd running the test for 60 sec:\n./pgbench -c 32 -j 32 -T 60 -f create_script.sql -p 54321 postgres\n\n$ cat create_script.sql\nselect create_table(100);\n\n// function body 'create_table'\nCREATE OR REPLACE FUNCTION create_table(count int) RETURNS void AS $$\nDECLARE\n relname varchar;\n pid int;\n i int;\nBEGIN\n SELECT pg_backend_pid() INTO pid;\n relname := 'test_' || pid;\n FOR i IN 1..count LOOP\n EXECUTE format('CREATE TABLE %s(a int)', relname);\n\n EXECUTE format('DROP TABLE %s', relname);\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\n\n\n> > I have done these changes during GetNewRelFileNumber() this required\n> > to track the last logged record pointer as well but I think this looks\n> > clean. With this I can see some reduction in RelFileNumberGen wait\n> > event[1]\n>\n> I find the code you wrote here a little bit magical. I believe it\n> depends heavily on choosing to issue the new WAL record when we've\n> exhausted exactly 50% of the available space. I suggest having two\n> constants, one of which is the number of relfilenumber values per WAL\n> record, and the other of which is the threshold for issuing a new WAL\n> record. Maybe something like RFN_VALUES_PER_XLOG and\n> RFN_NEW_XLOG_THRESHOLD, or something. 
And then work code that works\n> correctly for any value of RFN_NEW_XLOG_THRESHOLD between 0 (don't log\n> new RFNs until old allocation is completely exhausted) and\n> RFN_VALUES_PER_XLOG - 1 (log new RFNs after using just 1 item from the\n> previous allocation). That way, if in the future someone decides to\n> change the constant values, they can do that and the code still works.\n\nok\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 12 Jul 2022 20:56:08 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nOn 2022-07-12 09:51:12 -0400, Robert Haas wrote:\n> On Mon, Jul 11, 2022 at 7:22 PM Andres Freund <andres@anarazel.de> wrote:\n> > I guess I'm not enthused in duplicating the necessary knowledge in evermore\n> > places. We've forgotten one of the magic incantations in the past, and needing\n> > to find all the places that need to be patched is a bit bothersome.\n> >\n> > Perhaps we could add extract helpers out of durable_rename()?\n> >\n> > OTOH, I don't really see what we gain by keeping things out of the critical\n> > section? It does seem good to have the temp-file creation/truncation and write\n> > separately, but after that I don't think it's worth much to avoid a\n> > PANIC. What legitimate issue does it avoid?\n> \n> OK, so then I think we should just use durable_rename(). Here's a\n> patch that does it that way. I briefly considered the idea of\n> extracting helpers, but it doesn't seem worthwhile to me. There's not\n> that much code in durable_rename() in the first place.\n\nCool.\n\n\n> In this version, I also removed the struct padding, changed the limit\n> on the number of entries to a nice round 64, and made some comment\n> updates.\n\nWhat does currently happen if we exceed that?\n\nI wonder if we should just reference a new define generated by genbki.pl\ndocumenting the number of relations that need to be tracked. Then we don't\nneed to maintain this manually going forward.\n\n\n> I considered trying to go further and actually make the file\n> variable-size, so that we never again need to worry about the limit on\n> the number of entries, but I don't actually think that's a good idea.\n\nYea, I don't really see what we'd gain. 
For this stuff to change we need to\nrecompile anyway.\n\n\n> If we were going to split up durable_rename(), the only intelligible\n> split I can see would be to have a second version of the function, or\n> a flag to the existing function, that caters to the situation where\n> the old file is already known to have been fsync()'d.\n\nI was thinking of something like durable_rename_prep() that'd fsync the\nfile/directories under their old names, and then durable_rename_exec() that\nactually renames and then fsyncs. But without a clear usecase...\n\n\n> +\t/* Write new data to the file. */\n> +\tpgstat_report_wait_start(WAIT_EVENT_RELATION_MAP_WRITE);\n> +\tif (write(fd, newmap, sizeof(RelMapFile)) != sizeof(RelMapFile))\n...\n> +\tpgstat_report_wait_end();\n> +\n\nNot for this patch, but we eventually should move this sequence into a\nwrapper. Perhaps combined with retry handling for short writes, the ENOSPC\nstuff and an error message when the write fails. It's a bit insane how many\ncopies of this we have.\n\n\n> diff --git a/src/include/utils/wait_event.h b/src/include/utils/wait_event.h\n> index b578e2ec75..5d3775ccde 100644\n> --- a/src/include/utils/wait_event.h\n> +++ b/src/include/utils/wait_event.h\n> @@ -193,7 +193,7 @@ typedef enum\n> \tWAIT_EVENT_LOGICAL_REWRITE_TRUNCATE,\n> \tWAIT_EVENT_LOGICAL_REWRITE_WRITE,\n> \tWAIT_EVENT_RELATION_MAP_READ,\n> -\tWAIT_EVENT_RELATION_MAP_SYNC,\n> +\tWAIT_EVENT_RELATION_MAP_RENAME,\n\nVery minor nitpick: To me REPLACE would be a bit more accurate than RENAME,\nsince it includes fsync etc?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 10:09:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 1:09 PM Andres Freund <andres@anarazel.de> wrote:\n> What does currently happen if we exceed that?\n\nelog\n\n> > diff --git a/src/include/utils/wait_event.h b/src/include/utils/wait_event.h\n> > index b578e2ec75..5d3775ccde 100644\n> > --- a/src/include/utils/wait_event.h\n> > +++ b/src/include/utils/wait_event.h\n> > @@ -193,7 +193,7 @@ typedef enum\n> > WAIT_EVENT_LOGICAL_REWRITE_TRUNCATE,\n> > WAIT_EVENT_LOGICAL_REWRITE_WRITE,\n> > WAIT_EVENT_RELATION_MAP_READ,\n> > - WAIT_EVENT_RELATION_MAP_SYNC,\n> > + WAIT_EVENT_RELATION_MAP_RENAME,\n>\n> Very minor nitpick: To me REPLACE would be a bit more accurate than RENAME,\n> since it includes fsync etc?\n\nSure, I had it that way for a while and changed it at the last minute.\nI can change it back.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Jul 2022 16:35:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Re: staticAssertStmt(MAX_FORKNUM <= INT8_MAX);\n\nHave you really thought through making the ForkNum 8-bit ?\n\nFor example this would limit a columnar storage with each column\nstored in it's own fork (which I'd say is not entirely unreasonable)\nto having just about ~250 columns.\n\nAnd there can easily be other use cases where we do not want to limit\nnumber of forks so much\n\nCheers\nHannu\n\nOn Tue, Jul 12, 2022 at 10:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 1:09 PM Andres Freund <andres@anarazel.de> wrote:\n> > What does currently happen if we exceed that?\n>\n> elog\n>\n> > > diff --git a/src/include/utils/wait_event.h b/src/include/utils/wait_event.h\n> > > index b578e2ec75..5d3775ccde 100644\n> > > --- a/src/include/utils/wait_event.h\n> > > +++ b/src/include/utils/wait_event.h\n> > > @@ -193,7 +193,7 @@ typedef enum\n> > > WAIT_EVENT_LOGICAL_REWRITE_TRUNCATE,\n> > > WAIT_EVENT_LOGICAL_REWRITE_WRITE,\n> > > WAIT_EVENT_RELATION_MAP_READ,\n> > > - WAIT_EVENT_RELATION_MAP_SYNC,\n> > > + WAIT_EVENT_RELATION_MAP_RENAME,\n> >\n> > Very minor nitpick: To me REPLACE would be a bit more accurate than RENAME,\n> > since it includes fsync etc?\n>\n> Sure, I had it that way for a while and changed it at the last minute.\n> I can change it back.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n\n\n",
"msg_date": "Tue, 12 Jul 2022 23:00:22 +0200",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nPlease don't top quote - as mentioned a couple times recently.\n\nOn 2022-07-12 23:00:22 +0200, Hannu Krosing wrote:\n> Re: staticAssertStmt(MAX_FORKNUM <= INT8_MAX);\n> \n> Have you really thought through making the ForkNum 8-bit ?\n\nMAX_FORKNUM is way lower right now. And hardcoded. So this doesn't imply a new\nrestriction. As we iterate over 0..MAX_FORKNUM in a bunch of places (with\nfilesystem access each time), it's not feasible to make that number large.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 12 Jul 2022 15:02:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 6:02 PM Andres Freund <andres@anarazel.de> wrote:\n> MAX_FORKNUM is way lower right now. And hardcoded. So this doesn't imply a new\n> restriction. As we iterate over 0..MAX_FORKNUM in a bunch of places (with\n> filesystem access each time), it's not feasible to make that number large.\n\nYeah. TBH, what I'd really like to do is kill the entire fork system\nwith fire and replace it with something more scalable, which would\nmaybe permit the sort of thing Hannu suggests here. With the current\nsystem, forget it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 12 Jul 2022 18:30:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 7:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n\n> In this version, I also removed the struct padding, changed the limit\n> on the number of entries to a nice round 64, and made some comment\n> updates. I considered trying to go further and actually make the file\n> variable-size, so that we never again need to worry about the limit on\n> the number of entries, but I don't actually think that's a good idea.\n> It would require substantially more changes to the code in this file,\n> and that means there's more risk of introducing bugs, and I don't see\n> that there's much value anyway, because if we ever do hit the current\n> limit, we can just raise the limit.\n>\n> If we were going to split up durable_rename(), the only intelligible\n> split I can see would be to have a second version of the function, or\n> a flag to the existing function, that caters to the situation where\n> the old file is already known to have been fsync()'d.\n\nThe patch looks good except one minor comment\n\n+ * corruption. Since the file might be more tha none standard-size disk\n+ * sector in size, we cannot rely on overwrite-in-place. Instead, we generate\n\ntypo \"more tha none\" -> \"more than one\"\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 13 Jul 2022 09:35:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 13, 2022 at 9:35 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 7:21 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n>\n> > In this version, I also removed the struct padding, changed the limit\n> > on the number of entries to a nice round 64, and made some comment\n> > updates. I considered trying to go further and actually make the file\n> > variable-size, so that we never again need to worry about the limit on\n> > the number of entries, but I don't actually think that's a good idea.\n> > It would require substantially more changes to the code in this file,\n> > and that means there's more risk of introducing bugs, and I don't see\n> > that there's much value anyway, because if we ever do hit the current\n> > limit, we can just raise the limit.\n> >\n> > If we were going to split up durable_rename(), the only intelligible\n> > split I can see would be to have a second version of the function, or\n> > a flag to the existing function, that caters to the situation where\n> > the old file is already known to have been fsync()'d.\n>\n> The patch looks good except one minor comment\n>\n> + * corruption. Since the file might be more tha none standard-size disk\n> + * sector in size, we cannot rely on overwrite-in-place. Instead, we generate\n>\n> typo \"more tha none\" -> \"more than one\"\n>\nI have fixed this and included this change in the new patch series.\n\nApart from this I have fixed all the pending issues that includes\n\n- Change existing macros to inline functions done in 0001.\n- Change pg_class index from (tbspc, relfilenode) to relfilenode and\nalso change RelidByRelfilenumber(). In RelidByRelfilenumber I have\nchanged the hash to maintain based on just the relfilenumber but we\nstill need to pass the tablespace to identify whether it is a shared\nrelation or not. 
If we want we can make it bool but I don't think\nthat is really needed here.\n- Changed logic of GetNewRelFileNumber() based on what Robert\ndescribed, and instead of tracking the pending logged relnumbercount\nnow I am tracking last loggedRelNumber, which helps a little bit in\nSetNextRelFileNumber in making code cleaner, but otherwise it doesn't\nmake much difference.\n- Some new asserts in buf_internal inline function to validate value\nof computed/input relfilenumber.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 14 Jul 2022 17:18:32 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 5:18 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> Apart from this I have fixed all the pending issues that includes\n>\n> - Change existing macros to inline functions done in 0001.\n> - Change pg_class index from (tbspc, relfilenode) to relfilenode and\n> also change RelidByRelfilenumber(). In RelidByRelfilenumber I have\n> changed the hash to maintain based on just the relfilenumber but we\n> still need to pass the tablespace to identify whether it is a shared\n> relation or not. If we want we can make it bool but I don't think\n> that is really needed here.\n> - Changed logic of GetNewRelFileNumber() based on what Robert\n> described, and instead of tracking the pending logged relnumbercount\n> now I am tracking last loggedRelNumber, which help little bit in\n> SetNextRelFileNumber in making code cleaner, but otherwise it doesn't\n> make much difference.\n> - Some new asserts in buf_internal inline function to validate value\n> of computed/input relfilenumber.\n\nI was doing some more testing by setting the FirstNormalRelFileNumber\nto a high value(more than 32 bits) I have noticed a couple of problems\nthere e.g. relpath is still using OIDCHARS macro which says max\nrelfilenumber file name can be only 10 character long which is no\nlonger true. So there we need to change this value to 20 and also\nneed to carefully rename the macros and other variable names used for\nthis purpose.\n\nSimilarly there was some issue in macro in buf_internal.h while\nfetching the relfilenumber. So I will relook into all those issues\nand repost the patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Jul 2022 16:51:00 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 18, 2022 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I was doing some more testing by setting the FirstNormalRelFileNumber\n> to a high value(more than 32 bits) I have noticed a couple of problems\n> there e.g. relpath is still using OIDCHARS macro which says max\n> relfilenumber file name can be only 10 character long which is no\n> longer true. So there we need to change this value to 20 and also\n> need to carefully rename the macros and other variable names used for\n> this purpose.\n>\n> Similarly there was some issue in macro in buf_internal.h while\n> fetching the relfilenumber. So I will relook into all those issues\n> and repost the patch soon.\n\nI have fixed these existing issues and there was also some issue in\npg_dump.c which was creating problems in upgrading to the same version\nwhile using a higher range of the relfilenumber.\n\nThere was also an issue where the user table from the old cluster's\nrelfilenode could conflict with the system table of the new cluster.\nAs a solution currently for system table object (while creating\nstorage first time) we are keeping the low range of relfilenumber,\nbasically we are using the same relfilenumber as OID so that during\nupgrade the normal user table from the old cluster will not conflict\nwith the system tables in the new cluster. But with this solution\nRobert told me (in off list chat) a problem that in future if we want\nto make relfilenumber completely unique within a cluster by\nimplementing the CREATEDB differently then we can not do that as we\nhave created fixed relfilenodes for the system tables.\n\nI am not sure what exactly we can do to avoid that because even if we\ndo something to avoid that in the new cluster the old cluster might\nbe already using the non-unique relfilenode so after upgrading the new\ncluster will also get those non-unique relfilenode.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 20 Jul 2022 16:56:47 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 11:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> [v10 patch set]\n\nHi Dilip, I'm experimenting with these patches and will hopefully have\nmore to say soon, but I just wanted to point out that this builds with\nwarnings and failed on 3/4 of the CI OSes on cfbot's last run. Maybe\nthere is the good kind of uninitialised data on Linux, and the bad\nkind of uninitialised data on those other pesky systems?\n\n\n",
"msg_date": "Thu, 21 Jul 2022 16:23:16 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 4:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jul 18, 2022 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I was doing some more testing by setting the FirstNormalRelFileNumber\n> > to a high value(more than 32 bits) I have noticed a couple of problems\n> > there e.g. relpath is still using OIDCHARS macro which says max\n> > relfilenumber file name can be only 10 character long which is no\n> > longer true. So there we need to change this value to 20 and also\n> > need to carefully rename the macros and other variable names used for\n> > this purpose.\n> >\n> > Similarly there was some issue in macro in buf_internal.h while\n> > fetching the relfilenumber. So I will relook into all those issues\n> > and repost the patch soon.\n>\n> I have fixed these existing issues and there was also some issue in\n> pg_dump.c which was creating problems in upgrading to the same version\n> while using a higher range of the relfilenumber.\n>\n> There was also an issue where the user table from the old cluster's\n> relfilenode could conflict with the system table of the new cluster.\n> As a solution currently for system table object (while creating\n> storage first time) we are keeping the low range of relfilenumber,\n> basically we are using the same relfilenumber as OID so that during\n> upgrade the normal user table from the old cluster will not conflict\n> with the system tables in the new cluster. 
But with this solution\n> Robert told me (in off list chat) a problem that in future if we want\n> to make relfilenumber completely unique within a cluster by\n> implementing the CREATEDB differently then we can not do that as we\n> have created fixed relfilenodes for the system tables.\n>\n> I am not sure what exactly we can do to avoid that because even if we\n> do something to avoid that in the new cluster the old cluster might\n> be already using the non-unique relfilenode so after upgrading the new\n> cluster will also get those non-unique relfilenode.\n\nThanks for the patch, my comments from the initial review:\n1) Since we have changed the macros to inline functions, should we\nchange the function names similar to the other inline functions in the\nsame file like: ClearBufferTag, InitBufferTag & BufferTagsEqual:\n-#define BUFFERTAGS_EQUAL(a,b) \\\n-( \\\n- RelFileLocatorEquals((a).rlocator, (b).rlocator) && \\\n- (a).blockNum == (b).blockNum && \\\n- (a).forkNum == (b).forkNum \\\n-)\n+static inline void\n+CLEAR_BUFFERTAG(BufferTag *tag)\n+{\n+ tag->rlocator.spcOid = InvalidOid;\n+ tag->rlocator.dbOid = InvalidOid;\n+ tag->rlocator.relNumber = InvalidRelFileNumber;\n+ tag->forkNum = InvalidForkNumber;\n+ tag->blockNum = InvalidBlockNumber;\n+}\n\n2) We could move this macros along with the other macros at the top of the file:\n+/*\n+ * The freeNext field is either the index of the next freelist entry,\n+ * or one of these special values:\n+ */\n+#define FREENEXT_END_OF_LIST (-1)\n+#define FREENEXT_NOT_IN_LIST (-2)\n\n3) typo thn should be then:\n+ * can raise it as necessary if we end up with more mapped relations. 
For\n+ * now, we just pick a round number that is modestly larger thn the expected\n+ * number of mappings.\n+ */\n\n4) There is one whitespace issue:\ngit am v10-0004-Widen-relfilenumber-from-32-bits-to-56-bits.patch\nApplying: Widen relfilenumber from 32 bits to 56 bits\n.git/rebase-apply/patch:1500: space before tab in indent.\n(relfilenumber)))); \\\nwarning: 1 line adds whitespace errors.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 22 Jul 2022 16:21:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi,\n\nAs oid and relfilenumber are linked with each other, I still see that if\nthe oid value reaches the threshold limit, we are unable to create a table\nwith storage. For example I set FirstNormalObjectId to 4294967294 (one\nvalue less than the range limit of 2^32 -1 = 4294967295). Now when I try to\ncreate a table, the CREATE TABLE command gets stuck because it is unable to\nfind the OID for the comp type although it can find a new relfilenumber.\n\npostgres=# create table t1(a int);\nCREATE TABLE\n\npostgres=# select oid, reltype, relfilenode from pg_class where relname =\n't1';\n oid | reltype | relfilenode\n------------+------------+-------------\n 4294967295 | 4294967294 | 100000\n(1 row)\n\npostgres=# create table t2(a int);\n^CCancel request sent\nERROR: canceling statement due to user request\n\ncreation of t2 table gets stuck as it is unable to find a new oid.\nBasically the point that I am trying to put here is even though we will be\nable to find the new relfile number by increasing the relfilenumber size\nbut still the commands like above will not execute if the oid value (of 32\nbits) has reached the threshold limit.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n\nOn Wed, Jul 20, 2022 at 4:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, Jul 18, 2022 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I was doing some more testing by setting the FirstNormalRelFileNumber\n> > to a high value(more than 32 bits) I have noticed a couple of problems\n> > there e.g. relpath is still using OIDCHARS macro which says max\n> > relfilenumber file name can be only 10 character long which is no\n> > longer true. So there we need to change this value to 20 and also\n> > need to carefully rename the macros and other variable names used for\n> > this purpose.\n> >\n> > Similarly there was some issue in macro in buf_internal.h while\n> > fetching the relfilenumber. 
So I will relook into all those issues\n> > and repost the patch soon.\n>\n> I have fixed these existing issues and there was also some issue in\n> pg_dump.c which was creating problems in upgrading to the same version\n> while using a higher range of the relfilenumber.\n>\n> There was also an issue where the user table from the old cluster's\n> relfilenode could conflict with the system table of the new cluster.\n> As a solution currently for system table object (while creating\n> storage first time) we are keeping the low range of relfilenumber,\n> basically we are using the same relfilenumber as OID so that during\n> upgrade the normal user table from the old cluster will not conflict\n> with the system tables in the new cluster. But with this solution\n> Robert told me (in off list chat) a problem that in future if we want\n> to make relfilenumber completely unique within a cluster by\n> implementing the CREATEDB differently then we can not do that as we\n> have created fixed relfilenodes for the system tables.\n>\n> I am not sure what exactly we can do to avoid that because even if we\n> do something to avoid that in the new cluster the old cluster might\n> be already using the non-unique relfilenode so after upgrading the new\n> cluster will also get those non-unique relfilenode.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Mon, 25 Jul 2022 21:51:27 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 4:21 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 4:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Jul 18, 2022 at 4:51 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I was doing some more testing by setting the FirstNormalRelFileNumber\n> > > to a high value(more than 32 bits) I have noticed a couple of problems\n> > > there e.g. relpath is still using OIDCHARS macro which says max\n> > > relfilenumber file name can be only 10 character long which is no\n> > > longer true. So there we need to change this value to 20 and also\n> > > need to carefully rename the macros and other variable names used for\n> > > this purpose.\n> > >\n> > > Similarly there was some issue in macro in buf_internal.h while\n> > > fetching the relfilenumber. So I will relook into all those issues\n> > > and repost the patch soon.\n> >\n> > I have fixed these existing issues and there was also some issue in\n> > pg_dump.c which was creating problems in upgrading to the same version\n> > while using a higher range of the relfilenumber.\n> >\n> > There was also an issue where the user table from the old cluster's\n> > relfilenode could conflict with the system table of the new cluster.\n> > As a solution currently for system table object (while creating\n> > storage first time) we are keeping the low range of relfilenumber,\n> > basically we are using the same relfilenumber as OID so that during\n> > upgrade the normal user table from the old cluster will not conflict\n> > with the system tables in the new cluster. 
But with this solution\n> > Robert told me (in off list chat) a problem that in future if we want\n> > to make relfilenumber completely unique within a cluster by\n> > implementing the CREATEDB differently then we can not do that as we\n> > have created fixed relfilenodes for the system tables.\n> >\n> > I am not sure what exactly we can do to avoid that because even if we\n> > do something to avoid that in the new cluster the old cluster might\n> > be already using the non-unique relfilenode so after upgrading the new\n> > cluster will also get those non-unique relfilenode.\n>\n> Thanks for the patch, my comments from the initial review:\n> 1) Since we have changed the macros to inline functions, should we\n> change the function names similar to the other inline functions in the\n> same file like: ClearBufferTag, InitBufferTag & BufferTagsEqual:\n> -#define BUFFERTAGS_EQUAL(a,b) \\\n> -( \\\n> - RelFileLocatorEquals((a).rlocator, (b).rlocator) && \\\n> - (a).blockNum == (b).blockNum && \\\n> - (a).forkNum == (b).forkNum \\\n> -)\n> +static inline void\n> +CLEAR_BUFFERTAG(BufferTag *tag)\n> +{\n> + tag->rlocator.spcOid = InvalidOid;\n> + tag->rlocator.dbOid = InvalidOid;\n> + tag->rlocator.relNumber = InvalidRelFileNumber;\n> + tag->forkNum = InvalidForkNumber;\n> + tag->blockNum = InvalidBlockNumber;\n> +}\n>\n> 2) We could move this macros along with the other macros at the top of the file:\n> +/*\n> + * The freeNext field is either the index of the next freelist entry,\n> + * or one of these special values:\n> + */\n> +#define FREENEXT_END_OF_LIST (-1)\n> +#define FREENEXT_NOT_IN_LIST (-2)\n>\n> 3) typo thn should be then:\n> + * can raise it as necessary if we end up with more mapped relations. 
For\n> + * now, we just pick a round number that is modestly larger thn the expected\n> + * number of mappings.\n> + */\n>\n\nFew more typos in 0004 patch as well:\n\nthe a value\ninterger\npreviosly\ncurrenly\n\n> 4) There is one whitespace issue:\n> git am v10-0004-Widen-relfilenumber-from-32-bits-to-56-bits.patch\n> Applying: Widen relfilenumber from 32 bits to 56 bits\n> .git/rebase-apply/patch:1500: space before tab in indent.\n> (relfilenumber)))); \\\n> warning: 1 line adds whitespace errors.\n>\n> Regards,\n> Vignesh\n>\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 26 Jul 2022 10:05:08 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Jul 25, 2022 at 9:51 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi,\n>\n> As oid and relfilenumber are linked with each other, I still see that if the oid value reaches the threshold limit, we are unable to create a table with storage. For example I set FirstNormalObjectId to 4294967294 (one value less than the range limit of 2^32 -1 = 4294967295). Now when I try to create a table, the CREATE TABLE command gets stuck because it is unable to find the OID for the comp type although it can find a new relfilenumber.\n>\n\nFirst of all if the OID value reaches to max oid then it should wrap\naround to the FirstNormalObjectId and find a new non conflicting OID.\nSince in your case the first normaloid is 4294967294 and max oid is\n42949672945 there is no scope of wraparound because in this case you\ncan create at most one object and once you created that then there is\nno more unused oid left and with the current patch we are not at all\ntrying do anything about this.\n\nNow come to the problem we are trying to solve with 56bits\nrelfilenode. Here we are not trying to extend the limit of the system\nto create more than 4294967294 objects. What we are trying to solve\nis to not to reuse the same disk filenames for different objects. And\nalso notice that the relfilenodes can get consumed really faster than\noid so chances of wraparound is more, I mean you can truncate/rewrite\nthe same relation multiple times so that relation will have the same\noid but will consume multiple relfilenodes.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:27:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 9:53 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 11:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > [v10 patch set]\n>\n> Hi Dilip, I'm experimenting with these patches and will hopefully have\n> more to say soon, but I just wanted to point out that this builds with\n> warnings and failed on 3/4 of the CI OSes on cfbot's last run. Maybe\n> there is the good kind of uninitialised data on Linux, and the bad\n> kind of uninitialised data on those other pesky systems?\n\nThanks, I have figured out the issue, I will post the patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:29:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 22, 2022 at 4:21 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 4:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n\n> Thanks for the patch, my comments from the initial review:\n> 1) Since we have changed the macros to inline functions, should we\n> change the function names similar to the other inline functions in the\n> same file like: ClearBufferTag, InitBufferTag & BufferTagsEqual:\n\nI have thought about it while doing so but I am not sure whether it is\na good idea or not, because before my change these all were macros\nwith 2 naming conventions so I just changed to inline function so why\nto change the name.\n\n> -#define BUFFERTAGS_EQUAL(a,b) \\\n> -( \\\n> - RelFileLocatorEquals((a).rlocator, (b).rlocator) && \\\n> - (a).blockNum == (b).blockNum && \\\n> - (a).forkNum == (b).forkNum \\\n> -)\n> +static inline void\n> +CLEAR_BUFFERTAG(BufferTag *tag)\n> +{\n> + tag->rlocator.spcOid = InvalidOid;\n> + tag->rlocator.dbOid = InvalidOid;\n> + tag->rlocator.relNumber = InvalidRelFileNumber;\n> + tag->forkNum = InvalidForkNumber;\n> + tag->blockNum = InvalidBlockNumber;\n> +}\n>\n> 2) We could move this macros along with the other macros at the top of the file:\n> +/*\n> + * The freeNext field is either the index of the next freelist entry,\n> + * or one of these special values:\n> + */\n> +#define FREENEXT_END_OF_LIST (-1)\n> +#define FREENEXT_NOT_IN_LIST (-2)\n\nYeah we can do that.\n\n> 3) typo thn should be then:\n> + * can raise it as necessary if we end up with more mapped relations. 
For\n> + * now, we just pick a round number that is modestly larger thn the expected\n> + * number of mappings.\n> + */\n>\n> 4) There is one whitespace issue:\n> git am v10-0004-Widen-relfilenumber-from-32-bits-to-56-bits.patch\n> Applying: Widen relfilenumber from 32 bits to 56 bits\n> .git/rebase-apply/patch:1500: space before tab in indent.\n> (relfilenumber)))); \\\n> warning: 1 line adds whitespace errors.\n\nOkay, I will fix it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:37:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 10:05 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> Few more typos in 0004 patch as well:\n>\n> the a value\n> interger\n> previosly\n> currenly\n>\n\nThanks for the review, I will fix it in the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 11:39:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 21, 2022 at 9:53 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 11:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > [v10 patch set]\n>\n> Hi Dilip, I'm experimenting with these patches and will hopefully have\n> more to say soon, but I just wanted to point out that this builds with\n> warnings and failed on 3/4 of the CI OSes on cfbot's last run. Maybe\n> there is the good kind of uninitialised data on Linux, and the bad\n> kind of uninitialised data on those other pesky systems?\n\nHere is the patch to fix the issue, basically, while asserting for the\nfile existence it was not setting the relfilenumber in the\nrelfilelocator before generating the path so it was just checking for\nthe existence of the random path so it was asserting randomly.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 26 Jul 2022 13:31:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "/*\n * If relfilenumber is unspecified by the caller then create storage\n- * with oid same as relid.\n+ * with relfilenumber same as relid if it is a system table\notherwise\n+ * allocate a new relfilenumber. For more details read comments\natop\n+ * FirstNormalRelFileNumber declaration.\n */\n if (!RelFileNumberIsValid(relfilenumber))\n- relfilenumber = relid;\n+ {\n+ relfilenumber = relid < FirstNormalObjectId ?\n+ relid : GetNewRelFileNumber();\n\nAbove code says that in the case of system table we want relfilenode to be\nthe same as object id. This technically means that the relfilenode or oid\nfor the system tables would not be exceeding 16383. However in the below\nlines of code added in the patch, it says there is some chance for the\nstorage path of the user tables from the old cluster conflicting with the\nstorage path of the system tables in the new cluster. Assuming that the\nOIDs for the user tables on the old cluster would start with 16384 (the\nfirst object ID), I see no reason why there would be a conflict.\n\n+/* ----------\n+ * RelFileNumber zero is InvalidRelFileNumber.\n+ *\n+ * For the system tables (OID < FirstNormalObjectId) the initial storage\n+ * will be created with the relfilenumber same as their oid. And, later\nfor\n+ * any storage the relfilenumber allocated by GetNewRelFileNumber() will\nstart\n+ * at 100000. Thus, when upgrading from an older cluster, the relation\nstorage\n+ * path for the user table from the old cluster will not conflict with the\n+ * relation storage path for the system table from the new cluster.\nAnyway,\n+ * the new cluster must not have any user tables while upgrading, so we\nneedn't\n+ * worry about them.\n+ * ----------\n+ */\n+#define FirstNormalRelFileNumber ((RelFileNumber) 100000)\n\n==\n\nWhen WAL logging the next object id we have the chosen the xlog threshold\nvalue as 8192 whereas for relfilenode it is 512. 
Any reason for choosing\nthis low arbitrary value in case of relfilenumber?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Tue, Jul 26, 2022 at 1:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Thu, Jul 21, 2022 at 9:53 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 11:27 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > > [v10 patch set]\n> >\n> > Hi Dilip, I'm experimenting with these patches and will hopefully have\n> > more to say soon, but I just wanted to point out that this builds with\n> > warnings and failed on 3/4 of the CI OSes on cfbot's last run. Maybe\n> > there is the good kind of uninitialised data on Linux, and the bad\n> > kind of uninitialised data on those other pesky systems?\n>\n> Here is the patch to fix the issue, basically, while asserting for the\n> file existence it was not setting the relfilenumber in the\n> relfilelocator before generating the path so it was just checking for\n> the existence of the random path so it was asserting randomly.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Tue, 26 Jul 2022 18:05:46 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 6:06 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n\nHi,\nNote: please avoid top posting.\n\n> /*\n> * If relfilenumber is unspecified by the caller then create storage\n> - * with oid same as relid.\n> + * with relfilenumber same as relid if it is a system table otherwise\n> + * allocate a new relfilenumber. For more details read comments atop\n> + * FirstNormalRelFileNumber declaration.\n> */\n> if (!RelFileNumberIsValid(relfilenumber))\n> - relfilenumber = relid;\n> + {\n> + relfilenumber = relid < FirstNormalObjectId ?\n> + relid : GetNewRelFileNumber();\n>\n> Above code says that in the case of system table we want relfilenode to be the same as object id. This technically means that the relfilenode or oid for the system tables would not be exceeding 16383. However in the below lines of code added in the patch, it says there is some chance for the storage path of the user tables from the old cluster conflicting with the storage path of the system tables in the new cluster. Assuming that the OIDs for the user tables on the old cluster would start with 16384 (the first object ID), I see no reason why there would be a conflict.\n\n\nBasically, the above comment says that the initial system table\nstorage will be created with the same relfilenumber as Oid so you are\nright that will not exceed 16383. And below code is explaining the\nreason that in order to avoid the conflict with the user table from\nthe older cluster we do it this way. Otherwise, in the new design, we\nhave no intention to keep the relfilenode same as Oid. 
But during an\nupgrade, the older cluster, which is not following this new design,\nmight have user table relfilenodes which can conflict with the system\ntable in the new cluster so we have to ensure that with the new design\nalso when creating the initial cluster we keep the system table\nrelfilenode in low range and directly using Oid is the best idea for\nthis purpose instead of defining the completely new range and\nmaintaining a separate counter for that.\n\n> +/* ----------\n> + * RelFileNumber zero is InvalidRelFileNumber.\n> + *\n> + * For the system tables (OID < FirstNormalObjectId) the initial storage\n> + * will be created with the relfilenumber same as their oid. And, later for\n> + * any storage the relfilenumber allocated by GetNewRelFileNumber() will start\n> + * at 100000. Thus, when upgrading from an older cluster, the relation storage\n> + * path for the user table from the old cluster will not conflict with the\n> + * relation storage path for the system table from the new cluster. Anyway,\n> + * the new cluster must not have any user tables while upgrading, so we needn't\n> + * worry about them.\n> + * ----------\n> + */\n> +#define FirstNormalRelFileNumber ((RelFileNumber) 100000)\n>\n> ==\n>\n> When WAL logging the next object id we have the chosen the xlog threshold value as 8192 whereas for relfilenode it is 512. Any reason for choosing this low arbitrary value in case of relfilenumber?\n\nFor Oid when we cross the max value we will wraparound, whereas for\nrelfilenumber we can not expect the wraparound for cluster lifetime.\nSo it is better not to log forward a really large number of\nrelfilenumbers as we do for Oid. OTOH if we make it really low like 64\nthen we can see RelFileNumberGenLock in wait events under very high\nconcurrency, where 32 backends are continuously\ncreating/dropping tables. 
So we thought of choosing this number 512\nso that it is not very low that can create the lock contention and it\nis not very high so that we need to worry about wasting those many\nrelfilenumbers on the crash.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 19:32:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Thanks Dilip. Here are few comments that could find upon quickly reviewing\nthe v11 patch:\n\n /*\n+ * Similar to the XLogPutNextOid but instead of writing NEXTOID log record\nit\n+ * writes a NEXT_RELFILENUMBER log record. If '*prevrecptr' is a valid\n+ * XLogRecPtrthen flush the wal upto this record pointer otherwise flush\nupto\n\nXLogRecPtrthen -> XLogRecPtr then\n\n==\n\n+ switch (relpersistence)\n+ {\n+ case RELPERSISTENCE_TEMP:\n+ backend = BackendIdForTempRelations();\n+ break;\n+ case RELPERSISTENCE_UNLOGGED:\n+ case RELPERSISTENCE_PERMANENT:\n+ backend = InvalidBackendId;\n+ break;\n+ default:\n+ elog(ERROR, \"invalid relpersistence: %c\", relpersistence);\n+ return InvalidRelFileNumber; /* placate compiler */\n+ }\n\n\nI think the above check should be added at the beginning of the function\nfor the reason that if we come to the default switch case we won't be\nacquiring the lwlock and do other stuff to get a new relfilenumber.\n\n==\n\n- newrelfilenumber = GetNewRelFileNumber(newTableSpace, NULL,\n+ * Generate a new relfilenumber. We cannot reuse the old relfilenumber\n+ * because of the possibility that that relation will be moved back to\nthe\n\nthat that relation -> that relation.\n\n==\n\n+ * option_parse_relfilenumber\n+ *\n+ * Parse relfilenumber value for an option. If the parsing is successful,\n+ * returns; if parsing fails, returns false.\n+ */\n\nIf parsing is successful, returns true;\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Tue, Jul 26, 2022 at 7:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, Jul 26, 2022 at 6:06 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n>\n> Hi,\n> Note: please avoid top posting.\n>\n> > /*\n> > * If relfilenumber is unspecified by the caller then create\n> storage\n> > - * with oid same as relid.\n> > + * with relfilenumber same as relid if it is a system table\n> otherwise\n> > + * allocate a new relfilenumber. 
For more details read comments\n> atop\n> > + * FirstNormalRelFileNumber declaration.\n> > */\n> > if (!RelFileNumberIsValid(relfilenumber))\n> > - relfilenumber = relid;\n> > + {\n> > + relfilenumber = relid < FirstNormalObjectId ?\n> > + relid : GetNewRelFileNumber();\n> >\n> > Above code says that in the case of system table we want relfilenode to\n> be the same as object id. This technically means that the relfilenode or\n> oid for the system tables would not be exceeding 16383. However in the\n> below lines of code added in the patch, it says there is some chance for\n> the storage path of the user tables from the old cluster conflicting with\n> the storage path of the system tables in the new cluster. Assuming that the\n> OIDs for the user tables on the old cluster would start with 16384 (the\n> first object ID), I see no reason why there would be a conflict.\n>\n>\n> Basically, the above comment says that the initial system table\n> storage will be created with the same relfilenumber as Oid so you are\n> right that will not exceed 16383. And below code is explaining the\n> reason that in order to avoid the conflict with the user table from\n> the older cluster we do it this way. Otherwise, in the new design, we\n> have no intention to keep the relfilenode same as Oid. 
But during an\n> upgrade from the older cluster which is not following this new design\n> might have user table relfilenode which can conflict with the system\n> table in the new cluster so we have to ensure that with the new design\n> also when creating the initial cluster we keep the system table\n> relfilenode in low range and directly using Oid is the best idea for\n> this purpose instead of defining the completely new range and\n> maintaining a separate counter for that.\n>\n> > +/* ----------\n> > + * RelFileNumber zero is InvalidRelFileNumber.\n> > + *\n> > + * For the system tables (OID < FirstNormalObjectId) the initial storage\n> > + * will be created with the relfilenumber same as their oid. And,\n> later for\n> > + * any storage the relfilenumber allocated by GetNewRelFileNumber()\n> will start\n> > + * at 100000. Thus, when upgrading from an older cluster, the relation\n> storage\n> > + * path for the user table from the old cluster will not conflict with\n> the\n> > + * relation storage path for the system table from the new cluster.\n> Anyway,\n> > + * the new cluster must not have any user tables while upgrading, so we\n> needn't\n> > + * worry about them.\n> > + * ----------\n> > + */\n> > +#define FirstNormalRelFileNumber ((RelFileNumber) 100000)\n> >\n> > ==\n> >\n> > When WAL logging the next object id we have the chosen the xlog\n> threshold value as 8192 whereas for relfilenode it is 512. Any reason for\n> choosing this low arbitrary value in case of relfilenumber?\n>\n> For Oid when we cross the max value we will wraparound, whereas for\n> relfilenumber we can not expect the wraparound for cluster lifetime.\n> So it is better not to log forward a really large number of\n> relfilenumber as we do for Oid. OTOH if we make it really low like 64\n> then we can is RelFIleNumberGenLock in wait event in very high\n> concurrency where from 32 backends we are continuously\n> creating/dropping tables. 
So we thought of choosing this number 512\n> so that it is not very low that can create the lock contention and it\n> is not very high so that we need to worry about wasting those many\n> relfilenumbers on the crash.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n>",
"msg_date": "Tue, 26 Jul 2022 22:36:44 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 2:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have thought about it while doing so but I am not sure whether it is\n> a good idea or not, because before my change these all were macros\n> with 2 naming conventions so I just changed to inline function so why\n> to change the name.\n\nWell, the reason to change the name would be for consistency. It feels\nweird to have some NAMES_LIKETHIS() and other NamesLikeThis().\n\nNow, an argument against that is that it will make back-patching more\nannoying, if any code using these functions/macros is touched. But\nsince the calling sequence is changing anyway (you now have to pass a\npointer rather than the object itself) that argument doesn't really\ncarry any weight. So I would favor ClearBufferTag(), InitBufferTag(),\netc.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 14:37:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 12, 2022 at 4:35 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Very minor nitpick: To me REPLACE would be a bit more accurate than RENAME,\n> > since it includes fsync etc?\n>\n> Sure, I had it that way for a while and changed it at the last minute.\n> I can change it back.\n\nCommitted that way, also with the fix for the typo Dilip found.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 26 Jul 2022 15:19:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Some more comments:\n\n==\n\nShouldn't we retry for the new relfilenumber if\n\"ShmemVariableCache->nextRelFileNumber > MAX_RELFILENUMBER\". There can be a\ncases where some of the tables are dropped by the user and relfilenumber of\nthose tables can be reused for which we would need to find the\nrelfilenumber that can be resued. For e.g. consider below example:\n\npostgres=# create table t1(a int);\nCREATE TABLE\n\npostgres=# create table t2(a int);\nCREATE TABLE\n\npostgres=# create table t3(a int);\nERROR: relfilenumber is out of bound\n\npostgres=# drop table t1, t2;\nDROP TABLE\n\npostgres=# checkpoint;\nCHECKPOINT\n\npostgres=# vacuum;\nVACUUM\n\nNow if I try to recreate table t3, it should succeed, shouldn't it? But it\ndoesn't because we simply error out by seeing the nextRelFileNumber saved\nin the shared memory.\n\npostgres=# create table t1(a int);\nERROR: relfilenumber is out of bound\n\nI think, above should have worked.\n\n==\n\n<caution>\n<para>\nNote that while a table's filenode often matches its OID, this is\n<emphasis>not</emphasis> necessarily the case; some operations, like\n<command>TRUNCATE</command>, <command>REINDEX</command>,\n<command>CLUSTER</command> and some forms\nof <command>ALTER TABLE</command>, can change the filenode while preserving\nthe OID.\n\nI think this note needs some improvement in storage.sgml. It says the\ntable's relfilenode mostly matches its OID, but it doesn't. This will\nhappen only in case of system table and maybe never in case of user table.\n\n==\n\npostgres=# create table t2(a int);\nERROR: relfilenumber is out of bound\n\nSince this is a user-visible error, I think it would be good to mention\nrelfilenode instead of relfilenumber. Elsewhere (including the user manual)\nwe refer to this as a relfilenode.\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Tue, Jul 26, 2022 at 10:36 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> Thanks Dilip. 
Here are few comments that could find upon quickly\n> reviewing the v11 patch:\n>\n> /*\n> + * Similar to the XLogPutNextOid but instead of writing NEXTOID log\n> record it\n> + * writes a NEXT_RELFILENUMBER log record. If '*prevrecptr' is a valid\n> + * XLogRecPtrthen flush the wal upto this record pointer otherwise flush\n> upto\n>\n> XLogRecPtrthen -> XLogRecPtr then\n>\n> ==\n>\n> + switch (relpersistence)\n> + {\n> + case RELPERSISTENCE_TEMP:\n> + backend = BackendIdForTempRelations();\n> + break;\n> + case RELPERSISTENCE_UNLOGGED:\n> + case RELPERSISTENCE_PERMANENT:\n> + backend = InvalidBackendId;\n> + break;\n> + default:\n> + elog(ERROR, \"invalid relpersistence: %c\", relpersistence);\n> + return InvalidRelFileNumber; /* placate compiler */\n> + }\n>\n>\n> I think the above check should be added at the beginning of the function\n> for the reason that if we come to the default switch case we won't be\n> acquiring the lwlock and do other stuff to get a new relfilenumber.\n>\n> ==\n>\n> - newrelfilenumber = GetNewRelFileNumber(newTableSpace, NULL,\n> + * Generate a new relfilenumber. We cannot reuse the old relfilenumber\n> + * because of the possibility that that relation will be moved back to\n> the\n>\n> that that relation -> that relation.\n>\n> ==\n>\n> + * option_parse_relfilenumber\n> + *\n> + * Parse relfilenumber value for an option. 
If the parsing is successful,\n> + * returns; if parsing fails, returns false.\n> + */\n>\n> If parsing is successful, returns true;\n>\n> --\n> With Regards,\n> Ashutosh Sharma.\n>\n> On Tue, Jul 26, 2022 at 7:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>> On Tue, Jul 26, 2022 at 6:06 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n>> wrote:\n>>\n>> Hi,\n>> Note: please avoid top posting.\n>>\n>> > /*\n>> > * If relfilenumber is unspecified by the caller then create\n>> storage\n>> > - * with oid same as relid.\n>> > + * with relfilenumber same as relid if it is a system table\n>> otherwise\n>> > + * allocate a new relfilenumber. For more details read\n>> comments atop\n>> > + * FirstNormalRelFileNumber declaration.\n>> > */\n>> > if (!RelFileNumberIsValid(relfilenumber))\n>> > - relfilenumber = relid;\n>> > + {\n>> > + relfilenumber = relid < FirstNormalObjectId ?\n>> > + relid : GetNewRelFileNumber();\n>> >\n>> > Above code says that in the case of system table we want relfilenode to\n>> be the same as object id. This technically means that the relfilenode or\n>> oid for the system tables would not be exceeding 16383. However in the\n>> below lines of code added in the patch, it says there is some chance for\n>> the storage path of the user tables from the old cluster conflicting with\n>> the storage path of the system tables in the new cluster. Assuming that the\n>> OIDs for the user tables on the old cluster would start with 16384 (the\n>> first object ID), I see no reason why there would be a conflict.\n>>\n>>\n>> Basically, the above comment says that the initial system table\n>> storage will be created with the same relfilenumber as Oid so you are\n>> right that will not exceed 16383. And below code is explaining the\n>> reason that in order to avoid the conflict with the user table from\n>> the older cluster we do it this way. Otherwise, in the new design, we\n>> have no intention to keep the relfilenode same as Oid. 
But during an\n>> upgrade from the older cluster which is not following this new design\n>> might have user table relfilenode which can conflict with the system\n>> table in the new cluster so we have to ensure that with the new design\n>> also when creating the initial cluster we keep the system table\n>> relfilenode in low range and directly using Oid is the best idea for\n>> this purpose instead of defining the completely new range and\n>> maintaining a separate counter for that.\n>>\n>> > +/* ----------\n>> > + * RelFileNumber zero is InvalidRelFileNumber.\n>> > + *\n>> > + * For the system tables (OID < FirstNormalObjectId) the initial\n>> storage\n>> > + * will be created with the relfilenumber same as their oid. And,\n>> later for\n>> > + * any storage the relfilenumber allocated by GetNewRelFileNumber()\n>> will start\n>> > + * at 100000. Thus, when upgrading from an older cluster, the\n>> relation storage\n>> > + * path for the user table from the old cluster will not conflict with\n>> the\n>> > + * relation storage path for the system table from the new cluster.\n>> Anyway,\n>> > + * the new cluster must not have any user tables while upgrading, so\n>> we needn't\n>> > + * worry about them.\n>> > + * ----------\n>> > + */\n>> > +#define FirstNormalRelFileNumber ((RelFileNumber) 100000)\n>> >\n>> > ==\n>> >\n>> > When WAL logging the next object id we have the chosen the xlog\n>> threshold value as 8192 whereas for relfilenode it is 512. Any reason for\n>> choosing this low arbitrary value in case of relfilenumber?\n>>\n>> For Oid when we cross the max value we will wraparound, whereas for\n>> relfilenumber we can not expect the wraparound for cluster lifetime.\n>> So it is better not to log forward a really large number of\n>> relfilenumber as we do for Oid. OTOH if we make it really low like 64\n>> then we can is RelFIleNumberGenLock in wait event in very high\n>> concurrency where from 32 backends we are continuously\n>> creating/dropping tables. 
So we thought of choosing this number 512\n>> so that it is not very low that can create the lock contention and it\n>> is not very high so that we need to worry about wasting those many\n>> relfilenumbers on the crash.\n>>\n>> --\n>> Regards,\n>> Dilip Kumar\n>> EnterpriseDB: http://www.enterprisedb.com\n>>\n>",
"msg_date": "Wed, 27 Jul 2022 13:24:07 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 1:24 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Some more comments:\n\nNote: Please don't top post.\n\n> ==\n>\n> Shouldn't we retry for the new relfilenumber if \"ShmemVariableCache->nextRelFileNumber > MAX_RELFILENUMBER\". There can be a cases where some of the tables are dropped by the user and relfilenumber of those tables can be reused for which we would need to find the relfilenumber that can be resued. For e.g. consider below example:\n>\n> postgres=# create table t1(a int);\n> CREATE TABLE\n>\n> postgres=# create table t2(a int);\n> CREATE TABLE\n>\n> postgres=# create table t3(a int);\n> ERROR: relfilenumber is out of bound\n>\n> postgres=# drop table t1, t2;\n> DROP TABLE\n>\n> postgres=# checkpoint;\n> CHECKPOINT\n>\n> postgres=# vacuum;\n> VACUUM\n>\n> Now if I try to recreate table t3, it should succeed, shouldn't it? But it doesn't because we simply error out by seeing the nextRelFileNumber saved in the shared memory.\n>\n> postgres=# create table t1(a int);\n> ERROR: relfilenumber is out of bound\n>\n> I think, above should have worked.\n\nNo, it should not, the whole point of this design is not to reuse the\nrelfilenumber ever within a cluster lifetime. You might want to read\nthis mail[1] that by the time we use 2^56 relfilenumbers the cluster\nwill anyway reach its lifetime by other factors.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BZrDms7gSjckme8YV2tzxgZ0KVfGcsjaFoKyzQX_f_Mw%40mail.gmail.com\n\n> ==\n>\n> <caution>\n> <para>\n> Note that while a table's filenode often matches its OID, this is\n> <emphasis>not</emphasis> necessarily the case; some operations, like\n> <command>TRUNCATE</command>, <command>REINDEX</command>, <command>CLUSTER</command> and some forms\n> of <command>ALTER TABLE</command>, can change the filenode while preserving the OID.\n>\n> I think this note needs some improvement in storage.sgml. 
It says the table's relfilenode mostly matches its OID, but it doesn't. This will happen only in case of system table and maybe never in case of user table.\n\nYes, this should be changed.\n\n> postgres=# create table t2(a int);\n> ERROR: relfilenumber is out of bound\n>\n> Since this is a user-visible error, I think it would be good to mention relfilenode instead of relfilenumber. Elsewhere (including the user manual) we refer to this as a relfilenode.\n\nNo this is expected to be an internal error because in general during\nthe cluster lifetime ideally, we should never reach this number. So\nwe are putting this check so that it should not reach this number due\nto some other computational/programming mistake.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 13:41:16 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Jul 26, 2022 at 1:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jul 21, 2022 at 9:53 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 11:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > [v10 patch set]\n> >\n> > Hi Dilip, I'm experimenting with these patches and will hopefully have\n> > more to say soon, but I just wanted to point out that this builds with\n> > warnings and failed on 3/4 of the CI OSes on cfbot's last run. Maybe\n> > there is the good kind of uninitialised data on Linux, and the bad\n> > kind of uninitialised data on those other pesky systems?\n>\n> Here is the patch to fix the issue, basically, while asserting for the\n> file existence it was not setting the relfilenumber in the\n> relfilelocator before generating the path so it was just checking for\n> the existence of the random path so it was asserting randomly.\n\nThanks for the updated patch, Few comments:\n1) The format specifier should be changed from %u to INT64_FORMAT\nautoprewarm.c -> apw_load_buffers\n...............\nif (fscanf(file, \"%u,%u,%u,%u,%u\\n\", &blkinfo[i].database,\n &blkinfo[i].tablespace, &blkinfo[i].filenumber,\n &forknum, &blkinfo[i].blocknum) != 5)\n...............\n\n2) The format specifier should be changed from %u to INT64_FORMAT\nautoprewarm.c -> apw_dump_now\n...............\nret = fprintf(file, \"%u,%u,%u,%u,%u\\n\",\n block_info_array[i].database,\n block_info_array[i].tablespace,\n block_info_array[i].filenumber,\n (uint32) block_info_array[i].forknum,\n block_info_array[i].blocknum);\n...............\n\n3) should the comment \"entry point for old extension version\" be on\ntop of pg_buffercache_pages, as the current version will use\npg_buffercache_pages_v1_4\n+\n+Datum\n+pg_buffercache_pages(PG_FUNCTION_ARGS)\n+{\n+ return pg_buffercache_pages_internal(fcinfo, OIDOID);\n+}\n+\n+/* entry point for old extension version 
*/\n+Datum\n+pg_buffercache_pages_v1_4(PG_FUNCTION_ARGS)\n+{\n+ return pg_buffercache_pages_internal(fcinfo, INT8OID);\n+}\n\n4) we could use the new style or ereport by removing the brackets\naround errcode:\n+ if (fctx->record[i].relfilenumber > OID_MAX)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\nerrmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as\nan OID\",\n+\n fctx->record[i].relfilenumber),\n+\nerrhint(\"Upgrade the extension using ALTER EXTENSION pg_buffercache\nUPDATE\")));\n\nlike:\nereport(ERROR,\n\nerrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n\nerrmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as\nan OID\",\n\nfctx->record[i].relfilenumber),\n\nerrhint(\"Upgrade the extension using ALTER EXTENSION pg_buffercache\nUPDATE\"));\n\n5) Similarly in the below code too:\n+ /* check whether the relfilenumber is within a valid range */\n+ if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"relfilenumber \" INT64_FORMAT\n\" is out of range\",\n+ (relfilenumber))));\n\n\n6) Similarly in the below code too:\n+#define CHECK_RELFILENUMBER_RANGE(relfilenumber)\n \\\n+do {\n \\\n+ if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n+ ereport(ERROR,\n \\\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n+ errmsg(\"relfilenumber \" INT64_FORMAT\n\" is out of range\", \\\n+ (relfilenumber)))); \\\n+} while (0)\n+\n\n\n7) This error code looks similar to CHECK_RELFILENUMBER_RANGE, can\nthis macro be used here too:\npg_filenode_relation(PG_FUNCTION_ARGS)\n {\n Oid reltablespace = PG_GETARG_OID(0);\n- RelFileNumber relfilenumber = PG_GETARG_OID(1);\n+ RelFileNumber relfilenumber = PG_GETARG_INT64(1);\n Oid heaprel;\n\n+ /* check whether the relfilenumber is within a valid range */\n+ if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER)\n+ ereport(ERROR,\n+ 
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"relfilenumber \" INT64_FORMAT\n\" is out of range\",\n+ (relfilenumber))));\n\n\n8) I felt this include is not required:\ndiff --git a/src/backend/access/transam/varsup.c\nb/src/backend/access/transam/varsup.c\nindex 849a7ce..a2f0d35 100644\n--- a/src/backend/access/transam/varsup.c\n+++ b/src/backend/access/transam/varsup.c\n@@ -13,12 +13,16 @@\n\n #include \"postgres.h\"\n\n+#include <unistd.h>\n+\n #include \"access/clog.h\"\n #include \"access/commit_ts.h\"\n\n9) should we change elog to ereport to use the New-style error reporting API\n+ /* safety check, we should never get this far in a HS standby */\n+ if (RecoveryInProgress())\n+ elog(ERROR, \"cannot assign RelFileNumber during recovery\");\n+\n+ if (IsBinaryUpgrade)\n+ elog(ERROR, \"cannot assign RelFileNumber during binary\nupgrade\");\n\n10) Here nextRelFileNumber is protected by RelFileNumberGenLock, the\ncomment stated OidGenLock. It should be slightly adjusted.\ntypedef struct VariableCacheData\n{\n/*\n* These fields are protected by OidGenLock.\n*/\nOid nextOid; /* next OID to assign */\nuint32 oidCount; /* OIDs available before must do XLOG work */\nRelFileNumber nextRelFileNumber; /* next relfilenumber to assign */\nRelFileNumber loggedRelFileNumber; /* last logged relfilenumber */\nXLogRecPtr loggedRelFileNumberRecPtr; /* xlog record pointer w.r.t.\n* loggedRelFileNumber */\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 27 Jul 2022 15:27:11 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 3:27 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\n> Thanks for the updated patch, Few comments:\n> 1) The format specifier should be changed from %u to INT64_FORMAT\n> autoprewarm.c -> apw_load_buffers\n> ...............\n> if (fscanf(file, \"%u,%u,%u,%u,%u\\n\", &blkinfo[i].database,\n> &blkinfo[i].tablespace, &blkinfo[i].filenumber,\n> &forknum, &blkinfo[i].blocknum) != 5)\n> ...............\n>\n> 2) The format specifier should be changed from %u to INT64_FORMAT\n> autoprewarm.c -> apw_dump_now\n> ...............\n> ret = fprintf(file, \"%u,%u,%u,%u,%u\\n\",\n> block_info_array[i].database,\n> block_info_array[i].tablespace,\n> block_info_array[i].filenumber,\n> (uint32) block_info_array[i].forknum,\n> block_info_array[i].blocknum);\n> ...............\n>\n> 3) should the comment \"entry point for old extension version\" be on\n> top of pg_buffercache_pages, as the current version will use\n> pg_buffercache_pages_v1_4\n> +\n> +Datum\n> +pg_buffercache_pages(PG_FUNCTION_ARGS)\n> +{\n> + return pg_buffercache_pages_internal(fcinfo, OIDOID);\n> +}\n> +\n> +/* entry point for old extension version */\n> +Datum\n> +pg_buffercache_pages_v1_4(PG_FUNCTION_ARGS)\n> +{\n> + return pg_buffercache_pages_internal(fcinfo, INT8OID);\n> +}\n>\n> 4) we could use the new style or ereport by removing the brackets\n> around errcode:\n> + if (fctx->record[i].relfilenumber > OID_MAX)\n> + ereport(ERROR,\n> +\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> +\n> errmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as\n> an OID\",\n> +\n> fctx->record[i].relfilenumber),\n> +\n> errhint(\"Upgrade the extension using ALTER EXTENSION pg_buffercache\n> UPDATE\")));\n>\n> like:\n> ereport(ERROR,\n>\n> errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>\n> errmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as\n> an OID\",\n>\n> fctx->record[i].relfilenumber),\n>\n> errhint(\"Upgrade the extension using ALTER EXTENSION 
pg_buffercache\n> UPDATE\"));\n>\n> 5) Similarly in the below code too:\n> + /* check whether the relfilenumber is within a valid range */\n> + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relfilenumber \" INT64_FORMAT\n> \" is out of range\",\n> + (relfilenumber))));\n>\n>\n> 6) Similarly in the below code too:\n> +#define CHECK_RELFILENUMBER_RANGE(relfilenumber)\n> \\\n> +do {\n> \\\n> + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n> + ereport(ERROR,\n> \\\n> +\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n> + errmsg(\"relfilenumber \" INT64_FORMAT\n> \" is out of range\", \\\n> + (relfilenumber)))); \\\n> +} while (0)\n> +\n>\n>\n> 7) This error code looks similar to CHECK_RELFILENUMBER_RANGE, can\n> this macro be used here too:\n> pg_filenode_relation(PG_FUNCTION_ARGS)\n> {\n> Oid reltablespace = PG_GETARG_OID(0);\n> - RelFileNumber relfilenumber = PG_GETARG_OID(1);\n> + RelFileNumber relfilenumber = PG_GETARG_INT64(1);\n> Oid heaprel;\n>\n> + /* check whether the relfilenumber is within a valid range */\n> + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relfilenumber \" INT64_FORMAT\n> \" is out of range\",\n> + (relfilenumber))));\n>\n>\n> 8) I felt this include is not required:\n> diff --git a/src/backend/access/transam/varsup.c\n> b/src/backend/access/transam/varsup.c\n> index 849a7ce..a2f0d35 100644\n> --- a/src/backend/access/transam/varsup.c\n> +++ b/src/backend/access/transam/varsup.c\n> @@ -13,12 +13,16 @@\n>\n> #include \"postgres.h\"\n>\n> +#include <unistd.h>\n> +\n> #include \"access/clog.h\"\n> #include \"access/commit_ts.h\"\n>\n> 9) should we change elog to ereport to use the New-style error reporting API\n> + /* safety check, we should never get this far in a HS standby */\n> + if (RecoveryInProgress())\n> + elog(ERROR, 
\"cannot assign RelFileNumber during recovery\");\n> +\n> + if (IsBinaryUpgrade)\n> + elog(ERROR, \"cannot assign RelFileNumber during binary\n> upgrade\");\n>\n> 10) Here nextRelFileNumber is protected by RelFileNumberGenLock, the\n> comment stated OidGenLock. It should be slightly adjusted.\n> typedef struct VariableCacheData\n> {\n> /*\n> * These fields are protected by OidGenLock.\n> */\n> Oid nextOid; /* next OID to assign */\n> uint32 oidCount; /* OIDs available before must do XLOG work */\n> RelFileNumber nextRelFileNumber; /* next relfilenumber to assign */\n> RelFileNumber loggedRelFileNumber; /* last logged relfilenumber */\n> XLogRecPtr loggedRelFileNumberRecPtr; /* xlog record pointer w.r.t.\n> * loggedRelFileNumber */\n\nThanks for the review I have fixed these except,\n> 9) should we change elog to ereport to use the New-style error reporting API\nI think this is internal error so if we use ereport we need to give\nerror code and all and I think for internal that is not necessary?\n\n> 8) I felt this include is not required:\nit is using access API so we do need <unistd.h>\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 27 Jul 2022 18:02:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 12:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 2:07 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have thought about it while doing so but I am not sure whether it is\n> > a good idea or not, because before my change these all were macros\n> > with 2 naming conventions so I just changed to inline function so why\n> > to change the name.\n>\n> Well, the reason to change the name would be for consistency. It feels\n> weird to have some NAMES_LIKETHIS() and other NamesLikeThis().\n>\n> Now, an argument against that is that it will make back-patching more\n> annoying, if any code using these functions/macros is touched. But\n> since the calling sequence is changing anyway (you now have to pass a\n> pointer rather than the object itself) that argument doesn't really\n> carry any weight. So I would favor ClearBufferTag(), InitBufferTag(),\n> etc.\n\nOkay, so I have renamed these 2 functions and BUFFERTAGS_EQUAL as well\nto BufferTagEqual().\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 27 Jul 2022 21:49:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 9:49 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Wed, Jul 27, 2022 at 12:07 AM Robert Haas <robertmhaas@gmail.com>\n> wrote:\n> >\n> > On Tue, Jul 26, 2022 at 2:07 AM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > > I have thought about it while doing so but I am not sure whether it is\n> > > a good idea or not, because before my change these all were macros\n> > > with 2 naming conventions so I just changed to inline function so why\n> > > to change the name.\n> >\n> > Well, the reason to change the name would be for consistency. It feels\n> > weird to have some NAMES_LIKETHIS() and other NamesLikeThis().\n> >\n> > Now, an argument against that is that it will make back-patching more\n> > annoying, if any code using these functions/macros is touched. But\n> > since the calling sequence is changing anyway (you now have to pass a\n> > pointer rather than the object itself) that argument doesn't really\n> > carry any weight. So I would favor ClearBufferTag(), InitBufferTag(),\n> > etc.\n>\n> Okay, so I have renamed these 2 functions and BUFFERTAGS_EQUAL as well\n> to BufferTagEqual().\n\n\nJust realised that this should have been BufferTagsEqual instead of\nBufferTagEqual\n\nI will modify this and send an updated patch tomorrow.\n\n—\nDilip\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 27 Jul 2022 22:07:16 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 12:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Just realised that this should have been BufferTagsEqual instead of BufferTagEqual\n>\n> I will modify this and send an updated patch tomorrow.\n\nI changed it and committed.\n\nWhat was formerly 0002 will need minor rebasing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 14:09:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 11:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 27, 2022 at 12:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Just realised that this should have been BufferTagsEqual instead of BufferTagEqual\n> >\n> > I will modify this and send an updated patch tomorrow.\n>\n> I changed it and committed.\n>\n> What was formerly 0002 will need minor rebasing.\n\nThanks, I have rebased other patches, actually, there is a new 0001\npatch now. It seems during renaming relnode related Oid to\nRelFileNumber, some of the references were missed and in the last\npatch set I kept it as part of main patch 0003, but I think it's\nbetter to keep it separate. So took out those changes and created\n0001, but you think this can be committed as part of 0003 only then\nalso it's fine with me.\n\nI have done some cleanup in 0002 as well, basically, earlier we were\nstoring the result of the BufTagGetRelFileLocator() in a separate\nvariable which is not required everywhere. So wherever possible I\nhave avoided using the intermediate variable.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 28 Jul 2022 17:01:47 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 7:32 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Thanks, I have rebased other patches, actually, there is a new 0001\n> patch now. It seems during renaming relnode related Oid to\n> RelFileNumber, some of the references were missed and in the last\n> patch set I kept it as part of main patch 0003, but I think it's\n> better to keep it separate. So took out those changes and created\n> 0001, but you think this can be committed as part of 0003 only then\n> also it's fine with me.\n\nI committed this in part. I took out the introduction of\nRELNUMBERCHARS as I think that should probably be a separate commit,\nbut added in a comment change that you seem to have overlooked.\n\n> I have done some cleanup in 0002 as well, basically, earlier we were\n> storing the result of the BufTagGetRelFileLocator() in a separate\n> variable which is not required everywhere. So wherever possible I\n> have avoided using the intermediate variable.\n\nI'll have a look at this next.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Jul 2022 10:29:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Not a full review, just a quick skim of 0003.\n\nOn 2022-Jul-28, Dilip Kumar wrote:\n\n> +\tif (!shutdown)\n> +\t{\n> +\t\tif (ShmemVariableCache->loggedRelFileNumber < checkPoint.nextRelFileNumber)\n> +\t\t\telog(ERROR, \"nextRelFileNumber can not go backward from \" INT64_FORMAT \"to\" INT64_FORMAT,\n> +\t\t\t\t checkPoint.nextRelFileNumber, ShmemVariableCache->loggedRelFileNumber);\n> +\n> +\t\tcheckPoint.nextRelFileNumber = ShmemVariableCache->loggedRelFileNumber;\n> +\t}\n\nPlease don't do this; rather use %llu and cast to (long long).\nOtherwise the string becomes mangled for translation. I think there are\nmany uses of this sort of pattern in strings, but not all of them are\ntranslatable so maybe we don't care -- for example contrib doesn't have\ntranslations. And the rmgrdesc routines don't translate either, so we\nprobably don't care about it there; and nothing that uses elog either.\nBut this one in particular I think should be an ereport, not an elog.\nThere are several other ereports in various places of the patch also.\n\n> @@ -2378,7 +2378,7 @@ verifyBackupPageConsistency(XLogReaderState *record)\n> \t\tif (memcmp(replay_image_masked, primary_image_masked, BLCKSZ) != 0)\n> \t\t{\n> \t\t\telog(FATAL,\n> -\t\t\t\t \"inconsistent page found, rel %u/%u/%u, forknum %u, blkno %u\",\n> +\t\t\t\t \"inconsistent page found, rel %u/%u/\" INT64_FORMAT \", forknum %u, blkno %u\",\n> \t\t\t\t rlocator.spcOid, rlocator.dbOid, rlocator.relNumber,\n> \t\t\t\t forknum, blkno);\n\nShould this one be an ereport, and thus you do need to change it to that\nand handle it like that?\n\n\n> +\t\tif (xlrec->rlocator.relNumber > ShmemVariableCache->nextRelFileNumber)\n> +\t\t\telog(ERROR, \"unexpected relnumber \" INT64_FORMAT \"that is bigger than nextRelFileNumber \" INT64_FORMAT,\n> +\t\t\t\t xlrec->rlocator.relNumber, ShmemVariableCache->nextRelFileNumber);\n\nYou missed one whitespace here after the INT64_FORMAT.\n\n> diff --git 
a/src/bin/pg_controldata/pg_controldata.c b/src/bin/pg_controldata/pg_controldata.c\n> index c390ec5..f727078 100644\n> --- a/src/bin/pg_controldata/pg_controldata.c\n> +++ b/src/bin/pg_controldata/pg_controldata.c\n> @@ -250,6 +250,8 @@ main(int argc, char *argv[])\n> \tprintf(_(\"Latest checkpoint's NextXID: %u:%u\\n\"),\n> \t\t EpochFromFullTransactionId(ControlFile->checkPointCopy.nextXid),\n> \t\t XidFromFullTransactionId(ControlFile->checkPointCopy.nextXid));\n> +\tprintf(_(\"Latest checkpoint's NextRelFileNumber: \" INT64_FORMAT \"\\n\"),\n> +\t\t ControlFile->checkPointCopy.nextRelFileNumber);\n\nThis one must definitely be translatable.\n\n> /* Characters to allow for an RelFileNumber in a relation path */\n> -#define RELNUMBERCHARS\tOIDCHARS\t/* same as OIDCHARS */\n> +#define RELNUMBERCHARS\t20\t\t/* max chars printed by %lu */\n\nMaybe say %llu here instead.\n\n\nI do wonder why do we keep relfilenodes limited to decimal digits. Why\nnot use hex digits? Then we know the limit is 14 chars, as in\n0x00FFFFFFFFFFFFFF in the MAX_RELFILENUMBER definition.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n\n\n",
"msg_date": "Thu, 28 Jul 2022 17:59:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 11:59 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I do wonder why do we keep relfilenodes limited to decimal digits. Why\n> not use hex digits? Then we know the limit is 14 chars, as in\n> 0x00FFFFFFFFFFFFFF in the MAX_RELFILENUMBER definition.\n\nHmm, but surely we want the error messages to be printed using the\nsame format that we use for the actual filenames. We could make the\nfilenames use hex characters too, but I'm not wild about changing\nuser-visible details like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Jul 2022 12:52:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 9:52 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jul 28, 2022 at 11:59 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> > I do wonder why do we keep relfilenodes limited to decimal digits. Why\n> > not use hex digits? Then we know the limit is 14 chars, as in\n> > 0x00FFFFFFFFFFFFFF in the MAX_RELFILENUMBER definition.\n>\n> Hmm, but surely we want the error messages to be printed using the\n> same format that we use for the actual filenames. We could make the\n> filenames use hex characters too, but I'm not wild about changing\n> user-visible details like that.\n>\n\n From a DBA perspective this would be a regression in usability.\n\nJD\n\n-- \n\n - Founder - https://commandprompt.com/ - 24x7x365 Postgres since 1997\n - Founder and Co-Chair - https://postgresconf.org/\n - Founder - https://postgresql.us - United States PostgreSQL\n - Public speaker, published author, postgresql expert, and people\n believer.\n - Host - More than a refresh\n <https://commandprompt.com/about/more-than-a-refresh/>: A podcast about\n data and the people who wrangle it.",
"msg_date": "Thu, 28 Jul 2022 10:24:22 -0700",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On 2022-Jul-28, Robert Haas wrote:\n\n> On Thu, Jul 28, 2022 at 11:59 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I do wonder why do we keep relfilenodes limited to decimal digits. Why\n> > not use hex digits? Then we know the limit is 14 chars, as in\n> > 0x00FFFFFFFFFFFFFF in the MAX_RELFILENUMBER definition.\n> \n> Hmm, but surely we want the error messages to be printed using the\n> same format that we use for the actual filenames.\n\nOf course.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Most hackers will be perfectly comfortable conceptualizing users as entropy\n sources, so let's move on.\" (Nathaniel Smith)\n\n\n",
"msg_date": "Fri, 29 Jul 2022 11:36:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 5:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n+/* ----------\n+ * RelFileNumber zero is InvalidRelFileNumber.\n+ *\n+ * For the system tables (OID < FirstNormalObjectId) the initial storage\n\nAbove comment says that RelFileNumber zero is invalid which is technically\ncorrect because we don't have any relation file in disk with zero number.\nBut the point is that if someone reads below definition of\nCHECK_RELFILENUMBER_RANGE he/she might get confused because as per this\ndefinition relfilenumber zero is valid.\n\n+#define CHECK_RELFILENUMBER_RANGE(relfilenumber) \\\n+do { \\\n+ if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n+ ereport(ERROR, \\\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n+ errmsg(\"relfilenumber \" INT64_FORMAT \" is out of range\",\n\\\n+ (relfilenumber)))); \\\n+} while (0)\n+\n\n+ RelFileNumber relfilenumber = PG_GETARG_INT64(0);\n+ CHECK_RELFILENUMBER_RANGE(relfilenumber);\n\nIt seems like the relfilenumber in above definition represents relfilenode\nvalue in pg_class which can hold zero value which actually means it's a\nmapped relation. I think it would be good to provide some clarity here.\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Fri, 29 Jul 2022 18:26:29 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 6:26 PM Ashutosh Sharma <ashu.coek88@gmail.com>\nwrote:\n\n> On Thu, Jul 28, 2022 at 5:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> +/* ----------\n> + * RelFileNumber zero is InvalidRelFileNumber.\n> + *\n> + * For the system tables (OID < FirstNormalObjectId) the initial storage\n>\n> Above comment says that RelFileNumber zero is invalid which is technically\n> correct because we don't have any relation file in disk with zero number.\n> But the point is that if someone reads below definition of\n> CHECK_RELFILENUMBER_RANGE he/she might get confused because as per this\n> definition relfilenumber zero is valid.\n>\n\nPlease ignore the above comment shared in my previous email. It is a\nlittle over-thinking on my part that generated this comment in my mind.\nSorry for that. Here are the other comments I have:\n\n+/* First we have to remove them from the extension */\n+ALTER EXTENSION pg_buffercache DROP VIEW pg_buffercache;\n+ALTER EXTENSION pg_buffercache DROP FUNCTION pg_buffercache_pages();\n+\n+/* Then we can drop them */\n+DROP VIEW pg_buffercache;\n+DROP FUNCTION pg_buffercache_pages();\n+\n+/* Now redefine */\n+CREATE OR REPLACE FUNCTION pg_buffercache_pages()\n+RETURNS SETOF RECORD\n+AS 'MODULE_PATHNAME', 'pg_buffercache_pages_v1_4'\n+LANGUAGE C PARALLEL SAFE;\n+\n+CREATE OR REPLACE VIEW pg_buffercache AS\n+ SELECT P.* FROM pg_buffercache_pages() AS P\n+ (bufferid integer, relfilenode int8, reltablespace oid, reldatabase oid,\n+ relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2,\n+ pinning_backends int4);\n\nAs we are dropping the function and view I think it would be good if we\n*don't* use the \"OR REPLACE\" keyword when re-defining them.\n\n==\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"relfilenode\" INT64_FORMAT \" is too\nlarge to be represented as an OID\",\n+ fctx->record[i].relfilenumber),\n+ errhint(\"Upgrade the extension using ALTER\nEXTENSION pg_buffercache UPDATE\")));\n\nI think it would be good to recommend users to upgrade to the latest\nversion instead of just saying upgrade the pg_buffercache using ALTER\nEXTENSION ....\n\n==\n\n--- a/contrib/pg_walinspect/sql/pg_walinspect.sql\n+++ b/contrib/pg_walinspect/sql/pg_walinspect.sql\n@@ -39,10 +39,10 @@ SELECT COUNT(*) >= 0 AS ok FROM\npg_get_wal_stats_till_end_of_wal(:'wal_lsn1');\n -- Test for filtering out WAL records of a particular table\n -- ===================================================================\n\n-SELECT oid AS sample_tbl_oid FROM pg_class WHERE relname = 'sample_tbl'\n\\gset\n+SELECT relfilenode AS sample_tbl_relfilenode FROM pg_class WHERE relname =\n'sample_tbl' \\gset\n\nIs this change required? The original query is just trying to fetch table\noid not relfilenode and AFAIK we haven't changed anything in table oid.\n\n==\n\n+#define CHECK_RELFILENUMBER_RANGE(relfilenumber) \\\n+do { \\\n+ if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n+ ereport(ERROR, \\\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n+ errmsg(\"relfilenumber \" INT64_FORMAT \" is out of range\",\n\\\n+ (relfilenumber)))); \\\n+} while (0)\n+\n\nI think we can shift this macro to some header file and reuse it at several\nplaces.\n\n==\n\n\n+ * Generate a new relfilenumber. We cannot reuse the old relfilenumber\n+ * because of the possibility that that relation will be moved back to\nthe\n\nthat that relation -> that relation\n\n--\nWith Regards,\nAshutosh Sharma.",
"msg_date": "Fri, 29 Jul 2022 20:02:08 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 6:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jul 27, 2022 at 3:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> > Thanks for the updated patch, Few comments:\n> > 1) The format specifier should be changed from %u to INT64_FORMAT\n> > autoprewarm.c -> apw_load_buffers\n> > ...............\n> > if (fscanf(file, \"%u,%u,%u,%u,%u\\n\", &blkinfo[i].database,\n> > &blkinfo[i].tablespace, &blkinfo[i].filenumber,\n> > &forknum, &blkinfo[i].blocknum) != 5)\n> > ...............\n> >\n> > 2) The format specifier should be changed from %u to INT64_FORMAT\n> > autoprewarm.c -> apw_dump_now\n> > ...............\n> > ret = fprintf(file, \"%u,%u,%u,%u,%u\\n\",\n> > block_info_array[i].database,\n> > block_info_array[i].tablespace,\n> > block_info_array[i].filenumber,\n> > (uint32) block_info_array[i].forknum,\n> > block_info_array[i].blocknum);\n> > ...............\n> >\n> > 3) should the comment \"entry point for old extension version\" be on\n> > top of pg_buffercache_pages, as the current version will use\n> > pg_buffercache_pages_v1_4\n> > +\n> > +Datum\n> > +pg_buffercache_pages(PG_FUNCTION_ARGS)\n> > +{\n> > + return pg_buffercache_pages_internal(fcinfo, OIDOID);\n> > +}\n> > +\n> > +/* entry point for old extension version */\n> > +Datum\n> > +pg_buffercache_pages_v1_4(PG_FUNCTION_ARGS)\n> > +{\n> > + return pg_buffercache_pages_internal(fcinfo, INT8OID);\n> > +}\n> >\n> > 4) we could use the new style or ereport by removing the brackets\n> > around errcode:\n> > + if (fctx->record[i].relfilenumber > OID_MAX)\n> > + ereport(ERROR,\n> > +\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > +\n> > errmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as\n> > an OID\",\n> > +\n> > fctx->record[i].relfilenumber),\n> > +\n> > errhint(\"Upgrade the extension using ALTER EXTENSION pg_buffercache\n> > UPDATE\")));\n> >\n> > like:\n> > ereport(ERROR,\n> >\n> > 
errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> >\n> > errmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as\n> > an OID\",\n> >\n> > fctx->record[i].relfilenumber),\n> >\n> > errhint(\"Upgrade the extension using ALTER EXTENSION pg_buffercache\n> > UPDATE\"));\n> >\n> > 5) Similarly in the below code too:\n> > + /* check whether the relfilenumber is within a valid range */\n> > + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"relfilenumber \" INT64_FORMAT\n> > \" is out of range\",\n> > + (relfilenumber))));\n> >\n> >\n> > 6) Similarly in the below code too:\n> > +#define CHECK_RELFILENUMBER_RANGE(relfilenumber)\n> > \\\n> > +do {\n> > \\\n> > + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n> > + ereport(ERROR,\n> > \\\n> > +\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n> > + errmsg(\"relfilenumber \" INT64_FORMAT\n> > \" is out of range\", \\\n> > + (relfilenumber)))); \\\n> > +} while (0)\n> > +\n> >\n> >\n> > 7) This error code looks similar to CHECK_RELFILENUMBER_RANGE, can\n> > this macro be used here too:\n> > pg_filenode_relation(PG_FUNCTION_ARGS)\n> > {\n> > Oid reltablespace = PG_GETARG_OID(0);\n> > - RelFileNumber relfilenumber = PG_GETARG_OID(1);\n> > + RelFileNumber relfilenumber = PG_GETARG_INT64(1);\n> > Oid heaprel;\n> >\n> > + /* check whether the relfilenumber is within a valid range */\n> > + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"relfilenumber \" INT64_FORMAT\n> > \" is out of range\",\n> > + (relfilenumber))));\n> >\n> >\n> > 8) I felt this include is not required:\n> > diff --git a/src/backend/access/transam/varsup.c\n> > b/src/backend/access/transam/varsup.c\n> > index 849a7ce..a2f0d35 100644\n> > --- a/src/backend/access/transam/varsup.c\n> > +++ 
b/src/backend/access/transam/varsup.c\n> > @@ -13,12 +13,16 @@\n> >\n> > #include \"postgres.h\"\n> >\n> > +#include <unistd.h>\n> > +\n> > #include \"access/clog.h\"\n> > #include \"access/commit_ts.h\"\n> >\n> > 9) should we change elog to ereport to use the New-style error reporting API\n> > + /* safety check, we should never get this far in a HS standby */\n> > + if (RecoveryInProgress())\n> > + elog(ERROR, \"cannot assign RelFileNumber during recovery\");\n> > +\n> > + if (IsBinaryUpgrade)\n> > + elog(ERROR, \"cannot assign RelFileNumber during binary\n> > upgrade\");\n> >\n> > 10) Here nextRelFileNumber is protected by RelFileNumberGenLock, the\n> > comment stated OidGenLock. It should be slightly adjusted.\n> > typedef struct VariableCacheData\n> > {\n> > /*\n> > * These fields are protected by OidGenLock.\n> > */\n> > Oid nextOid; /* next OID to assign */\n> > uint32 oidCount; /* OIDs available before must do XLOG work */\n> > RelFileNumber nextRelFileNumber; /* next relfilenumber to assign */\n> > RelFileNumber loggedRelFileNumber; /* last logged relfilenumber */\n> > XLogRecPtr loggedRelFileNumberRecPtr; /* xlog record pointer w.r.t.\n> > * loggedRelFileNumber */\n>\n> Thanks for the review I have fixed these except,\n> > 9) should we change elog to ereport to use the New-style error reporting API\n> I think this is internal error so if we use ereport we need to give\n> error code and all and I think for internal that is not necessary?\n\nOk, sounds reasonable.\n\n> > 8) I felt this include is not required:\n> it is using access API so we do need <unistd.h>\n\nOk, it worked for me because I had not used the assert-enabled build\nflag during compilation.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 29 Jul 2022 21:14:08 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I have done some cleanup in 0002 as well, basically, earlier we were\n> > storing the result of the BufTagGetRelFileLocator() in a separate\n> > variable which is not required everywhere. So wherever possible I\n> > have avoided using the intermediate variable.\n>\n> I'll have a look at this next.\n\nI was taught that when programming in C one should avoid returning a\nstruct type, as BufTagGetRelFileLocator does. I would have expected it\nto return void and take an argument of type RelFileLocator * into\nwhich it writes the results. On the other hand, I was also taught that\none should avoid passing a struct type as an argument, and smgropen()\nhas been doing that since Tom Lane committed\n87bd95638552b8fc1f5f787ce5b862bb6fc2eb80 all the way back in 2004. So\nmaybe this isn't that relevant any more on modern compilers? Or maybe\nfor small structs it doesn't matter much? I dunno.\n\nOther than that, I think your 0002 looks fine.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 13:24:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On 2022-Jul-29, Robert Haas wrote:\n\n> I was taught that when programming in C one should avoid returning a\n> struct type, as BufTagGetRelFileLocator does.\n\nDoing it like that helps RelFileLocatorSkippingWAL, which takes a bare\nRelFileLocator as argument. With this coding you can call one function\nwith the other function as its argument.\n\nHowever, with the current definition of relpathbackend() and siblings,\nit looks quite disastrous -- BufTagGetRelFileLocator is being called\nthree times. You could argue that a solution would be to turn those\nmacros into static inline functions.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I'm impressed how quickly you are fixing this obscure issue. I came from \nMS SQL and it would be hard for me to put into words how much of a better job\nyou all are doing on [PostgreSQL].\"\n Steve Midgley, http://archives.postgresql.org/pgsql-sql/2008-08/msg00000.php\n\n\n",
"msg_date": "Fri, 29 Jul 2022 20:12:53 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 2:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-29, Robert Haas wrote:\n> > I was taught that when programming in C one should avoid returning a\n> > struct type, as BufTagGetRelFileLocator does.\n>\n> Doing it like that helps RelFileLocatorSkippingWAL, which takes a bare\n> RelFileLocator as argument. With this coding you can call one function\n> with the other function as its argument.\n>\n> However, with the current definition of relpathbackend() and siblings,\n> it looks quite disastrous -- BufTagGetRelFileLocator is being called\n> three times. You could argue that a solution would be to turn those\n> macros into static inline functions.\n\nYeah, if we think it's OK to pass around structs, then that seems like\nthe right solution. Otherwise functions that take RelFileLocator\nshould be changed to take const RelFileLocator * and we should adjust\nelsewhere accordingly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 14:41:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On 2022-Jul-29, Robert Haas wrote:\n\n> Yeah, if we think it's OK to pass around structs, then that seems like\n> the right solution. Otherwise functions that take RelFileLocator\n> should be changed to take const RelFileLocator * and we should adjust\n> elsewhere accordingly.\n\nWe do that in other places. See get_object_address() for another\nexample. Now, I don't see *why* they do it. I suppose there's\nnotational convenience; for get_object_address() I think it'd be uglier\nwith another out argument (it already has *relp). For smgropen() it's\nnot clear at all that there is any.\n\nFor the new function, there's at least a couple of places that the\ncalling convention makes simpler, so I don't see why you wouldn't use it\nthat way.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Use it up, wear it out, make it do, or do without\"\n\n\n",
"msg_date": "Fri, 29 Jul 2022 21:18:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 3:18 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Jul-29, Robert Haas wrote:\n> > Yeah, if we think it's OK to pass around structs, then that seems like\n> > the right solution. Otherwise functions that take RelFileLocator\n> > should be changed to take const RelFileLocator * and we should adjust\n> > elsewhere accordingly.\n>\n> We do that in other places. See get_object_address() for another\n> example. Now, I don't see *why* they do it. I suppose there's\n> notational convenience; for get_object_address() I think it'd be uglier\n> with another out argument (it already has *relp). For smgropen() it's\n> not clear at all that there is any.\n>\n> For the new function, there's at least a couple of places that the\n> calling convention makes simpler, so I don't see why you wouldn't use it\n> that way.\n\nAll right, perhaps it's fine as Dilip has it, then.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 15:57:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Jul-29, Robert Haas wrote:\n>> Yeah, if we think it's OK to pass around structs, then that seems like\n>> the right solution. Otherwise functions that take RelFileLocator\n>> should be changed to take const RelFileLocator * and we should adjust\n>> elsewhere accordingly.\n\n> We do that in other places. See get_object_address() for another\n> example. Now, I don't see *why* they do it.\n\nIf it's a big struct then avoiding copying it is good; but RelFileLocator\nisn't that big.\n\nWhile researching that statement I did happen to notice that no one has\nbothered to update the comment immediately above struct RelFileLocator,\nand it is something that absolutely does require attention if there\nare plans to make RelFileNumber something other than 32 bits.\n\n * Note: various places use RelFileLocator in hashtable keys. Therefore,\n * there *must not* be any unused padding bytes in this struct. That\n * should be safe as long as all the fields are of type Oid.\n */\ntypedef struct RelFileLocator\n{\n Oid spcOid; /* tablespace */\n Oid dbOid; /* database */\n RelFileNumber relNumber; /* relation */\n} RelFileLocator;\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jul 2022 16:05:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I was taught that when programming in C one should avoid returning a\n> struct type, as BufTagGetRelFileLocator does.\n\nFWIW, I think that was invalid pre-ANSI-C, and maybe even in C89.\nC99 and later requires it. But it is pass-by-value and you have\nto think twice about whether you want the struct to be copied.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 29 Jul 2022 16:08:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jul 20, 2022 at 7:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> There was also an issue where the user table from the old cluster's\n> relfilenode could conflict with the system table of the new cluster.\n> As a solution currently for system table object (while creating\n> storage first time) we are keeping the low range of relfilenumber,\n> basically we are using the same relfilenumber as OID so that during\n> upgrade the normal user table from the old cluster will not conflict\n> with the system tables in the new cluster. But with this solution\n> Robert told me (in off list chat) a problem that in future if we want\n> to make relfilenumber completely unique within a cluster by\n> implementing the CREATEDB differently then we can not do that as we\n> have created fixed relfilenodes for the system tables.\n>\n> I am not sure what exactly we can do to avoid that because even if we\n> do something to avoid that in the new cluster the old cluster might\n> be already using the non-unique relfilenode so after upgrading the new\n> cluster will also get those non-unique relfilenode.\n\nI think this aspect of the patch could use some more discussion.\n\nTo recap, the problem is that pg_upgrade mustn't discover that a\nrelfilenode that is being migrated from the old cluster is being used\nfor some other table in the new cluster. Since the new cluster should\nonly contain system tables that we assume have never been rewritten,\nthey'll all have relfilenodes equal to their OIDs, and thus less than\n16384. On the other hand all the user tables from the old cluster will\nhave relfilenodes greater than 16384, so we're fine. pg_largeobject,\nwhich also gets migrated, is a special case. 
Since we don't change OID\nassignments from version to version, it should have either the same\nrelfilenode value in the old and new clusters, if never rewritten, or\nelse the value in the old cluster will be greater than 16384, in which\ncase no conflict is possible.\n\nBut if we just assign all relfilenode values from a central counter,\nthen we have got trouble. If the new version has more system catalog\ntables than the old version, then some value that got used for a user\ntable in the old version might get used for a system table in the new\nversion, which is a problem. One idea for fixing this is to have two\nRelFileNumber ranges: a system range (small values) and a user range.\nSystem tables get values in the system range initially, and in the\nuser range when first rewritten. User tables always get values in the\nuser range. Everything works fine in this scenario except maybe for\npg_largeobject: what if it gets one value from the system range in the\nold cluster, and a different value from the system range in the new\ncluster, but some other system table in the new cluster gets the value\nthat pg_largeobject had in the old cluster? Then we've got trouble. 
It\ndoesn't help if we assign pg_largeobject a starting relfilenode from\nthe user range, either: now a relfilenode that needs to end up\ncontaining some user table from the old cluster might find itself\nblocked by pg_largeobject in the new cluster.\n\nOne solution to all this is to do as Dilip proposes here: for system\nrelations, keep assigning the OID as the initial relfilenumber.\nActually, we really only need to do this for pg_largeobject; all the\nother relfilenumber values could be assigned from a counter, as long\nas they're assigned from a range distinct from what we use for user\nrelations.\n\nBut I don't really like that, because I feel like the whole thing\nwhere we start out with relfilenumber=oid is a recipe for hidden bugs.\nI believe we'd be better off if we decouple those concepts more\nthoroughly. So here's another idea: what if we set the\nnext-relfilenumber counter for the new cluster to the value from the\nold cluster, and then rewrote all the (thus-far-empty) system tables?\nThen every system relation in the new cluster has a relfilenode value\ngreater than any in use in the old cluster, so we can afterwards\nmigrate over every relfilenode from the old cluster with no risk of\nconflicting with anything. Then all the special cases go away. We\ndon't need system and user ranges for relfilenodes, and\npg_largeobject's not a special case, either. We can assign relfilenode\nvalues to system relations in exactly the same way we do for user\nrelations: assign a value from the global counter and forget about it.\nIf this cluster happens to be the \"new cluster\" for a pg_upgrade\nattempt, the procedure described at the beginning of this paragraph\nwill move everything that might conflict out of the way.\n\nOne thing to perhaps not like about this is that it's a little more\nexpensive: clustering every system table in every database on a new\ncluster isn't completely free. 
Perhaps it's not expensive enough to be\na big problem, though.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 29 Jul 2022 16:29:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I was taught that when programming in C one should avoid returning a\n> > struct type, as BufTagGetRelFileLocator does.\n>\n> FWIW, I think that was invalid pre-ANSI-C, and maybe even in C89.\n> C99 and later requires it. But it is pass-by-value and you have\n> to think twice about whether you want the struct to be copied.\n\nC89 had that.\n\nAs for what it actually does in a non-inlined function: on all modern\nUnix-y systems, 128 bit first arguments and return values are\ntransferred in register pairs[1]. So if you define a struct that\nholds uint32_t, uint32_t, uint64_t and compile a function that takes\none and returns it, you see the struct being transferred directly from\ninput registers to output registers:\n\n 0x0000000000000000 <+0>: mov %rdi,%rax\n 0x0000000000000003 <+3>: mov %rsi,%rdx\n 0x0000000000000006 <+6>: ret\n\nSimilar on ARM64. There it's an empty function, so it must be using\nthe same register in and out[2].\n\nThe MSVC calling convention is different and doesn't seem to be able\nto pass it through registers, so it schleps it out to memory at a\nreturn address[3]. But that's pretty similar to the proposed\nalternative anyway, so surely no worse. *shrug* And of course those\n\"constructor\"-like functions are inlined anyway.\n\n[1] https://en.wikipedia.org/wiki/X86_calling_conventions#System_V_AMD64_ABI\n[2] https://gcc.godbolt.org/z/qfPzhW7YM\n[3] https://gcc.godbolt.org/z/WqvYz6xjs\n\n\n",
"msg_date": "Sat, 30 Jul 2022 09:11:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 9:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> on all modern Unix-y systems,\n\n(I meant to write AMD64 there)\n\n\n",
"msg_date": "Sat, 30 Jul 2022 09:17:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 9:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Not a full review, just a quick skim of 0003.\n\nThanks for the review.\n\n> > + if (!shutdown)\n> > + {\n> > + if (ShmemVariableCache->loggedRelFileNumber < checkPoint.nextRelFileNumber)\n> > + elog(ERROR, \"nextRelFileNumber can not go backward from \" INT64_FORMAT \"to\" INT64_FORMAT,\n> > + checkPoint.nextRelFileNumber, ShmemVariableCache->loggedRelFileNumber);\n> > +\n> > + checkPoint.nextRelFileNumber = ShmemVariableCache->loggedRelFileNumber;\n> > + }\n>\n> Please don't do this; rather use %llu and cast to (long long).\n> Otherwise the string becomes mangled for translation. I think there are\n> many uses of this sort of pattern in strings, but not all of them are\n> translatable so maybe we don't care -- for example contrib doesn't have\n> translations. And the rmgrdesc routines don't translate either, so we\n> probably don't care about it there; and nothing that uses elog either.\n> But this one in particular I think should be an ereport, not an elog.\n> There are several other ereports in various places of the patch also.\n\nOkay, actually I did not understand the exact logic of when to use\n%llu and when to use (U)INT64_FORMAT. They are both used for 64-bit\nintegers. 
So do you think it is fine to replace all INT64_FORMAT in\nmy patch with %llu?\n\n> > @@ -2378,7 +2378,7 @@ verifyBackupPageConsistency(XLogReaderState *record)\n> > if (memcmp(replay_image_masked, primary_image_masked, BLCKSZ) != 0)\n> > {\n> > elog(FATAL,\n> > - \"inconsistent page found, rel %u/%u/%u, forknum %u, blkno %u\",\n> > + \"inconsistent page found, rel %u/%u/\" INT64_FORMAT \", forknum %u, blkno %u\",\n> > rlocator.spcOid, rlocator.dbOid, rlocator.relNumber,\n> > forknum, blkno);\n>\n> Should this one be an ereport, and thus you do need to change it to that\n> and handle it like that?\n\nOkay, so you mean that irrespective of this patch, this should be\nconverted to an ereport?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 30 Jul 2022 09:15:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On 2022-Jul-30, Dilip Kumar wrote:\n\n> On Thu, Jul 28, 2022 at 9:29 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Please don't do this; rather use %llu and cast to (long long).\n> > Otherwise the string becomes mangled for translation.\n> \n> Okay, actually I did not understand the clear logic of when to use\n> %llu and to use (U)INT64_FORMAT. They are both used for 64-bit\n> integers. So do you think it is fine to replace all INT64_FORMAT in\n> my patch with %llu?\n\nThe point here is that there are two users of the source code: one is\nthe compiler, and the other is gettext, which extracts the string for\nthe translation catalog. The compiler is OK with UINT64_FORMAT, of\ncourse (because the preprocessor deals with it). But gettext is quite\nstupid and doesn't understand that UINT64_FORMAT expands to some\nspecifier, so it truncates the string at the double quote sign just\nbefore; in other words, it just doesn't work. So whenever you have a\nstring that ends up in a translation catalog, you must not use\nUINT64_FORMAT or any other preprocessor macro; it has to be a straight\nspecifier in the format string.\n\nWe have found that the most convenient notation is to use %llu in the\nstring and cast the argument to (unsigned long long), so our convention\nis to use that.\n\nFor strings that do not end up in a translation catalog, there's no\nreason to use %llu-and-cast; UINT64_FORMAT is okay.\n\n> > > @@ -2378,7 +2378,7 @@ verifyBackupPageConsistency(XLogReaderState *record)\n> > > if (memcmp(replay_image_masked, primary_image_masked, BLCKSZ) != 0)\n> > > {\n> > > elog(FATAL,\n> > > - \"inconsistent page found, rel %u/%u/%u, forknum %u, blkno %u\",\n> > > + \"inconsistent page found, rel %u/%u/\" INT64_FORMAT \", forknum %u, blkno %u\",\n> > > rlocator.spcOid, rlocator.dbOid, rlocator.relNumber,\n> > > forknum, blkno);\n> >\n> > Should this one be an ereport, and thus you do need to change it to that\n> > and handle it like that?\n> \n> 
Okay, so you mean irrespective of this patch should this be converted\n> to ereport?\n\nYes, I think this should be an ereport with errcode(ERRCODE_DATA_CORRUPTED).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 30 Jul 2022 13:39:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 1:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Jul-29, Robert Haas wrote:\n> >> Yeah, if we think it's OK to pass around structs, then that seems like\n> >> the right solution. Otherwise functions that take RelFileLocator\n> >> should be changed to take const RelFileLocator * and we should adjust\n> >> elsewhere accordingly.\n>\n> > We do that in other places. See get_object_address() for another\n> > example. Now, I don't see *why* they do it.\n>\n> If it's a big struct then avoiding copying it is good; but RelFileLocator\n> isn't that big.\n>\n> While researching that statement I did happen to notice that no one has\n> bothered to update the comment immediately above struct RelFileLocator,\n> and it is something that absolutely does require attention if there\n> are plans to make RelFileNumber something other than 32 bits.\n\nI think we need to update this comment in the patch where we are\nmaking RelFileNumber 64 bits wide. But as such I do not see a problem\nin using RelFileLocator directly as a key, because if we make\nRelFileNumber 64 bits then the structure will already be 8-byte\naligned, so there should not be any padding. However, if we use some\nother structure as a key which contains RelFileLocator, i.e.\nRelFileLocatorBackend, then there will be a problem. So for handling\nthat issue, while computing the key size (wherever we have\nRelFileLocatorBackend as a key) I have avoided the padding bytes\nby introducing this new macro[1].\n\n[1]\n#define SizeOfRelFileLocatorBackend \\\n(offsetof(RelFileLocatorBackend, backend) + sizeof(BackendId))\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 10:50:59 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 10:55 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 10:29 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I have done some cleanup in 0002 as well, basically, earlier we were\n> > > storing the result of the BufTagGetRelFileLocator() in a separate\n> > > variable which is not required everywhere. So wherever possible I\n> > > have avoided using the intermediate variable.\n> >\n> > I'll have a look at this next.\n>\n> I was taught that when programming in C one should avoid returning a\n> struct type, as BufTagGetRelFileLocator does. I would have expected it\n> to return void and take an argument of type RelFileLocator * into\n> which it writes the results. On the other hand, I was also taught that\n> one should avoid passing a struct type as an argument, and smgropen()\n> has been doing that since Tom Lane committed\n> 87bd95638552b8fc1f5f787ce5b862bb6fc2eb80 all the way back in 2004. So\n> maybe this isn't that relevant any more on modern compilers? Or maybe\n> for small structs it doesn't matter much? I dunno.\n>\n> Other than that, I think your 0002 looks fine.\n\nGenerally, I try to avoid it, but the current code also does it this\nway when the structure is small and returning it directly keeps the\ncalling code simple[1]. I wanted to do it this way because a) if we\npass it as an argument then I will have to use an extra variable,\nwhich makes some code more complicated; it's not a big issue, in fact\nI had it that way in the previous version but simplified it\nin one of the recent versions. 
b) If I allocate memory and return a\npointer, then I also need to store that address and free it later.\n\n[1]\nstatic inline ForEachState\nfor_each_from_setup(const List *lst, int N)\n{\nForEachState r = {lst, N};\n\nAssert(N >= 0);\nreturn r;\n}\n\nstatic inline FullTransactionId\nFullTransactionIdFromEpochAndXid(uint32 epoch, TransactionId xid)\n{\nFullTransactionId result;\n\nresult.value = ((uint64) epoch) << 32 | xid;\n\nreturn result;\n}\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 11:04:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Jul 29, 2022 at 8:02 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n>\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"relfilenode\" INT64_FORMAT \" is too large to be represented as an OID\",\n> + fctx->record[i].relfilenumber),\n> + errhint(\"Upgrade the extension using ALTER EXTENSION pg_buffercache UPDATE\")));\n>\n> I think it would be good to recommend users to upgrade to the latest version instead of just saying upgrade the pg_buffercache using ALTER EXTENSION ....\n\nThis error would be hit if the relfilenumber is out of the OID range,\nwhich means the user is using a new cluster but an old pg_buffercache\nextension. So this errhint is about suggesting to upgrade the\nextension.\n\n> ==\n>\n> --- a/contrib/pg_walinspect/sql/pg_walinspect.sql\n> +++ b/contrib/pg_walinspect/sql/pg_walinspect.sql\n> @@ -39,10 +39,10 @@ SELECT COUNT(*) >= 0 AS ok FROM pg_get_wal_stats_till_end_of_wal(:'wal_lsn1');\n> -- Test for filtering out WAL records of a particular table\n> -- ===================================================================\n>\n> -SELECT oid AS sample_tbl_oid FROM pg_class WHERE relname = 'sample_tbl' \\gset\n> +SELECT relfilenode AS sample_tbl_relfilenode FROM pg_class WHERE relname = 'sample_tbl' \\gset\n>\n> Is this change required? The original query is just trying to fetch table oid not relfilenode and AFAIK we haven't changed anything in table oid.\n\nIf you look at the complete test, you will see that sample_tbl_oid is\nused for verification in pg_get_wal_records_info(). Earlier it was\nokay to use the oid instead of the relfilenode, because this test case\njust creates a table, does some DML, and verifies the oid in the WAL,\nso it would be the same as the relfilenode; but that is no longer\ntrue. So we have to check the relfilenode, which was the actual\nintention of the test.\n\n\n>\n> + * Generate a new relfilenumber. 
We cannot reuse the old relfilenumber\n> + * because of the possibility that that relation will be moved back to the\n>\n> that that relation -> that relation\n>\n\nI think this is a grammatically correct sentence.\n\nI have fixed the other comments, and also addressed Alvaro's comments\nby using %lld instead of INT64_FORMAT inside the ereport and wherever\nhe suggested.\n\nI haven't yet changed MAX_RELFILENUMBER to be represented in hex\ncharacters, because then we will have to change the filenames as well.\nSo I think there is no conclusion yet on whether we want to keep\nit as it is or switch to hex. And there is another suggestion to\nchange one of the existing elogs to an ereport, so for that I will\nshare a separate patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 1 Aug 2022 17:27:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Jul 30, 2022 at 1:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> One solution to all this is to do as Dilip proposes here: for system\n> relations, keep assigning the OID as the initial relfilenumber.\n> Actually, we really only need to do this for pg_largeobject; all the\n> other relfilenumber values could be assigned from a counter, as long\n> as they're assigned from a range distinct from what we use for user\n> relations.\n>\n> But I don't really like that, because I feel like the whole thing\n> where we start out with relfilenumber=oid is a recipe for hidden bugs.\n> I believe we'd be better off if we decouple those concepts more\n> thoroughly. So here's another idea: what if we set the\n> next-relfilenumber counter for the new cluster to the value from the\n> old cluster, and then rewrote all the (thus-far-empty) system tables?\n\nYou mean in a new cluster start the next-relfilenumber counter from\nthe highest relfilenode/Oid value in the old cluster, right? Yeah, if\nwe start next-relfilenumber after the range of the old cluster then we\ncan also avoid the logic of SetNextRelFileNumber() during upgrade.\n\nMy very initial idea around this was to start the next-relfilenumber\ndirectly from 4 billion in the new cluster, so there cannot be any\nconflict and we don't even need to identify the highest used\nrelfilenode value in the old cluster. In fact we don't need to rewrite\nthe system tables before upgrading, I think. So what do we lose with\nthis? Just 4 billion relfilenumbers? Does that really matter, given\nthe range we get with a 56-bit relfilenumber?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 4 Aug 2022 17:01:32 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Aug 4, 2022 at 5:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Jul 30, 2022 at 1:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > One solution to all this is to do as Dilip proposes here: for system\n> > relations, keep assigning the OID as the initial relfilenumber.\n> > Actually, we really only need to do this for pg_largeobject; all the\n> > other relfilenumber values could be assigned from a counter, as long\n> > as they're assigned from a range distinct from what we use for user\n> > relations.\n> >\n> > But I don't really like that, because I feel like the whole thing\n> > where we start out with relfilenumber=oid is a recipe for hidden bugs.\n> > I believe we'd be better off if we decouple those concepts more\n> > thoroughly. So here's another idea: what if we set the\n> > next-relfilenumber counter for the new cluster to the value from the\n> > old cluster, and then rewrote all the (thus-far-empty) system tables?\n>\n> You mean in a new cluster start the next-relfilenumber counter from\n> the highest relfilenode/Oid value in the old cluster right?. Yeah, if\n> we start next-relfilenumber after the range of the old cluster then we\n> can also avoid the logic of SetNextRelFileNumber() during upgrade.\n>\n> My very initial idea around this was to start the next-relfilenumber\n> directly from the 4 billion in the new cluster so there can not be any\n> conflict and we don't even need to identify the highest value of used\n> relfilenode in the old cluster. In fact we don't need to rewrite the\n> system table before upgrading I think. So what do we lose with this?\n> just 4 billion relfilenode? 
does that really matter provided the range\n> we get with the 56 bits relfilenumber.\n\nI think even if we start the range from the 4 billion we can not avoid\nkeeping two separate ranges for system and user tables otherwise the\nnext upgrade where old and new clusters both have 56 bits\nrelfilenumber will get conflicting files. And, for the same reason we\nstill have to call SetNextRelFileNumber() during upgrade.\n\nSo the idea is, we will be having 2 ranges for relfilenumbers, system\nrange will start from 4 billion and user range maybe something around\n4.1 (I think we can keep it very small though, just reserve 50k\nrelfilenumber for system for future expansion and start user range\nfrom there).\n\nSo now system tables have no issues and also the user tables from the\nold cluster have no issues. But pg_largeobject might get conflict\nwhen both old and new cluster are using 56 bits relfilenumber, because\nit is possible that in the new cluster some other system table gets\nthat relfilenumber which is used by pg_largeobject in the old cluster.\n\nThis could be resolved if we allocate pg_largeobject's relfilenumber\nfrom the user range, that means this relfilenumber will always be the\nfirst value from the user range. So now if the old and new cluster\nboth are using 56bits relfilenumber then pg_largeobject in both\ncluster would have got the same relfilenumber and if the old cluster\nis using the current 32 bits relfilenode system then the whole range\nof the new cluster is completely different than that of the old\ncluster.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Aug 2022 12:55:30 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
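[Editor's note: the two-range scheme proposed in the message above can be modeled with a short toy sketch. This is plain Python, not PostgreSQL code; the range constants and helper names are illustrative assumptions based on the numbers mentioned in the mail ("4 billion", "reserve 50k relfilenumber for system").]

```python
# Toy model of the proposed two-range relfilenumber allocation: a small
# system range starting at "4 billion", a user range just above it, and
# pg_largeobject always taking the first user-range value so that it
# comes out the same in every cluster built this way.

SYSTEM_RANGE_START = 4_000_000_000   # "start ... from the 4 billion"
SYSTEM_RANGE_SIZE = 50_000           # "reserve 50k relfilenumber for system"
USER_RANGE_START = SYSTEM_RANGE_START + SYSTEM_RANGE_SIZE

class RelFileNumberAllocator:
    def __init__(self):
        self.next_system = SYSTEM_RANGE_START
        self.next_user = USER_RANGE_START

    def alloc_system(self):
        n = self.next_system
        assert n < USER_RANGE_START, "system range exhausted"
        self.next_system += 1
        return n

    def alloc_user(self):
        n = self.next_user
        self.next_user += 1
        return n

def initdb(n_system_tables):
    # pg_largeobject is deliberately drawn from the user range first,
    # so every cluster built this way assigns it the same number.
    alloc = RelFileNumberAllocator()
    pg_largeobject = alloc.alloc_user()
    system = [alloc.alloc_system() for _ in range(n_system_tables)]
    return pg_largeobject, system
```

Even if a newer cluster has more catalogs, its system numbers stay below USER_RANGE_START, so they can never collide with pg_largeobject or with user relfilenumbers preserved from the old cluster.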
{
"msg_contents": "On Fri, Aug 5, 2022 at 3:25 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think even if we start the range from the 4 billion we can not avoid\n> keeping two separate ranges for system and user tables otherwise the\n> next upgrade where old and new clusters both have 56 bits\n> relfilenumber will get conflicting files. And, for the same reason we\n> still have to call SetNextRelFileNumber() during upgrade.\n\nWell, my proposal to move everything from the new cluster up to higher\nnumbers would address this without requiring two ranges.\n\n> So the idea is, we will be having 2 ranges for relfilenumbers, system\n> range will start from 4 billion and user range maybe something around\n> 4.1 (I think we can keep it very small though, just reserve 50k\n> relfilenumber for system for future expansion and start user range\n> from there).\n\nA disadvantage of this is that it basically means all the file names\nin new clusters are going to be 10 characters long. That's not a big\ndisadvantage, but it's not wonderful. File names that are only 5-7\ncharacters long are common today, and easier to remember.\n\n> So now system tables have no issues and also the user tables from the\n> old cluster have no issues. But pg_largeobject might get conflict\n> when both old and new cluster are using 56 bits relfilenumber, because\n> it is possible that in the new cluster some other system table gets\n> that relfilenumber which is used by pg_largeobject in the old cluster.\n>\n> This could be resolved if we allocate pg_largeobject's relfilenumber\n> from the user range, that means this relfilenumber will always be the\n> first value from the user range. 
So now if the old and new cluster\n> both are using 56bits relfilenumber then pg_largeobject in both\n> cluster would have got the same relfilenumber and if the old cluster\n> is using the current 32 bits relfilenode system then the whole range\n> of the new cluster is completely different than that of the old\n> cluster.\n\nI think this can work, but it does rely to some extent on the fact\nthat there are no other tables which need to be treated like\npg_largeobject. If there were others, they'd need fixed starting\nRelFileNumber assignments, or some other trick, like renumbering them\ntwice in the cluster, first to a known-unused value and then back to\nthe proper value. You'd have trouble if in the other cluster\npg_largeobject was 4bn+1 and pg_largeobject2 was 4bn+2 and in the new\ncluster the reverse, without some hackery.\n\nI do feel like your idea here has some advantages - my proposal\nrequires rewriting all the catalogs in the new cluster before we do\nanything else, and that's going to take some time even though they\nshould be small. But I also feel like it has some disadvantages: it\nseems to rely on complicated reasoning and special cases more than I'd\nlike.\n\nWhat do other people think?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Aug 2022 11:21:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 8:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 5, 2022 at 3:25 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I think even if we start the range from the 4 billion we can not avoid\n> > keeping two separate ranges for system and user tables otherwise the\n> > next upgrade where old and new clusters both have 56 bits\n> > relfilenumber will get conflicting files. And, for the same reason we\n> > still have to call SetNextRelFileNumber() during upgrade.\n>\n> Well, my proposal to move everything from the new cluster up to higher\n> numbers would address this without requiring two ranges.\n>\n> > So the idea is, we will be having 2 ranges for relfilenumbers, system\n> > range will start from 4 billion and user range maybe something around\n> > 4.1 (I think we can keep it very small though, just reserve 50k\n> > relfilenumber for system for future expansion and start user range\n> > from there).\n>\n> A disadvantage of this is that it basically means all the file names\n> in new clusters are going to be 10 characters long. That's not a big\n> disadvantage, but it's not wonderful. File names that are only 5-7\n> characters long are common today, and easier to remember.\n\nThat's correct.\n\n> > So now system tables have no issues and also the user tables from the\n> > old cluster have no issues. But pg_largeobject might get conflict\n> > when both old and new cluster are using 56 bits relfilenumber, because\n> > it is possible that in the new cluster some other system table gets\n> > that relfilenumber which is used by pg_largeobject in the old cluster.\n> >\n> > This could be resolved if we allocate pg_largeobject's relfilenumber\n> > from the user range, that means this relfilenumber will always be the\n> > first value from the user range. 
So now if the old and new cluster\n> > both are using 56bits relfilenumber then pg_largeobject in both\n> > cluster would have got the same relfilenumber and if the old cluster\n> > is using the current 32 bits relfilenode system then the whole range\n> > of the new cluster is completely different than that of the old\n> > cluster.\n>\n> I think this can work, but it does rely to some extent on the fact\n> that there are no other tables which need to be treated like\n> pg_largeobject. If there were others, they'd need fixed starting\n> RelFileNumber assignments, or some other trick, like renumbering them\n> twice in the cluster, first two a known-unused value and then back to\n> the proper value. You'd have trouble if in the other cluster\n> pg_largeobject was 4bn+1 and pg_largeobject2 was 4bn+2 and in the new\n> cluster the reverse, without some hackery.\n\nAgree, if it has more catalog like pg_largeobject then it would\nrequire some hacking.\n\n> I do feel like your idea here has some advantages - my proposal\n> requires rewriting all the catalogs in the new cluster before we do\n> anything else, and that's going to take some time even though they\n> should be small. But I also feel like it has some disadvantages: it\n> seems to rely on complicated reasoning and special cases more than I'd\n> like.\n\nOne other advantage with your approach is that since we are starting\nthe \"nextrelfilenumber\" after the old cluster's relfilenumber range.\nSo only at the beginning we need to set the \"nextrelfilenumber\" but\nafter that while upgrading each object we don't need to set the\nnextrelfilenumber every time because that is already higher than the\ncomplete old cluster range. In other 2 approaches we will have to try\nto set the nextrelfilenumber everytime we preserve the relfilenumber\nduring upgrade.\n\nOther than these two approaches we have another approach (what the\npatch set is already doing) where we keep the system relfilenumber\nrange same as Oid. 
I know you have already pointed out that this\nmight have some hidden bug but one advantage of this approach is that it is\nsimpler compared to the above two approaches in the sense that it doesn't\nneed to maintain two ranges and it also doesn't need to rewrite all\nsystem tables in the new cluster. So I think it would be good if we\ncan get others' opinions on all these 3 approaches.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Aug 2022 10:58:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Aug 11, 2022 at 10:58 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 9, 2022 at 8:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Aug 5, 2022 at 3:25 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > I think even if we start the range from the 4 billion we can not avoid\n> > > keeping two separate ranges for system and user tables otherwise the\n> > > next upgrade where old and new clusters both have 56 bits\n> > > relfilenumber will get conflicting files. And, for the same reason we\n> > > still have to call SetNextRelFileNumber() during upgrade.\n> >\n> > Well, my proposal to move everything from the new cluster up to higher\n> > numbers would address this without requiring two ranges.\n> >\n> > > So the idea is, we will be having 2 ranges for relfilenumbers, system\n> > > range will start from 4 billion and user range maybe something around\n> > > 4.1 (I think we can keep it very small though, just reserve 50k\n> > > relfilenumber for system for future expansion and start user range\n> > > from there).\n> >\n> > A disadvantage of this is that it basically means all the file names\n> > in new clusters are going to be 10 characters long. That's not a big\n> > disadvantage, but it's not wonderful. File names that are only 5-7\n> > characters long are common today, and easier to remember.\n>\n> That's correct.\n>\n> > > So now system tables have no issues and also the user tables from the\n> > > old cluster have no issues. But pg_largeobject might get conflict\n> > > when both old and new cluster are using 56 bits relfilenumber, because\n> > > it is possible that in the new cluster some other system table gets\n> > > that relfilenumber which is used by pg_largeobject in the old cluster.\n> > >\n> > > This could be resolved if we allocate pg_largeobject's relfilenumber\n> > > from the user range, that means this relfilenumber will always be the\n> > > first value from the user range. 
So now if the old and new cluster\n> > > both are using 56bits relfilenumber then pg_largeobject in both\n> > > cluster would have got the same relfilenumber and if the old cluster\n> > > is using the current 32 bits relfilenode system then the whole range\n> > > of the new cluster is completely different than that of the old\n> > > cluster.\n> >\n> > I think this can work, but it does rely to some extent on the fact\n> > that there are no other tables which need to be treated like\n> > pg_largeobject. If there were others, they'd need fixed starting\n> > RelFileNumber assignments, or some other trick, like renumbering them\n> > twice in the cluster, first two a known-unused value and then back to\n> > the proper value. You'd have trouble if in the other cluster\n> > pg_largeobject was 4bn+1 and pg_largeobject2 was 4bn+2 and in the new\n> > cluster the reverse, without some hackery.\n>\n> Agree, if it has more catalog like pg_largeobject then it would\n> require some hacking.\n>\n> > I do feel like your idea here has some advantages - my proposal\n> > requires rewriting all the catalogs in the new cluster before we do\n> > anything else, and that's going to take some time even though they\n> > should be small. But I also feel like it has some disadvantages: it\n> > seems to rely on complicated reasoning and special cases more than I'd\n> > like.\n>\n> One other advantage with your approach is that since we are starting\n> the \"nextrelfilenumber\" after the old cluster's relfilenumber range.\n> So only at the beginning we need to set the \"nextrelfilenumber\" but\n> after that while upgrading each object we don't need to set the\n> nextrelfilenumber every time because that is already higher than the\n> complete old cluster range. 
In other 2 approaches we will have to try\n> to set the nextrelfilenumber everytime we preserve the relfilenumber\n> during upgrade.\n\nI was also wondering whether we will get the max \"relfilenumber\"\nfrom the old cluster at the cluster level or per database level? I\nmean if we want to get it at the database level we can run a simple query on\npg_class and get it, but there we will also need to see how to handle\nthe mapped relations if they are rewritten. I don't think we can get\nthe max relfilenumber from the old cluster at the cluster level.\nMaybe in the newer version we can expose a function from the server to\njust return the NextRelFileNumber and that would be the max\nrelfilenumber, but I'm not sure how to do that in the old version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Aug 2022 13:45:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
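[Editor's note: the per-database scan discussed above could feed a cluster-wide maximum roughly as sketched below. This is an illustrative sketch in plain Python, not pg_upgrade code; it assumes per-database pg_class scans, and relies on the fact that mapped relations store 0 in pg_class.relfilenode, their real file numbers living in the relmapper.]

```python
# Derive the old cluster's highest in-use relfilenumber from per-database
# pg_class scans. Mapped relations show relfilenode = 0 in pg_class, so
# they are skipped here; covering their relmapper-assigned numbers would
# need a separate step, which is exactly the complication the mail raises.

def cluster_max_relfilenumber(per_database_relfilenodes):
    best = 0
    for relfilenodes in per_database_relfilenodes.values():
        for num in relfilenodes:
            if num != 0 and num > best:   # 0 marks a mapped relation
                best = num
    return best
```

The new cluster's next-relfilenumber counter would then be set just above the value this returns.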
{
"msg_contents": "On Sat, Jul 30, 2022 at 1:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 7:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > There was also an issue where the user table from the old cluster's\n> > relfilenode could conflict with the system table of the new cluster.\n> > As a solution currently for system table object (while creating\n> > storage first time) we are keeping the low range of relfilenumber,\n> > basically we are using the same relfilenumber as OID so that during\n> > upgrade the normal user table from the old cluster will not conflict\n> > with the system tables in the new cluster. But with this solution\n> > Robert told me (in off list chat) a problem that in future if we want\n> > to make relfilenumber completely unique within a cluster by\n> > implementing the CREATEDB differently then we can not do that as we\n> > have created fixed relfilenodes for the system tables.\n> >\n> > I am not sure what exactly we can do to avoid that because even if we\n> > do something to avoid that in the new cluster the old cluster might\n> > be already using the non-unique relfilenode so after upgrading the new\n> > cluster will also get those non-unique relfilenode.\n>\n> I think this aspect of the patch could use some more discussion.\n>\n> To recap, the problem is that pg_upgrade mustn't discover that a\n> relfilenode that is being migrated from the old cluster is being used\n> for some other table in the new cluster. Since the new cluster should\n> only contain system tables that we assume have never been rewritten,\n> they'll all have relfilenodes equal to their OIDs, and thus less than\n> 16384. On the other hand all the user tables from the old cluster will\n> have relfilenodes greater than 16384, so we're fine. pg_largeobject,\n> which also gets migrated, is a special case. 
Since we don't change OID\n> assignments from version to version, it should have either the same\n> relfilenode value in the old and new clusters, if never rewritten, or\n> else the value in the old cluster will be greater than 16384, in which\n> case no conflict is possible.\n>\n> But if we just assign all relfilenode values from a central counter,\n> then we have got trouble. If the new version has more system catalog\n> tables than the old version, then some value that got used for a user\n> table in the old version might get used for a system table in the new\n> version, which is a problem. One idea for fixing this is to have two\n> RelFileNumber ranges: a system range (small values) and a user range.\n> System tables get values in the system range initially, and in the\n> user range when first rewritten. User tables always get values in the\n> user range. Everything works fine in this scenario except maybe for\n> pg_largeobject: what if it gets one value from the system range in the\n> old cluster, and a different value from the system range in the new\n> cluster, but some other system table in the new cluster gets the value\n> that pg_largeobject had in the old cluster? Then we've got trouble.\n>\n\nTo solve that problem, how about rewriting the system table in the new\ncluster which has a conflicting relfilenode? I think we can probably\ndo this conflict checking before processing the tables from the old\ncluster.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 13:25:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 1:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Jul 30, 2022 at 1:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 7:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > There was also an issue where the user table from the old cluster's\n> > > relfilenode could conflict with the system table of the new cluster.\n> > > As a solution currently for system table object (while creating\n> > > storage first time) we are keeping the low range of relfilenumber,\n> > > basically we are using the same relfilenumber as OID so that during\n> > > upgrade the normal user table from the old cluster will not conflict\n> > > with the system tables in the new cluster. But with this solution\n> > > Robert told me (in off list chat) a problem that in future if we want\n> > > to make relfilenumber completely unique within a cluster by\n> > > implementing the CREATEDB differently then we can not do that as we\n> > > have created fixed relfilenodes for the system tables.\n> > >\n> > > I am not sure what exactly we can do to avoid that because even if we\n> > > do something to avoid that in the new cluster the old cluster might\n> > > be already using the non-unique relfilenode so after upgrading the new\n> > > cluster will also get those non-unique relfilenode.\n> >\n> > I think this aspect of the patch could use some more discussion.\n> >\n> > To recap, the problem is that pg_upgrade mustn't discover that a\n> > relfilenode that is being migrated from the old cluster is being used\n> > for some other table in the new cluster. Since the new cluster should\n> > only contain system tables that we assume have never been rewritten,\n> > they'll all have relfilenodes equal to their OIDs, and thus less than\n> > 16384. On the other hand all the user tables from the old cluster will\n> > have relfilenodes greater than 16384, so we're fine. 
pg_largeobject,\n> > which also gets migrated, is a special case. Since we don't change OID\n> > assignments from version to version, it should have either the same\n> > relfilenode value in the old and new clusters, if never rewritten, or\n> > else the value in the old cluster will be greater than 16384, in which\n> > case no conflict is possible.\n> >\n> > But if we just assign all relfilenode values from a central counter,\n> > then we have got trouble. If the new version has more system catalog\n> > tables than the old version, then some value that got used for a user\n> > table in the old version might get used for a system table in the new\n> > version, which is a problem. One idea for fixing this is to have two\n> > RelFileNumber ranges: a system range (small values) and a user range.\n> > System tables get values in the system range initially, and in the\n> > user range when first rewritten. User tables always get values in the\n> > user range. Everything works fine in this scenario except maybe for\n> > pg_largeobject: what if it gets one value from the system range in the\n> > old cluster, and a different value from the system range in the new\n> > cluster, but some other system table in the new cluster gets the value\n> > that pg_largeobject had in the old cluster? Then we've got trouble.\n> >\n>\n> To solve that problem, how about rewriting the system table in the new\n> cluster which has a conflicting relfilenode? I think we can probably\n> do this conflict checking before processing the tables from the old\n> cluster.\n>\n\nI think while rewriting of system table during the upgrade, we need to\nensure that it gets relfilenumber from the system range, otherwise, if\nwe allocate it from the user range, there will be a chance of conflict\nwith the user tables from the old cluster. 
Another way could be to set\nthe next-relfilenumber counter for the new cluster to the value from\nthe old cluster as mentioned by Robert in his previous email [1].\n\n[1] - https://www.postgresql.org/message-id/CA%2BTgmoYsNiF8JGZ%2BKp7Zgcct67Qk%2B%2BYAp%2B1ybOQ0qomUayn%2B7A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 14:11:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 3:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> To solve that problem, how about rewriting the system table in the new\n> cluster which has a conflicting relfilenode? I think we can probably\n> do this conflict checking before processing the tables from the old\n> cluster.\n\nThanks for chiming in.\n\nRight now, there are two parts to the relfilenumber preservation\nsystem, and this scheme doesn't quite fit into either of them. First,\nthe dump includes commands to set pg_class.relfilenode in the new\ncluster to the same value that it had in the old cluster. The dump\ncan't include any SQL commands that depend on what's happening in the\nnew cluster because pg_dump(all) only connects to a single cluster,\nwhich in this case is the old cluster. Second, pg_upgrade itself\ncopies the files from the old cluster to the new cluster. This doesn't\ninvolve a database connection at all. So there's no part of the\ncurrent relfilenode preservation mechanism that can look at the old\nAND the new database and decide on some SQL to execute against the new\ndatabase.\n\nI thought for a while that we could use the information that's already\ngathered by get_rel_infos() to do what you're suggesting here, but it\ndoesn't quite work, because that function excludes system tables, and\nwe can't afford to do that here. We'd either need to modify that query\nto include system tables - at least for the new cluster - or run a\nseparate one to gather information about system tables in the new\ncluster. Then, we could put all the pg_class.relfilenode values we\nfound in the new cluster into a hash table, loop over the list of rels\nthis function found in the old cluster, and for each one, probe into\nthe hash table. If we find a match, that's a system table that needs\nto be moved out of the way before calling create_new_objects(), or\nmaybe inside that function but before it runs pg_restore.\n\nThat doesn't seem too crazy, I think. 
It's a little bit of new\nmechanism, but it doesn't sound horrific. It's got the advantage of\nbeing significantly cheaper than my proposal of moving everything out\nof the way unconditionally, and at the same time it retains one of the\nkey advantages of that proposal - IMV, anyway - which is that we don't\nneed separate relfilenode ranges for user and system objects any more.\nSo I guess on balance I kind of like it, but maybe I'm missing\nsomething.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Aug 2022 16:16:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
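[Editor's note: the hash-table pass described in the message above can be sketched as follows. This is a toy illustration in plain Python, not actual pg_upgrade code; the data shapes and the function name are assumptions.]

```python
# Sketch of the proposed conflict check: collect the new cluster's
# pg_class.relfilenode values into a hash table, then probe it with each
# relfilenode that pg_upgrade will preserve from the old cluster. Any hit
# names a new-cluster system table that must be moved out of the way
# (rewritten) before pg_restore runs.

def find_conflicting_system_tables(new_cluster_rels, old_cluster_relfilenodes):
    # new_cluster_rels: {relname: relfilenode} for the new cluster's
    # system tables; old_cluster_relfilenodes: values to be preserved.
    by_number = {num: name for name, num in new_cluster_rels.items()}
    conflicts = []
    for old_num in old_cluster_relfilenodes:
        hit = by_number.get(old_num)   # O(1) probe per old relation
        if hit is not None:
            conflicts.append(hit)
    return conflicts
```

Only the (typically empty) conflict list needs rewriting, which is why this is cheaper than unconditionally moving every catalog out of the way.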
{
"msg_contents": "On Tue, Aug 23, 2022 at 1:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Aug 22, 2022 at 3:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > To solve that problem, how about rewriting the system table in the new\n> > cluster which has a conflicting relfilenode? I think we can probably\n> > do this conflict checking before processing the tables from the old\n> > cluster.\n>\n> Thanks for chiming in.\n>\n> Right now, there are two parts to the relfilenumber preservation\n> system, and this scheme doesn't quite fit into either of them. First,\n> the dump includes commands to set pg_class.relfilenode in the new\n> cluster to the same value that it had in the old cluster. The dump\n> can't include any SQL commands that depend on what's happening in the\n> new cluster because pg_dump(all) only connects to a single cluster,\n> which in this case is the old cluster. Second, pg_upgrade itself\n> copies the files from the old cluster to the new cluster. This doesn't\n> involve a database connection at all. So there's no part of the\n> current relfilenode preservation mechanism that can look at the old\n> AND the new database and decide on some SQL to execute against the new\n> database.\n>\n> I thought for a while that we could use the information that's already\n> gathered by get_rel_infos() to do what you're suggesting here, but it\n> doesn't quite work, because that function excludes system tables, and\n> we can't afford to do that here. We'd either need to modify that query\n> to include system tables - at least for the new cluster - or run a\n> separate one to gather information about system tables in the new\n> cluster. Then, we could put all the pg_class.relfilenode values we\n> found in the new cluster into a hash table, loop over the list of rels\n> this function found in the old cluster, and for each one, probe into\n> the hash table. 
If we find a match, that's a system table that needs\n> to be moved out of the way before calling create_new_objects(), or\n> maybe inside that function but before it runs pg_restore.\n>\n> That doesn't seem too crazy, I think. It's a little bit of new\n> mechanism, but it doesn't sound horrific. It's got the advantage of\n> being significantly cheaper than my proposal of moving everything out\n> of the way unconditionally, and at the same time it retains one of the\n> key advantages of that proposal - IMV, anyway - which is that we don't\n> need separate relfilenode ranges for user and system objects any more.\n> So I guess on balance I kind of like it, but maybe I'm missing\n> something.\n\nOkay, so this seems exactly the same as your previous proposal but\ninstead of unconditionally rewriting all the system tables we will\nrewrite only those that conflict with the user tables or pg_largeobject from\nthe previous cluster. Although it might have additional\nimplementation complexity on the pg upgrade side, it seems cheaper\nthan rewriting everything.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 08:33:14 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 8:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 1:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Mon, Aug 22, 2022 at 3:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > To solve that problem, how about rewriting the system table in the new\n> > > cluster which has a conflicting relfilenode? I think we can probably\n> > > do this conflict checking before processing the tables from the old\n> > > cluster.\n> >\n> > Thanks for chiming in.\n> >\n> > Right now, there are two parts to the relfilenumber preservation\n> > system, and this scheme doesn't quite fit into either of them. First,\n> > the dump includes commands to set pg_class.relfilenode in the new\n> > cluster to the same value that it had in the old cluster. The dump\n> > can't include any SQL commands that depend on what's happening in the\n> > new cluster because pg_dump(all) only connects to a single cluster,\n> > which in this case is the old cluster. Second, pg_upgrade itself\n> > copies the files from the old cluster to the new cluster. This doesn't\n> > involve a database connection at all. So there's no part of the\n> > current relfilenode preservation mechanism that can look at the old\n> > AND the new database and decide on some SQL to execute against the new\n> > database.\n> >\n> > I thought for a while that we could use the information that's already\n> > gathered by get_rel_infos() to do what you're suggesting here, but it\n> > doesn't quite work, because that function excludes system tables, and\n> > we can't afford to do that here. We'd either need to modify that query\n> > to include system tables - at least for the new cluster - or run a\n> > separate one to gather information about system tables in the new\n> > cluster. 
Then, we could put all the pg_class.relfilenode values we\n> > found in the new cluster into a hash table, loop over the list of rels\n> > this function found in the old cluster, and for each one, probe into\n> > the hash table. If we find a match, that's a system table that needs\n> > to be moved out of the way before calling create_new_objects(), or\n> > maybe inside that function but before it runs pg_restore.\n> >\n> > That doesn't seem too crazy, I think. It's a little bit of new\n> > mechanism, but it doesn't sound horrific. It's got the advantage of\n> > being significantly cheaper than my proposal of moving everything out\n> > of the way unconditionally, and at the same time it retains one of the\n> > key advantages of that proposal - IMV, anyway - which is that we don't\n> > need separate relfilenode ranges for user and system objects any more.\n> > So I guess on balance I kind of like it, but maybe I'm missing\n> > something.\n>\n> Okay, so this seems exactly the same as your previous proposal but\n> instead of unconditionally rewriting all the system tables we will\n> rewrite only those that conflict with the user table or pg_largeobject from\n> the previous cluster. Although it might have additional\n> implementation complexity on the pg upgrade side, it seems cheaper\n> than rewriting everything.\n\nOTOH, if we keep the two separate ranges for the user and system table\nthen we don't need all this complex logic of conflict checking. From\nthe old cluster, we can just remember the relfilenumber of the\npg_largeobject, and in the new cluster before trying to restore we can\njust query the new cluster pg_class and find out whether it is used by\nany system table and if so then we can just rewrite that system table.\nAnd I think using two ranges might not be that complicated because as\nsoon as we are done with the initdb we can just set NextRelFileNumber\nto the first user range relfilenumber so I think this could be the\nsimplest solution.
And I think what Amit is suggesting is something\non this line?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 11:36:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 11:36 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 8:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Aug 23, 2022 at 1:46 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 22, 2022 at 3:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > To solve that problem, how about rewriting the system table in the new\n> > > > cluster which has a conflicting relfilenode? I think we can probably\n> > > > do this conflict checking before processing the tables from the old\n> > > > cluster.\n> > >\n> > > Thanks for chiming in.\n> > >\n> > > Right now, there are two parts to the relfilenumber preservation\n> > > system, and this scheme doesn't quite fit into either of them. First,\n> > > the dump includes commands to set pg_class.relfilenode in the new\n> > > cluster to the same value that it had in the old cluster. The dump\n> > > can't include any SQL commands that depend on what's happening in the\n> > > new cluster because pg_dump(all) only connects to a single cluster,\n> > > which in this case is the old cluster. Second, pg_upgrade itself\n> > > copies the files from the old cluster to the new cluster. This doesn't\n> > > involve a database connection at all. So there's no part of the\n> > > current relfilenode preservation mechanism that can look at the old\n> > > AND the new database and decide on some SQL to execute against the new\n> > > database.\n> > >\n> > > I thought for a while that we could use the information that's already\n> > > gathered by get_rel_infos() to do what you're suggesting here, but it\n> > > doesn't quite work, because that function excludes system tables, and\n> > > we can't afford to do that here. We'd either need to modify that query\n> > > to include system tables - at least for the new cluster - or run a\n> > > separate one to gather information about system tables in the new\n> > > cluster. 
Then, we could put all the pg_class.relfilenode values we\n> > > found in the new cluster into a hash table, loop over the list of rels\n> > > this function found in the old cluster, and for each one, probe into\n> > > the hash table. If we find a match, that's a system table that needs\n> > > to be moved out of the way before calling create_new_objects(), or\n> > > maybe inside that function but before it runs pg_restore.\n> > >\n> > > That doesn't seem too crazy, I think. It's a little bit of new\n> > > mechanism, but it doesn't sound horrific. It's got the advantage of\n> > > being significantly cheaper than my proposal of moving everything out\n> > > of the way unconditionally, and at the same time it retains one of the\n> > > key advantages of that proposal - IMV, anyway - which is that we don't\n> > > need separate relfilenode ranges for user and system objects any more.\n> > > So I guess on balance I kind of like it, but maybe I'm missing\n> > > something.\n> >\n> > Okay, so this seems exactly the same as your previous proposal but\n> > instead of unconditionally rewriting all the system tables we will\n> > rewrite only those conflict with the user table or pg_largeobject from\n> > the previous cluster. Although it might have additional\n> > implementation complexity on the pg upgrade side, it seems cheaper\n> > than rewriting everything.\n>\n> OTOH, if we keep the two separate ranges for the user and system table\n> then we don't need all this complex logic of conflict checking. 
From\n> the old cluster, we can just remember the relfilenumbr of the\n> pg_largeobject, and in the new cluster before trying to restore we can\n> just query the new cluster pg_class and find out whether it is used by\n> any system table and if so then we can just rewrite that system table.\n>\n\nBefore re-write of that system table, I think we need to set\nNextRelFileNumber to a number greater than the max relfilenumber from\nthe old cluster, otherwise, it can later conflict with some user\ntable.\n\n> And I think using two ranges might not be that complicated because as\n> soon as we are done with the initdb we can just set NextRelFileNumber\n> to the first user range relfilenumber so I think this could be the\n> simplest solution. And I think what Amit is suggesting is something\n> on this line?\n>\n\nYeah, I had thought of checking only pg_largeobject. I think the\nadvantage of having separate ranges is that we have a somewhat simpler\nlogic in the upgrade but OTOH the other scheme has the advantage of\nhaving a single allocation scheme. Do we see any other pros/cons of\none over the other?\n\nOne more thing we may want to think about is what if there are tables\ncreated by extension? For example, I think BDR creates some tables\nlike node_group, conflict_history, etc. Now, I think if such an\nextension is created by default, both old and new clusters will have\nthose tables. Isn't there a chance of relfilenumber conflict in such\ncases?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 Aug 2022 15:16:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > OTOH, if we keep the two separate ranges for the user and system table\n> > then we don't need all this complex logic of conflict checking. From\n> > the old cluster, we can just remember the relfilenumbr of the\n> > pg_largeobject, and in the new cluster before trying to restore we can\n> > just query the new cluster pg_class and find out whether it is used by\n> > any system table and if so then we can just rewrite that system table.\n> >\n>\n> Before re-write of that system table, I think we need to set\n> NextRelFileNumber to a number greater than the max relfilenumber from\n> the old cluster, otherwise, it can later conflict with some user\n> table.\n\nYes we will need to do that.\n\n> > And I think using two ranges might not be that complicated because as\n> > soon as we are done with the initdb we can just set NextRelFileNumber\n> > to the first user range relfilenumber so I think this could be the\n> > simplest solution. And I think what Amit is suggesting is something\n> > on this line?\n> >\n>\n> Yeah, I had thought of checking only pg_largeobject. I think the\n> advantage of having separate ranges is that we have a somewhat simpler\n> logic in the upgrade but OTOH the other scheme has the advantage of\n> having a single allocation scheme. Do we see any other pros/cons of\n> one over the other?\n\nI feel having a separate range is not much different from having a\nsingle allocation scheme, after cluster initialization, we will just\nhave to set the NextRelFileNumber to something called\nFirstNormalRelFileNumber which looks fine to me.\n\n> One more thing we may want to think about is what if there are tables\n> created by extension? For example, I think BDR creates some tables\n> like node_group, conflict_history, etc. Now, I think if such an\n> extension is created by default, both old and new clusters will have\n> those tables. 
Isn't there a chance of relfilenumber conflict in such\n> cases?\n\nShouldn't they behave as normal user tables? Before the upgrade, the\nnew cluster cannot have any tables other than system tables, and\ntables created by an extension should be restored the same way other\nuser tables are.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 15:28:25 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 2:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> OTOH, if we keep the two separate ranges for the user and system table\n> then we don't need all this complex logic of conflict checking.\n\nTrue. That's the downside. The question is whether it's worth adding\nsome complexity to avoid needing separate ranges.\n\nHonestly, if we don't care about having separate ranges, we can do\nsomething even simpler and just make the starting relfilenumber for\nsystem tables same as the OID. Then we don't have to do anything at\nall, outside of not changing the OID assigned to pg_largeobject in a\nfuture release. Then as long as pg_upgrade is targeting a new cluster\nwith completely fresh databases that have not had any system table\nrewrites so far, there can't be any conflict.\n\nAnd perhaps that is the best solution after all, but while it is\nsimple in terms of code, I feel it's a bit complicated for human\nbeings. It's very simple to understand the scheme that Amit proposed:\nif there's anything in the new cluster that would conflict, we move it\nout of the way. We don't have to assume the new cluster hasn't had any\ntable rewrites. We don't have to nail down starting relfilenumber\nassignments for system tables. We don't have to worry about\nrelfilenumber or OID assignments changing between releases.\npg_largeobject is not a special case. There are no special ranges of\nOIDs or relfilenumbers required. It just straight up works -- all the\ntime, no matter what, end of story.\n\nThe other schemes we're talking about here all require a bunch of\nassumptions about stuff like what I just mentioned. We can certainly\ndo it that way, and maybe it's even for the best. But I feel like it's\na little bit fragile. 
Maybe some future change gets blocked because it\nwould break one of the assumptions that the system relies on, or maybe\nsomeone doesn't even realize there's an issue and changes something\nthat introduces a bug into this system. Or on the other hand maybe\nnot. But I think there's at least some value in considering whether\nadding a little more code might actually make things simpler to reason\nabout, and whether that might be a good enough reason to do it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:30:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 8:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 2:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > OTOH, if we keep the two separate ranges for the user and system table\n> > then we don't need all this complex logic of conflict checking.\n>\n> True. That's the downside. The question is whether it's worth adding\n> some complexity to avoid needing separate ranges.\n>\n> Honestly, if we don't care about having separate ranges, we can do\n> something even simpler and just make the starting relfilenumber for\n> system tables same as the OID. Then we don't have to do anything at\n> all, outside of not changing the OID assigned to pg_largeobject in a\n> future release. Then as long as pg_upgrade is targeting a new cluster\n> with completely fresh databases that have not had any system table\n> rewrites so far, there can't be any conflict.\n>\n> And perhaps that is the best solution after all, but while it is\n> simple in terms of code, I feel it's a bit complicated for human\n> beings. It's very simple to understand the scheme that Amit proposed:\n> if there's anything in the new cluster that would conflict, we move it\n> out of the way. We don't have to assume the new cluster hasn't had any\n> table rewrites. We don't have to nail down starting relfilenumber\n> assignments for system tables. We don't have to worry about\n> relfilenumber or OID assignments changing between releases.\n> pg_largeobject is not a special case. There are no special ranges of\n> OIDs or relfilenumbers required. It just straight up works -- all the\n> time, no matter what, end of story.\n>\n\nThis sounds simple to understand. It seems we always create new system\ntables in the new cluster before the upgrade, so I think it is safe to\nassume there won't be any table rewrite in it. 
OTOH, if the\nrelfilenumber allocation scheme is robust to deal with table rewrites\nthen we probably don't need to worry about this assumption changing in\nthe future.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Aug 2022 15:31:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 3:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > One more thing we may want to think about is what if there are tables\n> > created by extension? For example, I think BDR creates some tables\n> > like node_group, conflict_history, etc. Now, I think if such an\n> > extension is created by default, both old and new clusters will have\n> > those tables. Isn't there a chance of relfilenumber conflict in such\n> > cases?\n>\n> Shouldn't they behave as a normal user table? because before upgrade\n> anyway new cluster can not have any table other than system tables and\n> those tables created by an extension should also be restored as other\n> user table does.\n>\n\nRight.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Aug 2022 15:32:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 7:57 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have fixed other comments, and also fixed comments from Alvaro to\n> use %lld instead of INT64_FORMAT inside the ereport and wherever he\n> suggested.\n\nNotwithstanding the ongoing discussion about the exact approach for\nthe main patch, it seemed OK to push the preparatory patch you posted\nhere, so I have now done that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 17:01:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 8:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 2:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > OTOH, if we keep the two separate ranges for the user and system table\n> > then we don't need all this complex logic of conflict checking.\n>\n> True. That's the downside. The question is whether it's worth adding\n> some complexity to avoid needing separate ranges.\n\nOther than complexity, we will have to check the conflict for all the\nuser table's relfilenumber from the old cluster into the hash build on\nthe new cluster's relfilenumber, isn't it extra overhead if there are\na lot of user tables? But I think we are already restoring all those\ntables in the new cluster so compared to that it will be very small.\n\n> Honestly, if we don't care about having separate ranges, we can do\n> something even simpler and just make the starting relfilenumber for\n> system tables same as the OID. Then we don't have to do anything at\n> all, outside of not changing the OID assigned to pg_largeobject in a\n> future release. Then as long as pg_upgrade is targeting a new cluster\n> with completely fresh databases that have not had any system table\n> rewrites so far, there can't be any conflict.\n\nI think having the OID-based system and having two ranges are not\nexactly the same. Because if we have the OID-based relfilenumber\nallocation for system table (initially) and then later allocate from\nthe nextRelFileNumber counter then it seems like a mix of old system\n(where actually OID and relfilenumber are tightly connected) and the\nnew system where nextRelFileNumber is completely independent counter.\nOTOH having two ranges means logically we are not making dependent on\nOID we are just allocating from a central counter but after catalog\ninitialization, we will leave some gap and start from a new range. 
So\nI don't think this system is hard to explain.\n\n> And perhaps that is the best solution after all, but while it is\n> simple in terms of code, I feel it's a bit complicated for human\n> beings. It's very simple to understand the scheme that Amit proposed:\n> if there's anything in the new cluster that would conflict, we move it\n> out of the way. We don't have to assume the new cluster hasn't had any\n> table rewrites. We don't have to nail down starting relfilenumber\n> assignments for system tables. We don't have to worry about\n> relfilenumber or OID assignments changing between releases.\n> pg_largeobject is not a special case. There are no special ranges of\n> OIDs or relfilenumbers required. It just straight up works -- all the\n> time, no matter what, end of story.\n\nI agree that this system is easy to explain: we just rewrite anything\nthat conflicts, so it looks more future-proof. Okay, I\nwill try this solution and post the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 17:26:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 5:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> I agree on this that this system is easy to explain that we just\n> rewrite anything that conflicts so looks more future-proof. Okay, I\n> will try this solution and post the patch.\n\nWhile working on this solution I noticed one issue. Basically, the\nproblem is that during binary upgrade when we try to rewrite a heap we\nwould expect that “binary_upgrade_next_heap_pg_class_oid” and\n“binary_upgrade_next_heap_pg_class_relfilenumber” are already set for\ncreating a new heap. But we are not preserving anything so we don't\nhave those values. One option to this problem is that we can first\nstart the postmaster in non-binary upgrade mode perform all conflict\nchecking and rewrite and stop the postmaster. Then start postmaster\nagain and perform the restore as we are doing now. Although we will\nhave to start/stop the postmaster one extra time we have a solution.\n\nBut while thinking about this I started to think that since now we are\ncompletely decoupling the concept of Oid and relfilenumber then\nlogically during REWRITE we should only be allocating new\nrelfilenumber but we don’t really need to allocate the Oid at all.\nYeah, we can do that if inside make_new_heap() if we pass the\nOIDOldHeap to heap_create_with_catalog(), then it will just create new\nstorage(relfilenumber) but not a new Oid. But the problem is that the\nATRewriteTable() and finish_heap_swap() functions are completely based\non the relation cache. So now if we only create a new relfilenumber\nbut not a new Oid then we will have to change this infrastructure to\ncopy at smgr level.\n\nThoughts?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:30:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 7:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> While working on this solution I noticed one issue. Basically, the\n> problem is that during binary upgrade when we try to rewrite a heap we\n> would expect that “binary_upgrade_next_heap_pg_class_oid” and\n> “binary_upgrade_next_heap_pg_class_relfilenumber” are already set for\n> creating a new heap. But we are not preserving anything so we don't\n> have those values. One option to this problem is that we can first\n> start the postmaster in non-binary upgrade mode perform all conflict\n> checking and rewrite and stop the postmaster. Then start postmaster\n> again and perform the restore as we are doing now. Although we will\n> have to start/stop the postmaster one extra time we have a solution.\n\nYeah, that seems OK. Or we could add a new function, like\nbinary_upgrade_allow_relation_oid_and_relfilenode_assignment(bool).\nNot sure which way is better.\n\n> But while thinking about this I started to think that since now we are\n> completely decoupling the concept of Oid and relfilenumber then\n> logically during REWRITE we should only be allocating new\n> relfilenumber but we don’t really need to allocate the Oid at all.\n> Yeah, we can do that if inside make_new_heap() if we pass the\n> OIDOldHeap to heap_create_with_catalog(), then it will just create new\n> storage(relfilenumber) but not a new Oid. But the problem is that the\n> ATRewriteTable() and finish_heap_swap() functions are completely based\n> on the relation cache. So now if we only create a new relfilenumber\n> but not a new Oid then we will have to change this infrastructure to\n> copy at smgr level.\n\nI think it would be a good idea to continue preserving the OIDs. If\nnothing else, it makes debugging way easier, but also, there might be\nuser-defined regclass columns or something. Note the comments in\ncheck_for_reg_data_type_usage().\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:03:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 9:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 26, 2022 at 7:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > While working on this solution I noticed one issue. Basically, the\n> > problem is that during binary upgrade when we try to rewrite a heap we\n> > would expect that “binary_upgrade_next_heap_pg_class_oid” and\n> > “binary_upgrade_next_heap_pg_class_relfilenumber” are already set for\n> > creating a new heap. But we are not preserving anything so we don't\n> > have those values. One option to this problem is that we can first\n> > start the postmaster in non-binary upgrade mode perform all conflict\n> > checking and rewrite and stop the postmaster. Then start postmaster\n> > again and perform the restore as we are doing now. Although we will\n> > have to start/stop the postmaster one extra time we have a solution.\n>\n> Yeah, that seems OK. Or we could add a new function, like\n> binary_upgrade_allow_relation_oid_and_relfilenode_assignment(bool).\n> Not sure which way is better.\n\nI have found one more issue with this approach of rewriting the\nconflicting table. Earlier I thought we could do the conflict\nchecking and rewriting inside create_new_objects() right before the\nrestore command. But after implementing (while testing) this I\nrealized that we DROP and CREATE the database while restoring the dump\nthat means it will again generate the conflicting system tables. So\ntheoretically the rewriting should go in between the CREATE DATABASE\nand restoring the object but as of now both create database and\nrestoring other objects are part of a single dump file. I haven't yet\nanalyzed how feasible it is to generate the dump in two parts, first\npart just to create the database and in second part restore the rest\nof the object.\n\nThoughts?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 18:14:57 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 8:45 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have found one more issue with this approach of rewriting the\n> conflicting table. Earlier I thought we could do the conflict\n> checking and rewriting inside create_new_objects() right before the\n> restore command. But after implementing (while testing) this I\n> realized that we DROP and CREATE the database while restoring the dump\n> that means it will again generate the conflicting system tables. So\n> theoretically the rewriting should go in between the CREATE DATABASE\n> and restoring the object but as of now both create database and\n> restoring other objects are part of a single dump file. I haven't yet\n> analyzed how feasible it is to generate the dump in two parts, first\n> part just to create the database and in second part restore the rest\n> of the object.\n>\n> Thoughts?\n\nWell, that's very awkward. It doesn't seem like it would be very\ndifficult to teach pg_upgrade to call pg_restore without --clean and\njust do the drop database itself, but that doesn't really help,\nbecause pg_restore will in any event be creating the new database.\nThat doesn't seem like something we can practically refactor out,\nbecause only pg_dump knows what properties to use when creating the\nnew database. What we could do is have the dump include a command like\nSELECT pg_binary_upgrade_move_things_out_of_the_way(some_arguments_here),\nbut that doesn't really help very much, because passing the whole list\nof relfilenode values from the old database seems pretty certain to be\na bad idea. 
The whole idea here was that we'd be able to build a hash\ntable on the new database's system table OIDs, and it seems like\nthat's not going to work.\n\nWe could try to salvage some portion of the idea by making\npg_binary_upgrade_move_things_out_of_the_way() take a more restricted\nset of arguments, like the smallest and largest relfilenode values\nfrom the old database, and then we'd just need to move things that\noverlap. But that feels pretty hit-or-miss to me as to whether it\nactually avoids any work, and\npg_binary_upgrade_move_things_out_of_the_way() might also be annoying\nto write. So perhaps we have to go back to the drawing board here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:53:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 9:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Well, that's very awkward. It doesn't seem like it would be very\n> difficult to teach pg_upgrade to call pg_restore without --clean and\n> just do the drop database itself, but that doesn't really help,\n> because pg_restore will in any event be creating the new database.\n> That doesn't seem like something we can practically refactor out,\n> because only pg_dump knows what properties to use when creating the\n> new database. What we could do is have the dump include a command like\n> SELECT pg_binary_upgrade_move_things_out_of_the_way(some_arguments_here),\n> but that doesn't really help very much, because passing the whole list\n> of relfilenode values from the old database seems pretty certain to be\n> a bad idea. The whole idea here was that we'd be able to build a hash\n> table on the new database's system table OIDs, and it seems like\n> that's not going to work.\n\nRight.\n\n> We could try to salvage some portion of the idea by making\n> pg_binary_upgrade_move_things_out_of_the_way() take a more restricted\n> set of arguments, like the smallest and largest relfilenode values\n> from the old database, and then we'd just need to move things that\n> overlap. But that feels pretty hit-or-miss to me as to whether it\n> actually avoids any work, and\n> pg_binary_upgrade_move_things_out_of_the_way() might also be annoying\n> to write. So perhaps we have to go back to the drawing board here.\n\nSo as of now, we have two open options 1) the current approach and\nwhat patch is following to use Oid as relfilenode for the system\ntables when initially created. 2) call\npg_binary_upgrade_move_things_out_of_the_way() which force rewrite all\nthe system tables.\n\nAnother idea that I am not very sure how feasible is. 
Can we change\nthe dump such that in binary upgrade mode it will not use template0 as\na template database (in creating database command) but instead some\nnew database as a template e.g. template-XYZ? And later for conflict\nchecking, we will create this template-XYZ database on the new cluster\nand then we will perform all the conflict check (from all the\ndatabases of the old cluster) and rewrite operations on this database.\nAnd later all the databases will be created using template-XYZ as the\ntemplate and all the rewriting stuff we have done is still intact.\nThe problems I could think of are 1) only for a binary upgrade we will\nhave to change the pg_dump. 2) we will have to use another database\nname as the reserved database name but what if that name is already in\nuse in the previous cluster?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 3 Sep 2022 13:50:36 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 1:50 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 30, 2022 at 9:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > Well, that's very awkward. It doesn't seem like it would be very\n> > difficult to teach pg_upgrade to call pg_restore without --clean and\n> > just do the drop database itself, but that doesn't really help,\n> > because pg_restore will in any event be creating the new database.\n> > That doesn't seem like something we can practically refactor out,\n> > because only pg_dump knows what properties to use when creating the\n> > new database. What we could do is have the dump include a command like\n> > SELECT pg_binary_upgrade_move_things_out_of_the_way(some_arguments_here),\n> > but that doesn't really help very much, because passing the whole list\n> > of relfilenode values from the old database seems pretty certain to be\n> > a bad idea. The whole idea here was that we'd be able to build a hash\n> > table on the new database's system table OIDs, and it seems like\n> > that's not going to work.\n>\n> Right.\n>\n> > We could try to salvage some portion of the idea by making\n> > pg_binary_upgrade_move_things_out_of_the_way() take a more restricted\n> > set of arguments, like the smallest and largest relfilenode values\n> > from the old database, and then we'd just need to move things that\n> > overlap. But that feels pretty hit-or-miss to me as to whether it\n> > actually avoids any work, and\n> > pg_binary_upgrade_move_things_out_of_the_way() might also be annoying\n> > to write. So perhaps we have to go back to the drawing board here.\n>\n> So as of now, we have two open options 1) the current approach and\n> what patch is following to use Oid as relfilenode for the system\n> tables when initially created. 2) call\n> pg_binary_upgrade_move_things_out_of_the_way() which force rewrite all\n> the system tables.\n>\n> Another idea that I am not very sure how feasible is. 
Can we change\n> the dump such that in binary upgrade mode it will not use template0 as\n> a template database (in creating database command) but instead some\n> new database as a template e.g. template-XYZ? And later for conflict\n> checking, we will create this template-XYZ database on the new cluster\n> and then we will perform all the conflict check (from all the\n> databases of the old cluster) and rewrite operations on this database.\n> And later all the databases will be created using template-XYZ as the\n> template and all the rewriting stuff we have done is still intact.\n> The problems I could think of are 1) only for a binary upgrade we will\n> have to change the pg_dump. 2) we will have to use another database\n> name as the reserved database name but what if that name is already in\n> use in the previous cluster?\n\nWhile we are still thinking on this issue, I have rebased the patch on\nthe latest head and fixed a couple of minor issues.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 3 Sep 2022 14:26:00 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 6:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Aug 26, 2022 at 9:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Fri, Aug 26, 2022 at 7:01 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > While working on this solution I noticed one issue. Basically, the\n> > > problem is that during binary upgrade when we try to rewrite a heap we\n> > > would expect that “binary_upgrade_next_heap_pg_class_oid” and\n> > > “binary_upgrade_next_heap_pg_class_relfilenumber” are already set for\n> > > creating a new heap. But we are not preserving anything so we don't\n> > > have those values. One option to this problem is that we can first\n> > > start the postmaster in non-binary upgrade mode perform all conflict\n> > > checking and rewrite and stop the postmaster. Then start postmaster\n> > > again and perform the restore as we are doing now. Although we will\n> > > have to start/stop the postmaster one extra time we have a solution.\n> >\n> > Yeah, that seems OK. Or we could add a new function, like\n> > binary_upgrade_allow_relation_oid_and_relfilenode_assignment(bool).\n> > Not sure which way is better.\n>\n> I have found one more issue with this approach of rewriting the\n> conflicting table. Earlier I thought we could do the conflict\n> checking and rewriting inside create_new_objects() right before the\n> restore command. But after implementing (while testing) this I\n> realized that we DROP and CREATE the database while restoring the dump\n> that means it will again generate the conflicting system tables. So\n> theoretically the rewriting should go in between the CREATE DATABASE\n> and restoring the object but as of now both create database and\n> restoring other objects are part of a single dump file. 
I haven't yet\n> analyzed how feasible it is to generate the dump in two parts, first\n> part just to create the database and in second part restore the rest\n> of the object.\n>\n\nIsn't this happening because we are passing \"--clean\n--create\"/\"--create\" options to pg_restore in create_new_objects()? If\nso, then I think one idea to decouple would be to not use those\noptions. Perform drop/create separately via commands (for create, we\nneed to generate the command as we do while generating the\ndump in custom format), then rewrite the conflicting tables, and\nfinally restore the dump.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 3 Sep 2022 17:11:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sat, Sep 3, 2022 at 5:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> > I have found one more issue with this approach of rewriting the\n> > conflicting table. Earlier I thought we could do the conflict\n> > checking and rewriting inside create_new_objects() right before the\n> > restore command. But after implementing (while testing) this I\n> > realized that we DROP and CREATE the database while restoring the dump\n> > that means it will again generate the conflicting system tables. So\n> > theoretically the rewriting should go in between the CREATE DATABASE\n> > and restoring the object but as of now both create database and\n> > restoring other objects are part of a single dump file. I haven't yet\n> > analyzed how feasible it is to generate the dump in two parts, first\n> > part just to create the database and in second part restore the rest\n> > of the object.\n> >\n>\n> Isn't this happening because we are passing \"--clean\n> --create\"/\"--create\" options to pg_restore in create_new_objects()? If\n> so, then I think one idea to decouple would be to not use those\n> options. Perform drop/create separately via commands (for create, we\n> need to generate the command as we are generating while generating the\n> dump in custom format), then rewrite the conflicting tables, and\n> finally restore the dump.\n\nHmm, you are right. So I think something like this is possible to do,\nI will explore this more. Thanks for the idea.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 4 Sep 2022 09:27:44 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Sun, Sep 4, 2022 at 9:27 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Sep 3, 2022 at 5:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> > Isn't this happening because we are passing \"--clean\n> > --create\"/\"--create\" options to pg_restore in create_new_objects()? If\n> > so, then I think one idea to decouple would be to not use those\n> > options. Perform drop/create separately via commands (for create, we\n> > need to generate the command as we are generating while generating the\n> > dump in custom format), then rewrite the conflicting tables, and\n> > finally restore the dump.\n>\n> Hmm, you are right. So I think something like this is possible to do,\n> I will explore this more. Thanks for the idea.\n\nI have explored this area more and also tried to come up with a\nworking prototype, so while working on this I realized that we would\nhave almost to execute all the code which is getting generated as part\nof the dumpDatabase() and dumpACL() which is basically,\n\n1. UPDATE pg_catalog.pg_database SET datistemplate = false\n2. DROP DATABASE\n3. CREATE DATABASE with all the database properties like ENCODING,\nLOCALE_PROVIDER, LOCALE, LC_COLLATE, LC_CTYPE, ICU_LOCALE,\nCOLLATION_VERSION, TABLESPACE\n4. COMMENT ON DATABASE\n5. Logic inside dumpACL()\n\nI feel duplicating logic like this is really error-prone, but I do not\nfind any clear way to reuse the code as dumpDatabase() has a high\ndependency on the Archive handle and generating the dump file.\n\nSo currently I have implemented most of this logic except for a few\ne.g. dumpACL(), comments on the database, etc. So before we go too\nfar in this direction I wanted to know the opinions of others.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:10:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 4:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have explored this area more and also tried to come up with a\n> working prototype, so while working on this I realized that we would\n> have almost to execute all the code which is getting generated as part\n> of the dumpDatabase() and dumpACL() which is basically,\n>\n> 1. UPDATE pg_catalog.pg_database SET datistemplate = false\n> 2. DROP DATABASE\n> 3. CREATE DATABASE with all the database properties like ENCODING,\n> LOCALE_PROVIDER, LOCALE, LC_COLLATE, LC_CTYPE, ICU_LOCALE,\n> COLLATION_VERSION, TABLESPACE\n> 4. COMMENT ON DATABASE\n> 5. Logic inside dumpACL()\n>\n> I feel duplicating logic like this is really error-prone, but I do not\n> find any clear way to reuse the code as dumpDatabase() has a high\n> dependency on the Archive handle and generating the dump file.\n\nYeah, I don't think this is the way to go at all. The duplicated logic\nis likely to get broken, and is also likely to annoy the next person\nwho has to maintain it.\n\nI suggest that for now we fall back on making the initial\nRelFileNumber for a system table equal to pg_class.oid. I don't really\nlove that system and I think maybe we should change it at some point\nin the future, but all the alternatives seem too complicated to cram\nthem into the current patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 13:37:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 11:07 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Sep 6, 2022 at 4:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have explored this area more and also tried to come up with a\n> > working prototype, so while working on this I realized that we would\n> > have almost to execute all the code which is getting generated as part\n> > of the dumpDatabase() and dumpACL() which is basically,\n> >\n> > 1. UPDATE pg_catalog.pg_database SET datistemplate = false\n> > 2. DROP DATABASE\n> > 3. CREATE DATABASE with all the database properties like ENCODING,\n> > LOCALE_PROVIDER, LOCALE, LC_COLLATE, LC_CTYPE, ICU_LOCALE,\n> > COLLATION_VERSION, TABLESPACE\n> > 4. COMMENT ON DATABASE\n> > 5. Logic inside dumpACL()\n> >\n> > I feel duplicating logic like this is really error-prone, but I do not\n> > find any clear way to reuse the code as dumpDatabase() has a high\n> > dependency on the Archive handle and generating the dump file.\n>\n> Yeah, I don't think this is the way to go at all. The duplicated logic\n> is likely to get broken, and is also likely to annoy the next person\n> who has to maintain it.\n>\n\nRight\n\n\n> I suggest that for now we fall back on making the initial\n> RelFileNumber for a system table equal to pg_class.oid. I don't really\n> love that system and I think maybe we should change it at some point\n> in the future, but all the alternatives seem too complicated to cram\n> them into the current patch.\n>\n\nThat makes sense.\n\nOn a separate note, while reviewing the latest patch I see there is some\nrisk of using the unflushed relfilenumber in GetNewRelFileNumber()\nfunction. Basically, in the current code, the flushing logic is tightly\ncoupled with the logging new relfilenumber logic and that might not work\nwith all the values of the VAR_RELNUMBER_NEW_XLOG_THRESHOLD. 
So the idea\nis we need to keep the flushing logic separate from the logging, I am\nworking on the idea and I will post the patch soon.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 8 Sep 2022 16:10:28 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 4:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On a separate note, while reviewing the latest patch I see there is some risk of using the unflushed relfilenumber in GetNewRelFileNumber() function. Basically, in the current code, the flushing logic is tightly coupled with the logging new relfilenumber logic and that might not work with all the values of the VAR_RELNUMBER_NEW_XLOG_THRESHOLD. So the idea is we need to keep the flushing logic separate from the logging, I am working on the idea and I will post the patch soon.\n\nI have fixed the issue, so now we will track nextRelFileNumber,\nloggedRelFileNumber and flushedRelFileNumber. So whenever\nnextRelFileNumber is just VAR_RELNUMBER_NEW_XLOG_THRESHOLD behind the\nloggedRelFileNumber we will log VAR_RELNUMBER_PER_XLOG more\nrelfilenumbers. And whenever nextRelFileNumber reaches the\nflushedRelFileNumber then we will do XlogFlush for WAL upto the last\nloggedRelFileNumber. Ideally flushedRelFileNumber should always be\nVAR_RELNUMBER_PER_XLOG number behind the loggedRelFileNumber so we can\navoid tracking the flushedRelFileNumber. But I feel keeping track of\nthe flushedRelFileNumber looks cleaner and easier to understand. For\nmore details refer to the code in GetNewRelFileNumber().\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Sep 2022 15:32:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 3:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Sep 8, 2022 at 4:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > On a separate note, while reviewing the latest patch I see there is some risk of using the unflushed relfilenumber in GetNewRelFileNumber() function. Basically, in the current code, the flushing logic is tightly coupled with the logging new relfilenumber logic and that might not work with all the values of the VAR_RELNUMBER_NEW_XLOG_THRESHOLD. So the idea is we need to keep the flushing logic separate from the logging, I am working on the idea and I will post the patch soon.\n>\n> I have fixed the issue, so now we will track nextRelFileNumber,\n> loggedRelFileNumber and flushedRelFileNumber. So whenever\n> nextRelFileNumber is just VAR_RELNUMBER_NEW_XLOG_THRESHOLD behind the\n> loggedRelFileNumber we will log VAR_RELNUMBER_PER_XLOG more\n> relfilenumbers. And whenever nextRelFileNumber reaches the\n> flushedRelFileNumber then we will do XlogFlush for WAL upto the last\n> loggedRelFileNumber. Ideally flushedRelFileNumber should always be\n> VAR_RELNUMBER_PER_XLOG number behind the loggedRelFileNumber so we can\n> avoid tracking the flushedRelFileNumber. But I feel keeping track of\n> the flushedRelFileNumber looks cleaner and easier to understand. 
For\n> more details refer to the code in GetNewRelFileNumber().\n>\n\nHere are a few minor suggestions I came across while reading this\npatch, might be useful:\n\n+#ifdef USE_ASSERT_CHECKING\n+\n+ {\n\nUnnecessary space after USE_ASSERT_CHECKING.\n--\n\n+ return InvalidRelFileNumber; /* placate compiler */\n\nI don't think we needed this after the error on the latest branches.\n--\n\n+ LWLockAcquire(RelFileNumberGenLock, LW_SHARED);\n+ if (shutdown)\n+ checkPoint.nextRelFileNumber = ShmemVariableCache->nextRelFileNumber;\n+ else\n+ checkPoint.nextRelFileNumber = ShmemVariableCache->loggedRelFileNumber;\n+\n+ LWLockRelease(RelFileNumberGenLock);\n\nThis is done for the good reason, I think, it should have a comment\ndescribing different checkPoint.nextRelFileNumber assignment\nneed and crash recovery perspective.\n--\n\n+#define SizeOfRelFileLocatorBackend \\\n+ (offsetof(RelFileLocatorBackend, backend) + sizeof(BackendId))\n\nCan append empty parenthesis \"()\" to the macro name, to look like a\nfunction call at use or change the macro name to uppercase?\n--\n\n + if (val < 0 || val > MAX_RELFILENUMBER)\n..\n if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n\nHow about adding a macro for this condition as RelFileNumberIsValid()?\nWe can replace all the checks referring to MAX_RELFILENUMBER with this.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Tue, 20 Sep 2022 19:46:20 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> [ new patch ]\n\n+typedef pg_int64 RelFileNumber;\n\nThis seems really random to me. First, why isn't this an unsigned\ntype? OID is unsigned and I don't see a reason to change to a signed\ntype. But even if we were going to change to a signed type, why\npg_int64? That is declared like this:\n\n/* Define a signed 64-bit integer type for use in client API declarations. */\ntypedef PG_INT64_TYPE pg_int64;\n\nSurely this is not a client API declaration....\n\nNote that if we change this a lot of references to INT64_FORMAT will\nneed to become UINT64_FORMAT.\n\nI think we should use int64 at the SQL level, because we don't have an\nunsigned 64-bit SQL type, and a signed 64-bit type can hold 56 bits.\nSo it would still be Int64GetDatum((int64) rd_rel->relfilenode) or\nsimilar. But internally I think using unsigned is cleaner.\n\n+ * RelFileNumber is unique within a cluster.\n\nNot really, because of CREATE DATABASE. Probably just drop this line.\nOr else expand it: we never assign the same RelFileNumber twice within\nthe lifetime of the same cluster, but there can be multiple relations\nwith the same RelFileNumber e.g. because CREATE DATABASE duplicates\nthe RelFileNumber values from the template database. 
But maybe we\ndon't need this here, as it's already explained in relfilelocator.h.\n\n+ ret = (int8) (tag->relForkDetails[0] >> BUFTAG_RELNUM_HIGH_BITS);\n\nWhy not declare ret as ForkNumber instead of casting twice?\n\n+ uint64 relnum;\n+\n+ Assert(relnumber <= MAX_RELFILENUMBER);\n+ Assert(forknum <= MAX_FORKNUM);\n+\n+ relnum = relnumber;\n\nPerhaps it'd be better to write uint64 relnum = relnumber instead of\ninitializing on a separate line.\n\n+#define RELNUMBERCHARS 20 /* max chars printed by %llu */\n\nMaybe instead of %llu we should say UINT64_FORMAT (or INT64_FORMAT if\nthere's some reason to stick with a signed type).\n\n+ elog(ERROR, \"relfilenumber is out of bound\");\n\nIt would have to be \"out of bounds\", with an \"s\". But maybe \"is too\nlarge\" would be better.\n\n+ nextRelFileNumber = ShmemVariableCache->nextRelFileNumber;\n+ loggedRelFileNumber = ShmemVariableCache->loggedRelFileNumber;\n+ flushedRelFileNumber = ShmemVariableCache->flushedRelFileNumber;\n\nMaybe it would be a good idea to asset that next <= flushed and\nflushed <= logged?\n\n+#ifdef USE_ASSERT_CHECKING\n+\n+ {\n+ RelFileLocatorBackend rlocator;\n+ char *rpath;\n\nLet's add a comment here, like \"Because the RelFileNumber counter only\never increases and never wraps around, it should be impossible for the\nnewly-allocated RelFileNumber to already be in use. But, if Asserts\nare enabled, double check that there's no main-fork relation file with\nthe new RelFileNumber already on disk.\"\n\n+ elog(ERROR, \"cannot forward RelFileNumber during recovery\");\n\nforward -> set (or advance)\n\n+ if (relnumber >= ShmemVariableCache->loggedRelFileNumber)\n\nIt probably doesn't make any difference, but to me it seems better to\ntest flushedRelFileNumber rather than logRelFileNumber here. What do\nyou think?\n\n /*\n * We set up the lockRelId in case anything tries to lock the dummy\n- * relation. Note that this is fairly bogus since relNumber may be\n- * different from the relation's OID. 
It shouldn't really matter though.\n- * In recovery, we are running by ourselves and can't have any lock\n- * conflicts. While syncing, we already hold AccessExclusiveLock.\n+ * relation. Note we are setting relId to just FirstNormalObjectId which\n+ * is completely bogus. It shouldn't really matter though. In recovery,\n+ * we are running by ourselves and can't have any lock conflicts. While\n+ * syncing, we already hold AccessExclusiveLock.\n */\n rel->rd_lockInfo.lockRelId.dbId = rlocator.dbOid;\n- rel->rd_lockInfo.lockRelId.relId = rlocator.relNumber;\n+ rel->rd_lockInfo.lockRelId.relId = FirstNormalObjectId;\n\nBoy, this makes me uncomfortable. The existing logic is pretty bogus,\nand we're replacing it with some other bogus thing. Do we know whether\nanything actually does try to use this for locking?\n\nOne notable difference between the existing logic and your change is\nthat, with the existing logic, we use a bogus value that will differ\nfrom one relation to the next, whereas with this change, it will\nalways be the same value. Perhaps el->rd_lockInfo.lockRelId.relId =\n(Oid) rlocator.relNumber would be a more natural adaptation?\n\n+#define CHECK_RELFILENUMBER_RANGE(relfilenumber) \\\n+do { \\\n+ if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n+ ereport(ERROR, \\\n+ errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n+ errmsg(\"relfilenumber %lld is out of range\", \\\n+ (long long) (relfilenumber))); \\\n+} while (0)\n\nHere, you take the approach of casting the relfilenumber to long long\nand then using %lld. But elsewhere, you use\nINT64_FORMAT/UINT64_FORMAT. 
If we're going to use this technique, we\nought to use it everywhere.\n\n typedef struct\n {\n- Oid reltablespace;\n- RelFileNumber relfilenumber;\n-} RelfilenumberMapKey;\n-\n-typedef struct\n-{\n- RelfilenumberMapKey key; /* lookup key - must be first */\n+ RelFileNumber relfilenumber; /* lookup key - must be first */\n Oid relid; /* pg_class.oid */\n } RelfilenumberMapEntry;\n\nThis feels like a bold change. Are you sure it's safe? i.e. Are you\ncertain that there's no way that a relfilenumber could repeat within a\ndatabase? If we're going to bank on that, we could adapt this more\nheavily, e.g. RelidByRelfilenumber() could lose the reltablespace\nparameter. I think maybe we should push this change into an 0002 patch\n(or later) and have 0001 just do a minimal adaptation for the changed\ndata type.\n\n Datum\n pg_control_checkpoint(PG_FUNCTION_ARGS)\n {\n- Datum values[18];\n- bool nulls[18];\n+ Datum values[19];\n+ bool nulls[19];\n\nDocumentation updated is needed.\n\n-Note that while a table's filenode often matches its OID, this is\n-<emphasis>not</emphasis> necessarily the case; some operations, like\n+Note that table's filenode are completely different than its OID. Although for\n+system catalogs initial filenode matches with its OID, but some\noperations, like\n <command>TRUNCATE</command>, <command>REINDEX</command>,\n<command>CLUSTER</command> and some forms\n of <command>ALTER TABLE</command>, can change the filenode while\npreserving the OID.\n-Avoid assuming that filenode and table OID are the same.\n\nSuggest: Note that a table's filenode will normally be different than\nthe OID. For system tables, the initial filenode will be equal to the\ntable OID, but it will be different if the table has ever been\nsubjected to a rewriting operation, such as TRUNCATE, REINDEX,\nCLUSTER, or some forms of ALTER TABLE. For user tables, even the\ninitial filenode will be different than the table OID.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 13:14:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 10:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\nThanks for the review, please see my response inline for some of the\ncomments, rest all are accepted.\n\n> On Fri, Sep 9, 2022 at 6:02 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > [ new patch ]\n>\n> +typedef pg_int64 RelFileNumber;\n>\n> This seems really random to me. First, why isn't this an unsigned\n> type? OID is unsigned and I don't see a reason to change to a signed\n> type. But even if we were going to change to a signed type, why\n> pg_int64? That is declared like this:\n>\n> /* Define a signed 64-bit integer type for use in client API declarations. */\n> typedef PG_INT64_TYPE pg_int64;\n>\n> Surely this is not a client API declaration....\n>\n> Note that if we change this a lot of references to INT64_FORMAT will\n> need to become UINT64_FORMAT.\n>\n> I think we should use int64 at the SQL level, because we don't have an\n> unsigned 64-bit SQL type, and a signed 64-bit type can hold 56 bits.\n> So it would still be Int64GetDatum((int64) rd_rel->relfilenode) or\n> similar. But internally I think using unsigned is cleaner.\n\nYeah you are right we can make it uint64. With respect to this, we\ncan not directly use uint64 because that is declared in c.h and that\ncan not be used in\npostgres_ext.h IIUC. So what are the other option maybe we can\ntypedef the RelFIleNumber similar to what c.h done for uint64 i.e.\n\n#ifdef HAVE_LONG_INT_64\ntypedef unsigned long int uint64;\n#elif defined(HAVE_LONG_LONG_INT_64)\ntypedef long long int int64;\n#endif\n\nAnd maybe same for UINT64CONST ?\n\nI am not liking duplicating this logic but is there any better\nalternative for doing this? 
Can we move the existing definitions from\nc.h file to some common file (common for client and server)?\n\n>\n> + if (relnumber >= ShmemVariableCache->loggedRelFileNumber)\n>\n> It probably doesn't make any difference, but to me it seems better to\n> test flushedRelFileNumber rather than logRelFileNumber here. What do\n> you think?\n\nActually based on this condition are logging more so it make more\nsense to check w.r.t loggedRelFileNumber, but OTOH technically,\nwithout flushing log we are not supposed to use the relfilenumber so\nmake more sense to test flushedRelFileNumber. But since both are the\nsame I am fine with flushedRelFileNumber.\n\n> /*\n> * We set up the lockRelId in case anything tries to lock the dummy\n> - * relation. Note that this is fairly bogus since relNumber may be\n> - * different from the relation's OID. It shouldn't really matter though.\n> - * In recovery, we are running by ourselves and can't have any lock\n> - * conflicts. While syncing, we already hold AccessExclusiveLock.\n> + * relation. Note we are setting relId to just FirstNormalObjectId which\n> + * is completely bogus. It shouldn't really matter though. In recovery,\n> + * we are running by ourselves and can't have any lock conflicts. While\n> + * syncing, we already hold AccessExclusiveLock.\n> */\n> rel->rd_lockInfo.lockRelId.dbId = rlocator.dbOid;\n> - rel->rd_lockInfo.lockRelId.relId = rlocator.relNumber;\n> + rel->rd_lockInfo.lockRelId.relId = FirstNormalObjectId;\n>\n> Boy, this makes me uncomfortable. The existing logic is pretty bogus,\n> and we're replacing it with some other bogus thing. Do we know whether\n> anything actually does try to use this for locking?\n>\n> One notable difference between the existing logic and your change is\n> that, with the existing logic, we use a bogus value that will differ\n> from one relation to the next, whereas with this change, it will\n> always be the same value. 
Perhaps el->rd_lockInfo.lockRelId.relId =\n> (Oid) rlocator.relNumber would be a more natural adaptation?\n>\n> +#define CHECK_RELFILENUMBER_RANGE(relfilenumber) \\\n> +do { \\\n> + if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n> + ereport(ERROR, \\\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE), \\\n> + errmsg(\"relfilenumber %lld is out of range\", \\\n> + (long long) (relfilenumber))); \\\n> +} while (0)\n>\n> Here, you take the approach of casting the relfilenumber to long long\n> and then using %lld. But elsewhere, you use\n> INT64_FORMAT/UINT64_FORMAT. If we're going to use this technique, we\n> ought to use it everywhere.\n\nBased on the discussion [1], it seems we can not use\nINT64_FORMAT/UINT64_FORMAT while using ereport. But all other places\nI am using INT64_FORMAT/UINT64_FORMAT. Does this make sense?\n\n[1] https://www.postgresql.org/message-id/20220730113922.qd7qmenwcmzyacje%40alvherre.pgsql\n\n> typedef struct\n> {\n> - Oid reltablespace;\n> - RelFileNumber relfilenumber;\n> -} RelfilenumberMapKey;\n> -\n> -typedef struct\n> -{\n> - RelfilenumberMapKey key; /* lookup key - must be first */\n> + RelFileNumber relfilenumber; /* lookup key - must be first */\n> Oid relid; /* pg_class.oid */\n> } RelfilenumberMapEntry;\n>\n> This feels like a bold change. Are you sure it's safe? i.e. Are you\n> certain that there's no way that a relfilenumber could repeat within a\n> database?\n\nIIUC, as of now, CREATE DATABASE is the only option which can create\nthe duplicate relfilenumber but that would be in different databases.\nSo based on that theory I think it should be safe.\n\nIf we're going to bank on that, we could adapt this more\n> heavily, e.g. 
RelidByRelfilenumber() could lose the reltablespace\n> parameter.\n\nYeah we might, although we need a bool to identify whether it is\nshared relation or not.\n\nI think maybe we should push this change into an 0002 patch\n> (or later) and have 0001 just do a minimal adaptation for the changed\n> data type.\n\nYeah that make sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Sep 2022 15:39:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 3:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> Yeah you are right we can make it uint64. With respect to this, we\n> can not directly use uint64 because that is declared in c.h and that\n> can not be used in\n> postgres_ext.h IIUC. So what are the other option maybe we can\n> typedef the RelFIleNumber similar to what c.h done for uint64 i.e.\n>\n> #ifdef HAVE_LONG_INT_64\n> typedef unsigned long int uint64;\n> #elif defined(HAVE_LONG_LONG_INT_64)\n> typedef long long int int64;\n> #endif\n>\n> I am not liking duplicating this logic but is there any better\n> alternative for doing this? Can we move the existing definitions from\n> c.h file to some common file (common for client and server)?\n\nHere is the updated patch which fixes all the agreed comments. Except\nthis one which needs more thoughts, for now I have used unsigned long\nint.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 23 Sep 2022 09:53:48 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 7:46 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n\nThanks for the review\n\n> Here are a few minor suggestions I came across while reading this\n> patch, might be useful:\n>\n> +#ifdef USE_ASSERT_CHECKING\n> +\n> + {\n>\n> Unnecessary space after USE_ASSERT_CHECKING.\n\nChanged\n\n>\n> + return InvalidRelFileNumber; /* placate compiler */\n>\n> I don't think we needed this after the error on the latest branches.\n> --\n\nChanged\n\n> + LWLockAcquire(RelFileNumberGenLock, LW_SHARED);\n> + if (shutdown)\n> + checkPoint.nextRelFileNumber = ShmemVariableCache->nextRelFileNumber;\n> + else\n> + checkPoint.nextRelFileNumber = ShmemVariableCache->loggedRelFileNumber;\n> +\n> + LWLockRelease(RelFileNumberGenLock);\n>\n> This is done for the good reason, I think, it should have a comment\n> describing different checkPoint.nextRelFileNumber assignment\n> need and crash recovery perspective.\n> --\n\nDone\n\n> +#define SizeOfRelFileLocatorBackend \\\n> + (offsetof(RelFileLocatorBackend, backend) + sizeof(BackendId))\n>\n> Can append empty parenthesis \"()\" to the macro name, to look like a\n> function call at use or change the macro name to uppercase?\n> --\n\nYeah we could SizeOfXXX macros are general practice I see used\neverywhere in Postgres code so left as it is.\n\n> + if (val < 0 || val > MAX_RELFILENUMBER)\n> ..\n> if ((relfilenumber) < 0 || (relfilenumber) > MAX_RELFILENUMBER) \\\n>\n> How about adding a macro for this condition as RelFileNumberIsValid()?\n> We can replace all the checks referring to MAX_RELFILENUMBER with this.\n\nActually, RelFileNumberIsValid is used to just check whether it is\nInvalidRelFileNumber value i.e. 0. Maybe for this we can introduce\nRelFileNumberInValidRange() but I am not sure whether it would be\ncleaner than what we have now, so left as it is for now.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Sep 2022 09:57:48 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Sep 21, 2022 at 6:09 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Yeah you are right we can make it uint64. With respect to this, we\n> can not directly use uint64 because that is declared in c.h and that\n> can not be used in\n> postgres_ext.h IIUC.\n\nUgh.\n\n> Can we move the existing definitions from\n> c.h file to some common file (common for client and server)?\n\nYeah, I think that would be a good idea. Here's a quick patch that\nmoves them to common/relpath.h, which seems like a possibly-reasonable\nchoice, though perhaps you or someone else will have a better idea.\n\n> Based on the discussion [1], it seems we can not use\n> INT64_FORMAT/UINT64_FORMAT while using ereport. But all other places\n> I am using INT64_FORMAT/UINT64_FORMAT. Does this make sense?\n>\n> [1] https://www.postgresql.org/message-id/20220730113922.qd7qmenwcmzyacje%40alvherre.pgsql\n\nOh, hmm. So you're saying if the string is not translated then use\n(U)INT64_FORMAT but if it is translated then cast? I guess that makes\nsense. It feels a bit strange to have the style dependent on the\ncontext like that, but maybe it's fine. I'll reread with that idea in\nmind.\n\n> If we're going to bank on that, we could adapt this more\n> > heavily, e.g. RelidByRelfilenumber() could lose the reltablespace\n> > parameter.\n>\n> Yeah we might, although we need a bool to identify whether it is\n> shared relation or not.\n\nWhy?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 26 Sep 2022 12:26:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 9:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>\n> > Can we move the existing definitions from\n> > c.h file to some common file (common for client and server)?\n>\n> Yeah, I think that would be a good idea. Here's a quick patch that\n> moves them to common/relpath.h, which seems like a possibly-reasonable\n> choice, though perhaps you or someone else will have a better idea.\n\nLooks fine to me.\n\n> > Based on the discussion [1], it seems we can not use\n> > INT64_FORMAT/UINT64_FORMAT while using ereport. But all other places\n> > I am using INT64_FORMAT/UINT64_FORMAT. Does this make sense?\n> >\n> > [1] https://www.postgresql.org/message-id/20220730113922.qd7qmenwcmzyacje%40alvherre.pgsql\n>\n> Oh, hmm. So you're saying if the string is not translated then use\n> (U)INT64_FORMAT but if it is translated then cast?\n\nRight\n\nI guess that makes\n> sense. It feels a bit strange to have the style dependent on the\n> context like that, but maybe it's fine. I'll reread with that idea in\n> mind.\n\nOk\n\n> > If we're going to bank on that, we could adapt this more\n> > > heavily, e.g. RelidByRelfilenumber() could lose the reltablespace\n> > > parameter.\n> >\n> > Yeah we might, although we need a bool to identify whether it is\n> > shared relation or not.\n>\n> Why?\n\nBecause if entry is not in cache then we need to look into the\nrelmapper and for that we need to know whether it is a shared relation\nor not. And I don't think we can identify that just by looking at\nrelfilenumber.\n\n\nAnother open comment which I missed in last reply\n\n> /*\n> * We set up the lockRelId in case anything tries to lock the dummy\n> - * relation. Note that this is fairly bogus since relNumber may be\n> - * different from the relation's OID. It shouldn't really matter though.\n> - * In recovery, we are running by ourselves and can't have any lock\n> - * conflicts. While syncing, we already hold AccessExclusiveLock.\n> + * relation. 
Note we are setting relId to just FirstNormalObjectId which\n> + * is completely bogus. It shouldn't really matter though. In recovery,\n> + * we are running by ourselves and can't have any lock conflicts. While\n> + * syncing, we already hold AccessExclusiveLock.\n> */\n> rel->rd_lockInfo.lockRelId.dbId = rlocator.dbOid;\n> - rel->rd_lockInfo.lockRelId.relId = rlocator.relNumber;\n> + rel->rd_lockInfo.lockRelId.relId = FirstNormalObjectId;\n>\n> Boy, this makes me uncomfortable. The existing logic is pretty bogus,\n> and we're replacing it with some other bogus thing. Do we know whether\n> anything actually does try to use this for locking?\n\nLooking at the code it seems it is not used for locking. I also test\nby setting some special value for relid in\nCreateFakeRelcacheEntry() and validating that id is never used for\nlocking in SET_LOCKTAG_RELATION. And ran check-world so I could not\nsee we are ever trying to create lock tag using fake relcache entry.\n\n> One notable difference between the existing logic and your change is\n> that, with the existing logic, we use a bogus value that will differ\n> from one relation to the next, whereas with this change, it will\n> always be the same value. Perhaps el->rd_lockInfo.lockRelId.relId =\n> (Oid) rlocator.relNumber would be a more natural adaptation?\n\nI agree, so changed it this way.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 27 Sep 2022 12:03:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 2:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Looks fine to me.\n\nOK, committed. I also committed the 0002 patch with some wordsmithing,\nand I removed a < 0 test on an unsigned value because my compiler\ncomplained about it. 0001 turned out to make headerscheck sad, so I\njust pushed a fix for that, too.\n\nI'm not too sure about 0003. I think if we need an is_shared flag\nmaybe we might as well just pass the tablespace OID. The is_shared\nflag seems to just make things a bit complicated for the callers for\nno real benefit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 13:54:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi Dilip,\n\nI am very happy to see these commits. Here's some belated review for\nthe tombstone-removal patch.\n\n> v7-0004-Don-t-delay-removing-Tombstone-file-until-next.patch\n\nMore things you can remove:\n\n * sync_unlinkfiletag in struct SyncOps\n * the macro UNLINKS_PER_ABSORB\n * global variable pendingUnlinks\n\nThis comment after the question mark is obsolete:\n\n * XXX should we CHECK_FOR_INTERRUPTS in this loop?\nEscaping with an\n * error in the case of SYNC_UNLINK_REQUEST would leave the\n * no-longer-used file still present on disk, which\nwould be bad, so\n * I'm inclined to assume that the checkpointer will\nalways empty the\n * queue soon.\n\n(I think if the answer to the question is now yes, then we should\nreplace the stupid sleep with a condition variable sleep, but there's\nanother thread about that somewhere).\n\nIn a couple of places in dbcommands.c you could now make this change:\n\n /*\n- * Force a checkpoint before starting the copy. This will\nforce all dirty\n- * buffers, including those of unlogged tables, out to disk, to ensure\n- * source database is up-to-date on disk for the copy.\n- * FlushDatabaseBuffers() would suffice for that, but we also want to\n- * process any pending unlink requests. Otherwise, if a checkpoint\n- * happened while we're copying files, a file might be deleted just when\n- * we're about to copy it, causing the lstat() call in copydir() to fail\n- * with ENOENT.\n+ * Force all dirty buffers, including those of unlogged tables, out to\n+ * disk, to ensure source database is up-to-date on disk for the copy.\n */\n- RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE |\n- CHECKPOINT_WAIT |\nCHECKPOINT_FLUSH_ALL);\n+ FlushDatabaseBuffers(src_dboid);\n\nMore obsolete comments you could change:\n\n * If we were copying database at block levels then drop pages for the\n * destination database that are in the shared buffer cache. 
And tell\n--> * checkpointer to forget any pending fsync and unlink\nrequests for files\n\n--> * Tell checkpointer to forget any pending fsync and unlink requests for\n * files in the database; else the fsyncs will fail at next\ncheckpoint, or\n * worse, it will delete file\n\nIn tablespace.c I think you could now make this change:\n\n if (!destroy_tablespace_directories(tablespaceoid, false))\n {\n- /*\n- * Not all files deleted? However, there can be\nlingering empty files\n- * in the directories, left behind by for example DROP\nTABLE, that\n- * have been scheduled for deletion at next checkpoint\n(see comments\n- * in mdunlink() for details). We could just delete\nthem immediately,\n- * but we can't tell them apart from important data\nfiles that we\n- * mustn't delete. So instead, we force a checkpoint\nwhich will clean\n- * out any lingering files, and try again.\n- */\n- RequestCheckpoint(CHECKPOINT_IMMEDIATE |\nCHECKPOINT_FORCE | CHECKPOINT_WAIT);\n-\n+#ifdef WIN32\n /*\n * On Windows, an unlinked file persists in the\ndirectory listing\n * until no process retains an open handle for the\nfile. The DDL\n@@ -523,6 +513,7 @@ DropTableSpace(DropTableSpaceStmt *stmt)\n\n /* And now try again. */\n if (!destroy_tablespace_directories(tablespaceoid, false))\n+#endif\n {\n /* Still not empty, the files must be important then */\n ereport(ERROR,\n\n\n",
"msg_date": "Wed, 28 Sep 2022 16:52:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 9:23 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi Dilip,\n>\n> I am very happy to see these commits. Here's some belated review for\n> the tombstone-removal patch.\n>\n> > v7-0004-Don-t-delay-removing-Tombstone-file-until-next.patch\n>\n> More things you can remove:\n>\n> * sync_unlinkfiletag in struct SyncOps\n> * the macro UNLINKS_PER_ABSORB\n> * global variable pendingUnlinks\n>\n> This comment after the question mark is obsolete:\n>\n> * XXX should we CHECK_FOR_INTERRUPTS in this loop?\n> Escaping with an\n> * error in the case of SYNC_UNLINK_REQUEST would leave the\n> * no-longer-used file still present on disk, which\n> would be bad, so\n> * I'm inclined to assume that the checkpointer will\n> always empty the\n> * queue soon.\n>\n> (I think if the answer to the question is now yes, then we should\n> replace the stupid sleep with a condition variable sleep, but there's\n> another thread about that somewhere).\n>\n> In a couple of places in dbcommands.c you could now make this change:\n>\n> /*\n> - * Force a checkpoint before starting the copy. This will\n> force all dirty\n> - * buffers, including those of unlogged tables, out to disk, to ensure\n> - * source database is up-to-date on disk for the copy.\n> - * FlushDatabaseBuffers() would suffice for that, but we also want to\n> - * process any pending unlink requests. 
Otherwise, if a checkpoint\n> - * happened while we're copying files, a file might be deleted just when\n> - * we're about to copy it, causing the lstat() call in copydir() to fail\n> - * with ENOENT.\n> + * Force all dirty buffers, including those of unlogged tables, out to\n> + * disk, to ensure source database is up-to-date on disk for the copy.\n> */\n> - RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE |\n> - CHECKPOINT_WAIT |\n> CHECKPOINT_FLUSH_ALL);\n> + FlushDatabaseBuffers(src_dboid);\n>\n> More obsolete comments you could change:\n>\n> * If we were copying database at block levels then drop pages for the\n> * destination database that are in the shared buffer cache. And tell\n> --> * checkpointer to forget any pending fsync and unlink\n> requests for files\n>\n> --> * Tell checkpointer to forget any pending fsync and unlink requests for\n> * files in the database; else the fsyncs will fail at next\n> checkpoint, or\n> * worse, it will delete file\n>\n> In tablespace.c I think you could now make this change:\n>\n> if (!destroy_tablespace_directories(tablespaceoid, false))\n> {\n> - /*\n> - * Not all files deleted? However, there can be\n> lingering empty files\n> - * in the directories, left behind by for example DROP\n> TABLE, that\n> - * have been scheduled for deletion at next checkpoint\n> (see comments\n> - * in mdunlink() for details). We could just delete\n> them immediately,\n> - * but we can't tell them apart from important data\n> files that we\n> - * mustn't delete. So instead, we force a checkpoint\n> which will clean\n> - * out any lingering files, and try again.\n> - */\n> - RequestCheckpoint(CHECKPOINT_IMMEDIATE |\n> CHECKPOINT_FORCE | CHECKPOINT_WAIT);\n> -\n> +#ifdef WIN32\n> /*\n> * On Windows, an unlinked file persists in the\n> directory listing\n> * until no process retains an open handle for the\n> file. The DDL\n> @@ -523,6 +513,7 @@ DropTableSpace(DropTableSpaceStmt *stmt)\n>\n> /* And now try again. 
*/\n> if (!destroy_tablespace_directories(tablespaceoid, false))\n> +#endif\n> {\n> /* Still not empty, the files must be important then */\n> ereport(ERROR,\n\nThanks, Thomas, these all look fine to me. So far we have committed\nthe patch to make relfilenode 56 bits wide. The Tombstone file\nremoval patch is still pending to be committed, so when I will rebase\nthat patch I will accommodate all these comments in that patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 14:10:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 9:40 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Thanks, Thomas, these all look fine to me. So far we have committed\n> the patch to make relfilenode 56 bits wide. The Tombstone file\n> removal patch is still pending to be committed, so when I will rebase\n> that patch I will accommodate all these comments in that patch.\n\nI noticed that your new unlinking algorithm goes like this:\n\nstat(\"x\")\nstat(\"x.1\")\nstat(\"x.2\")\nstat(\"x.3\") -> ENOENT /* now we know how many segments there are */\ntruncate(\"x.2\")\nunlink(\"x.2\")\ntruncate(\"x.1\")\nunlink(\"x.1\")\ntruncate(\"x\")\nunlink(\"x\")\n\nCould you say what problem this solves, and, guessing that it's just\nthat you want the 0 file to be \"in the way\" until the other files are\ngone (at least while we're running; who knows what'll be left if you\npower-cycle), could you do it like this instead?\n\ntruncate(\"x\")\ntruncate(\"x.1\")\ntruncate(\"x.2\")\ntruncate(\"x.3\") -> ENOENT /* now we know how many segments there are */\nunlink(\"x.2\")\nunlink(\"x.1\")\nunlink(\"x\")\n\n\n",
"msg_date": "Thu, 29 Sep 2022 11:53:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Hi!\n\nI'm not in the context of this thread, but I've notice something strange by\nattempting to rebase my patch set from 64XID thread.\nAs far as I'm aware, this patch set is adding \"relfilenumber\". So, in\npg_control_checkpoint, we have next changes:\n\ndiff --git a/src/backend/utils/misc/pg_controldata.c\nb/src/backend/utils/misc/pg_controldata.c\nindex 781f8b8758..d441cd97e2 100644\n--- a/src/backend/utils/misc/pg_controldata.c\n+++ b/src/backend/utils/misc/pg_controldata.c\n@@ -79,8 +79,8 @@ pg_control_system(PG_FUNCTION_ARGS)\n Datum\n pg_control_checkpoint(PG_FUNCTION_ARGS)\n {\n- Datum values[18];\n- bool nulls[18];\n+ Datum values[19];\n+ bool nulls[19];\n TupleDesc tupdesc;\n HeapTuple htup;\n ControlFileData *ControlFile;\n@@ -129,6 +129,8 @@ pg_control_checkpoint(PG_FUNCTION_ARGS)\n XIDOID, -1, 0);\n TupleDescInitEntry(tupdesc, (AttrNumber) 18, \"checkpoint_time\",\n TIMESTAMPTZOID, -1, 0);\n+ TupleDescInitEntry(tupdesc, (AttrNumber) 19, \"next_relfilenumber\",\n+ INT8OID, -1, 0);\n tupdesc = BlessTupleDesc(tupdesc);\n\n /* Read the control file. */\n\nIn other words, we have 19 attributes. But tupdesc here is constructed for\n18 elements:\ntupdesc = CreateTemplateTupleDesc(18);\n\nIs that normal or not? Again, I'm not in this thread and if that is\ncompletely ok, I'm sorry about the noise.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 29 Sep 2022 17:50:20 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 10:50 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n> In other words, we have 19 attributes. But tupdesc here is constructed for 18 elements:\n> tupdesc = CreateTemplateTupleDesc(18);\n>\n> Is that normal or not? Again, I'm not in this thread and if that is completely ok, I'm sorry about the noise.\n\nI think that's a mistake. Thanks for the report.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 29 Sep 2022 10:57:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Sep 29, 2022 at 10:50 AM Maxim Orlov <orlovmg@gmail.com> wrote:\n>> In other words, we have 19 attributes. But tupdesc here is constructed for 18 elements:\n>> tupdesc = CreateTemplateTupleDesc(18);\n\n> I think that's a mistake. Thanks for the report.\n\nThe assertions in TupleDescInitEntry would have caught that,\nif only utils/misc/pg_controldata.c had more than zero test coverage.\nSeems like somebody ought to do something about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Sep 2022 14:39:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 02:39:44PM -0400, Tom Lane wrote:\n> The assertions in TupleDescInitEntry would have caught that,\n> if only utils/misc/pg_controldata.c had more than zero test coverage.\n> Seems like somebody ought to do something about that.\n\nWhile passing by, I have noticed this thread. We don't really care\nabout the contents returned by these functions, and one simple trick\nto check their execution is SELECT FROM. Like in the attached, for\nexample.\n--\nMichael",
"msg_date": "Fri, 30 Sep 2022 09:12:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> While passing by, I have noticed this thread. We don't really care\n> about the contents returned by these functions, and one simple trick\n> to check their execution is SELECT FROM. Like in the attached, for\n> example.\n\nHmmm ... I'd tend to do SELECT COUNT(*) FROM. But can't we provide\nany actual checks on the sanity of the output? I realize that the\noutput's far from static, but still ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Sep 2022 21:23:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 09:23:38PM -0400, Tom Lane wrote:\n> Hmmm ... I'd tend to do SELECT COUNT(*) FROM. But can't we provide\n> any actual checks on the sanity of the output? I realize that the\n> output's far from static, but still ...\n\nHonestly, checking all the fields is not that exciting, but the\nmaximum I can think of that would be portable enough is something like\nthe attached. No arithmetic operators for xid limits things a bit,\nbut at least that's something.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 21 Oct 2022 15:00:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Fri, 21 Oct 2022 at 11:31, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 29, 2022 at 09:23:38PM -0400, Tom Lane wrote:\n> > Hmmm ... I'd tend to do SELECT COUNT(*) FROM. But can't we provide\n> > any actual checks on the sanity of the output? I realize that the\n> > output's far from static, but still ...\n>\n> Honestly, checking all the fields is not that exciting, but the\n> maximum I can think of that would be portable enough is something like\n> the attached. No arithmetic operators for xid limits things a bit,\n> but at least that's something.\n>\n> Thoughts?\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\n33ab0a2a527e3af5beee3a98fc07201e555d6e45 ===\n=== applying patch ./controldata-regression-2.patch\npatching file src/test/regress/expected/misc_functions.out\nHunk #1 succeeded at 642 with fuzz 2 (offset 48 lines).\npatching file src/test/regress/sql/misc_functions.sql\nHunk #1 FAILED at 223.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/test/regress/sql/misc_functions.sql.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3711.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 17:45:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 5:45 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 21 Oct 2022 at 11:31, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Thu, Sep 29, 2022 at 09:23:38PM -0400, Tom Lane wrote:\n> > > Hmmm ... I'd tend to do SELECT COUNT(*) FROM. But can't we provide\n> > > any actual checks on the sanity of the output? I realize that the\n> > > output's far from static, but still ...\n> >\n> > Honestly, checking all the fields is not that exciting, but the\n> > maximum I can think of that would be portable enough is something like\n> > the attached. No arithmetic operators for xid limits things a bit,\n> > but at least that's something.\n> >\n> > Thoughts?\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n>\n\nBecause of the extra WAL overhead, we are not continuing with the\npatch, I will withdraw it.\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:43:43 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: making relfilenodes 56 bits"
}
] |
[
{
"msg_contents": "Greetings,\n\nI have spent a couple of days working on separated lib in PostgreSql, and I\nam facing some issues with the return of data using SPI (Server Programming\nInterface).\nI have this simple get_tuples function used in the main function foobar. My\nexample is very simple too (Using a previously created table a with a\nsingle column and with some data):\nSELECT foobar('select * from a');\nI am not able to get the array returned by get_tuples function, and I am\nthinking it's SPI_finish(). When I tried to print my array tuples itens\nafter SPI_finish(), It is not working.\n\n####################\n###### My Code ######\n####################\nstatic char**\nget_tuples(char *command) {\n int ret;\n int8 rows;\n char **tuples;\n\n SPI_connect();\n ret = SPI_exec(command, 0);\n rows = SPI_processed;\n\n tuples = palloc(sizeof(char*)*rows);\n\n if (ret > 0 && SPI_tuptable != NULL) {\n TupleDesc tupdesc = SPI_tuptable->tupdesc;\n SPITupleTable *tuptable = SPI_tuptable;\n uint64 j;\n\n for (j = 0; j < rows; j++){\n HeapTuple tuple = tuptable->vals[j];\n int i;\n for (i = 1; i <= tupdesc->natts; i++){\n char *rowid;\n rowid = SPI_getvalue(tuple, tupdesc, i);\n tuples[j] = palloc(strlen(rowid)*sizeof(char));\n tuples[j]= rowid;\n }\n }\n }\n\n // Printing my array to verify if I have all tuples, in fact I have all\nof the\n for (int i = 0; i < rows; ++i) {\n elog(INFO, \"Item: %s\", *(tuples + i));\n }\n\n pfree(command);\n SPI_finish();\n return tuples;\n}\n####################\nDatum\nfoobar (PG_FUNCTION_ARGS) {\n char *command;\n command = text_to_cstring(PG_GETARG_TEXT_PP(0));\n get_tuples(command);\n\n // *When I tried to do something like this, I am losing the connection\nwith the server (error)*\n //elog(INFO, \"*****: %s\", *(get_tuples(command) + 1));\n PG_RETURN_INT64(1);\n}\n####################\nCREATE FUNCTION foobar(text) RETURNS int8\nAS 'MODULE_PATHNAME'\nLANGUAGE C IMMUTABLE STRICT;\n####################\n\nregards,\n\n*Andjasubu Bungama, Patrick *",
"msg_date": "Thu, 4 Mar 2021 17:42:02 -0500",
"msg_from": "Patrick Handja <patrick.bungama@gmail.com>",
"msg_from_op": true,
"msg_subject": "SPI return"
},
{
"msg_contents": "Patrick Handja <patrick.bungama@gmail.com> writes:\n> I am not able to get the array returned by get_tuples function, and I am\n> thinking it's SPI_finish(). When I tried to print my array tuples itens\n> after SPI_finish(), It is not working.\n\nIndeed, SPI_finish() frees everything that was allocated by SPI functions,\nincluding result tuple tables. You need to do whatever it is you want\nto do with the tuples before calling SPI_finish.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 18:06:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SPI return"
}
] |
[
{
"msg_contents": "Hi.\n\nI created a patch to improve \\sleep meta command in pgbench.\n\nThere are two problems with the current pgbench implementation of \n\\sleep.\nFirst, when the input is like \"\\sleep foo s\" , this string will be \ntreated as 0 through atoi function, and no error will be raised.\nSecond, when the input is like \"\\sleep :some_bool s\" and some_bool is \nset to True or False, this bool will be treated as 0 as well.\nHowever, I think we should catch this error, so I suggest my patch to \ndetect this and raise errors.\n\nRegards.\n--\nKota Miyake",
"msg_date": "Fri, 05 Mar 2021 10:51:04 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Dear Miyake-san,\n\nI agree your suggestions and I think this patch is basically good.\nI put some comments:\n\n* When the following line is input, the error message is not happy.\n I think output should be \" \\sleep command argument must be an integer...\".\n\n\\sleep foo\n-> pgbench: fatal: test.sql:5: unrecognized time unit, must be us, ms or s (foo) in command \"sleep\"\n \\sleep foo\n ^ error found here\n\n I'm not sure but I think this is caused because `my_command->argv[2]` becomes \"foo\".\n\n* A blank is missed in some lines, for example:\n\n> +\t\t\t\tif (my_command->argc ==2)\n\n A blank should be added between a variable and an operator.\n\n\nCould you fix them?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 03:57:56 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Dear Miyake-san, \n\n> I'm not sure but I think this is caused because `my_command->argv[2]` becomes \"foo\".\n\nSorry, I missed some lines in your patch. Please ignore this analysis.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 04:10:19 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Thank you for your comments!\nI fixed my patch based on your comment, and attached it to this mail.\n\n2021-03-05 12:57, kuroda.hayato@fujitsu.com wrote:\n\n> * When the following line is input, the error message is not happy.\n> I think output should be \" \\sleep command argument must be an \n> integer...\".\n> \n> \\sleep foo\n> -> pgbench: fatal: test.sql:5: unrecognized time unit, must be us, ms\n> or s (foo) in command \"sleep\"\n> \\sleep foo\n> ^ error found here\n\nWhen argc == 2, pgbench assumes that (1) argv[1] is just a number (e.g. \n\\sleep 10) or (2) argv[1] is a pair of a number and a time unit (e.g. \n\\sleep 10ms).\nSo I fixed the problem as follows:\n1) When argv[1] starts with a number, raises an error depending on \nwhether the time unit is correct or not.\n2) When argv[1] does not starts with a number, raises an error because \nit must be an integer.\n\nWith this modification, the behavior for input should be as follows.\n\"\\sleep 10\" -> pass\n\"\\sleep 10s\" -> pass\n\"\\sleep 10foo\" -> \"unrecognized time unit\" error\n\"\\sleep foo10\" -> \"\\sleep command argument must be an integer...\" error\n\nIs this OK? Please tell me what you think.\n\n> * A blank is missed in some lines, for example:\n> \n>> +\t\t\t\tif (my_command->argc ==2)\n> \n> A blank should be added between a variable and an operator.\n\nOK! I fixed it.\n\nRegards\n--\nKota Miyake",
"msg_date": "Mon, 08 Mar 2021 10:51:27 +0900",
"msg_from": "miyake_kouta <miyake_kouta@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Dear Miyake-san,\n\nThank you for updating the patch.\n\n> When argc == 2, pgbench assumes that (1) argv[1] is just a number (e.g. \n> \\sleep 10) or (2) argv[1] is a pair of a number and a time unit (e.g. \n> \\sleep 10ms).\n\nI see.\n\n> So I fixed the problem as follows:\n> 1) When argv[1] starts with a number, raises an error depending on \n> whether the time unit is correct or not.\n> 2) When argv[1] does not starts with a number, raises an error because \n> it must be an integer.\n> \n> With this modification, the behavior for input should be as follows.\n> \"\\sleep 10\" -> pass\n> \"\\sleep 10s\" -> pass\n> \"\\sleep 10foo\" -> \"unrecognized time unit\" error\n> \"\\sleep foo10\" -> \"\\sleep command argument must be an integer...\" error\n> \n> Is this OK? Please tell me what you think.\n\nI confirmed your code and how it works, it's OK.\nFinally the message should be \"a positive integer\" in order to handle\nthe following case:\n\n\\set time -1\n\\sleep :time\n\n-> pgbench: error: \\sleep command argument must be an integer (not \"-1\")\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 05:58:04 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "\n\nOn 2021/03/08 14:58, kuroda.hayato@fujitsu.com wrote:\n> Dear Miyake-san,\n> \n> Thank you for updating the patch.\n> \n>> When argc == 2, pgbench assumes that (1) argv[1] is just a number (e.g.\n>> \\sleep 10) or (2) argv[1] is a pair of a number and a time unit (e.g.\n>> \\sleep 10ms).\n> \n> I see.\n> \n>> So I fixed the problem as follows:\n>> 1) When argv[1] starts with a number, raises an error depending on\n>> whether the time unit is correct or not.\n>> 2) When argv[1] does not starts with a number, raises an error because\n>> it must be an integer.\n>>\n>> With this modification, the behavior for input should be as follows.\n>> \"\\sleep 10\" -> pass\n>> \"\\sleep 10s\" -> pass\n>> \"\\sleep 10foo\" -> \"unrecognized time unit\" error\n>> \"\\sleep foo10\" -> \"\\sleep command argument must be an integer...\" error\n>>\n>> Is this OK? Please tell me what you think.\n> \n> I confirmed your code and how it works, it's OK.\n> Finally the message should be \"a positive integer\" in order to handle\n> the following case:\n> \n> \\set time -1\n> \\sleep :time\n> \n> -> pgbench: error: \\sleep command argument must be an integer (not \"-1\")\n\nIsn't it better to accept even negative sleep time like currently pgbench does?\nOtherwise we always need to check the variable is a positive integer\n(for example, using \\if command) when using it as the sleep time in \\sleep\ncommand. That seems inconvenient.\n\n\n+\t\t\t\t\tsyntax_error(source, lineno, my_command->first_line, my_command->argv[0],\n+\t\t\t\t\t\t\t \"\\\\sleep command argument must be an integer\",\n\nI like the error message like \"invalid sleep time, must be an integer\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 8 Mar 2021 16:16:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Dear Fujii-san, Miyake-san\n\n> Isn't it better to accept even negative sleep time like currently pgbench does?\n> Otherwise we always need to check the variable is a positive integer\n> (for example, using \\if command) when using it as the sleep time in \\sleep\n> command. That seems inconvenient.\n\nBoth of them are acceptable for me.\nBut we should write down how it works when the negative value is input if we adopt.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 8 Mar 2021 07:33:05 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "On 2021-Mar-08, kuroda.hayato@fujitsu.com wrote:\n\n> Dear Fujii-san, Miyake-san\n> \n> > Isn't it better to accept even negative sleep time like currently pgbench does?\n> > Otherwise we always need to check the variable is a positive integer\n> > (for example, using \\if command) when using it as the sleep time in \\sleep\n> > command. That seems inconvenient.\n> \n> Both of them are acceptable for me.\n> But we should write down how it works when the negative value is input if we adopt.\n\nNot sleeping at all seems a good reaction (same as for zero, I guess.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 11:10:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "\n\nOn 2021/03/08 23:10, Alvaro Herrera wrote:\n> On 2021-Mar-08, kuroda.hayato@fujitsu.com wrote:\n> \n>> Dear Fujii-san, Miyake-san\n>>\n>>> Isn't it better to accept even negative sleep time like currently pgbench does?\n>>> Otherwise we always need to check the variable is a positive integer\n>>> (for example, using \\if command) when using it as the sleep time in \\sleep\n>>> command. That seems inconvenient.\n>>\n>> Both of them are acceptable for me.\n>> But we should write down how it works when the negative value is input if we adopt.\n> \n> Not sleeping at all seems a good reaction (same as for zero, I guess.)\n\n+1. BTW, IIUC currently \\sleep works in that way.\n\nRegards, \n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 9 Mar 2021 00:54:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "On 2021/03/09 0:54, Fujii Masao wrote:\n> \n> \n> On 2021/03/08 23:10, Alvaro Herrera wrote:\n>> On 2021-Mar-08, kuroda.hayato@fujitsu.com wrote:\n>>\n>>> Dear Fujii-san, Miyake-san\n>>>\n>>>> Isn't it better to accept even negative sleep time like currently pgbench does?\n>>>> Otherwise we always need to check the variable is a positive integer\n>>>> (for example, using \\if command) when using it as the sleep time in \\sleep\n>>>> command. That seems inconvenient.\n>>>\n>>> Both of them are acceptable for me.\n>>> But we should write down how it works when the negative value is input if we adopt.\n>>\n>> Not sleeping at all seems a good reaction (same as for zero, I guess.)\n> \n> +1. BTW, IIUC currently \\sleep works in that way.\n\nAttached is the updated version of the patch.\n\nCurrently a variable whose value is a negative number is allowed to be\nspecified as a sleep time as follows. In this case \\sleep command doesn't\nsleep. The patch doesn't change this behavior at all.\n\n \\set hoge -1\n \\sleep :hoge s\n\nCurrently a variable whose value is a double is allowed to be specified as\na sleep time as follows. In this case the integer value that atoi() converts\nthe double number into is used as a sleep time. The patch also doesn't\nchange this behavior at all because I've not found any strong reason to\nban that usage.\n\n \\set hoge 10 / 4.0\n \\sleep :hoge s\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 17 Mar 2021 01:49:51 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Dear Fujii-san,\r\n\r\nThank you for updating the patch.\r\nI understand that you don't want to change the current specification.\r\n\r\n```diff\r\n+ if (usec == 0)\r\n+ {\r\n+ char *c = var;\r\n+\r\n+ /* Skip sign */\r\n+ if (*c == '+' || *c == '-')\r\n+ c++;\r\n```\r\n\r\nIn my understanding the skip is not necessary, because\r\nplus sign is already removed in the executeMetaCommand() and minus value can be returned by atoi().\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 17 Mar 2021 07:40:30 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "On 2021/03/17 16:40, kuroda.hayato@fujitsu.com wrote:\n> Dear Fujii-san,\n> \n> Thank you for updating the patch.\n\nThanks for the review!\n\n\n> I understand that you don't want to change the current specification.\n> \n> ```diff\n> + if (usec == 0)\n> + {\n> + char *c = var;\n> +\n> + /* Skip sign */\n> + if (*c == '+' || *c == '-')\n> + c++;\n> ```\n> \n> In my understanding the skip is not necessary, because\n> plus sign is already removed in the executeMetaCommand() and minus value can be returned by atoi().\n\nYes, you're right. I removed that check from the patch.\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 18 Mar 2021 16:32:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "Dear Fujii-san,\r\n\r\nI confirmed your patch and some parse functions, and I agree\r\nthe check condition in evaluateSleep() is correct.\r\nNo problem is found.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 19 Mar 2021 01:02:20 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "\n\nOn 2021/03/19 10:02, kuroda.hayato@fujitsu.com wrote:\n> Dear Fujii-san,\n> \n> I confirmed your patch and some parse functions, and I agree\n> the check condition in evaluateSleep() is correct.\n> No problem is found.\n\nThanks for reviewing the patch!\nBarring any objection, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Mar 2021 10:43:08 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
},
{
"msg_contents": "\n\nOn 2021/03/19 10:43, Fujii Masao wrote:\n> \n> \n> On 2021/03/19 10:02, kuroda.hayato@fujitsu.com wrote:\n>> Dear Fujii-san,\n>>\n>> I confirmed your patch and some parse functions, and I agree\n>> the check condition in evaluateSleep() is correct.\n>> No problem is found.\n> \n> Thanks for reviewing the patch!\n> Barring any objection, I will commit this patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 22 Mar 2021 12:06:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pgbench: improve \\sleep meta command"
}
] |
[
{
"msg_contents": "Hello.\n\nI noticed that 011_crash_recovery.pl intermittently (that being said,\none out of three or so on my environment) fails in the second test.\n\n> t/011_crash_recovery.pl .. 2/3 \n> # Failed test 'new xid after restart is greater'\n> # at t/011_crash_recovery.pl line 56.\n> # '539'\n> # >\n> # '539'\n> \n> # Failed test 'xid is aborted after crash'\n> # at t/011_crash_recovery.pl line 60.\n> # got: 'committed'\n> # expected: 'aborted'\n> # Looks like you failed 2 tests of 3.\n> t/011_crash_recovery.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n> Failed 2/3 subtests \n> \n> Test Summary Report\n> -------------------\n> t/011_crash_recovery.pl (Wstat: 512 Tests: 3 Failed: 2)\n> Failed tests: 2-3\n> Non-zero exit status: 2\n> Files=1, Tests=3, 3 wallclock secs ( 0.03 usr 0.01 sys + 1.90 cusr 0.39 csys = 2.33 CPU)\n> Result: FAIL\n\nIf the server crashed before emitting WAL records for the transaction\njust started, the restarted server cannot know the xid is even\nstarted. I'm not sure that is the intention of the test but we must\nmake sure the WAL to be emitted before crashing. CHECKPOINT ensures\nthat.\n\nThoughts? The attached seems to stabilize the test for me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 05 Mar 2021 11:50:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I noticed that 011_crash_recovery.pl intermittently (that being said,\n> one out of three or so on my environment) fails in the second test.\n\nHmmm ... what environment is that? This test script hasn't changed\nmeaningfully in several years, and we have not seen any real issues\nwith it up to now.\n\n> If the server crashed before emitting WAL records for the transaction\n> just started, the restarted server cannot know the xid is even\n> started. I'm not sure that is the intention of the test but we must\n> make sure the WAL to be emitted before crashing. CHECKPOINT ensures\n> that.\n\nThe original commit for this test says\n\n----\ncommit 857ee8e391ff6654ef9dcc5dd8b658d7709d0a3c\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Fri Mar 24 12:00:53 2017 -0400\n\n Add a txid_status function.\n \n If your connection to the database server is lost while a COMMIT is\n in progress, it may be difficult to figure out whether the COMMIT was\n successful or not. This function will tell you, provided that you\n don't wait too long to ask. It may be useful in other situations,\n too.\n \n Craig Ringer, reviewed by Simon Riggs and by me\n \n Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com\n----\n\nIf the function needs a CHECKPOINT to give a reliable answer,\nis it actually good for the claimed purpose?\n\nIndependently of that, I doubt that adding a checkpoint call\nafter the pg_current_xact_id() call is going to help. The\nPerl script is able to move on as soon as it's read the\nfunction result. If we need this hack, it has to be put\nbefore that SELECT, AFAICS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 22:32:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Thu, Mar 4, 2021 at 7:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmmm ... what environment is that? This test script hasn't changed\n> meaningfully in several years, and we have not seen any real issues\n> with it up to now.\n\nDid you see this recent thread?\n\nhttps://postgr.es/m/20210208215206.mqmrnpkaqrdtm7fj@alap3.anarazel.de\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 4 Mar 2021 19:41:06 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Mar 4, 2021 at 7:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmmm ... what environment is that? This test script hasn't changed\n>> meaningfully in several years, and we have not seen any real issues\n>> with it up to now.\n\n> Did you see this recent thread?\n> https://postgr.es/m/20210208215206.mqmrnpkaqrdtm7fj@alap3.anarazel.de\n\nHadn't paid much attention at the time, but yeah, it looks like Andres\ntripped over some variant of this.\n\nI'd be kind of inclined to remove this test script altogether, on the\ngrounds that it's wasting cycles on a function that doesn't really\ndo what is claimed (and we should remove the documentation claim, too).\n\nHaving said that, it's still true that this test has been stable in\nthe buildfarm. Andres explained what he had to perturb to make it\nfail, so I'm still interested in what Horiguchi-san did to break it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 23:02:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "I wrote:\n> I'd be kind of inclined to remove this test script altogether, on the\n> grounds that it's wasting cycles on a function that doesn't really\n> do what is claimed (and we should remove the documentation claim, too).\n\nAlternatively, maybe we can salvage the function's usefulness by making it\nflush WAL before returning?\n\nIf we go that route, then we have the opposite problem with respect\nto the test script: rather than trying to make it paper over the\nfunction's problems, we ought to try to make it reliably fail with\nthe code as it stands.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 23:10:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Thu, 04 Mar 2021 23:02:09 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Thu, Mar 4, 2021 at 7:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hmmm ... what environment is that? This test script hasn't changed\n> >> meaningfully in several years, and we have not seen any real issues\n> >> with it up to now.\n> \n> > Did you see this recent thread?\n> > https://postgr.es/m/20210208215206.mqmrnpkaqrdtm7fj@alap3.anarazel.de\n> \n> Hadn't paid much attention at the time, but yeah, it looks like Andres\n> tripped over some variant of this.\n> \n> I'd be kind of inclined to remove this test script altogether, on the\n> grounds that it's wasting cycles on a function that doesn't really\n> do what is claimed (and we should remove the documentation claim, too).\n> \n> Having said that, it's still true that this test has been stable in\n> the buildfarm. Andres explained what he had to perturb to make it\n> fail, so I'm still interested in what Horiguchi-san did to break it.\n\nCONFIGURE = '--enable-debug' '--enable-cassert' '--enable-tap-tests' '--enable-depend' '--enable-nls' '--with-icu' '--with-openssl' '--with-libxml' '--with-uuid=e2fs' '--with-tcl' '--with-llvm' '--prefix=/home/horiguti/bin/pgsql_work' 'LLVM_CONFIG=/usr/bin/llvm-config-64' 'CC=/usr/lib64/ccache/gcc' 'CLANG=/usr/lib64/ccache/clang' 'CFLAGS=-O0' '--with-wal-blocksize=16'\n\nthe WAL block size might have affected. I'll recheck without it.\n\nFWIW xfs/CentOS8/VirtuaBox/Windows10\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 13:13:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Fri, 05 Mar 2021 13:13:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 04 Mar 2021 23:02:09 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > Peter Geoghegan <pg@bowt.ie> writes:\n> > > On Thu, Mar 4, 2021 at 7:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Hmmm ... what environment is that? This test script hasn't changed\n> > >> meaningfully in several years, and we have not seen any real issues\n> > >> with it up to now.\n> > \n> > > Did you see this recent thread?\n> > > https://postgr.es/m/20210208215206.mqmrnpkaqrdtm7fj@alap3.anarazel.de\n> > \n> > Hadn't paid much attention at the time, but yeah, it looks like Andres\n> > tripped over some variant of this.\n> > \n> > I'd be kind of inclined to remove this test script altogether, on the\n> > grounds that it's wasting cycles on a function that doesn't really\n> > do what is claimed (and we should remove the documentation claim, too).\n> > \n> > Having said that, it's still true that this test has been stable in\n> > the buildfarm. Andres explained what he had to perturb to make it\n> > fail, so I'm still interested in what Horiguchi-san did to break it.\n> \n> CONFIGURE = '--enable-debug' '--enable-cassert' '--enable-tap-tests' '--enable-depend' '--enable-nls' '--with-icu' '--with-openssl' '--with-libxml' '--with-uuid=e2fs' '--with-tcl' '--with-llvm' '--prefix=/home/horiguti/bin/pgsql_work' 'LLVM_CONFIG=/usr/bin/llvm-config-64' 'CC=/usr/lib64/ccache/gcc' 'CLANG=/usr/lib64/ccache/clang' 'CFLAGS=-O0' '--with-wal-blocksize=16'\n> \n> the WAL block size might have affected. I'll recheck without it.\n\nOk, I don't see the failure. It guess that the WAL records for the\nlast transaction crosses a block boundary with 8kB WAL blocks, but not\nwith 16kB blocks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 13:21:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 5:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > I'd be kind of inclined to remove this test script altogether, on the\n> > grounds that it's wasting cycles on a function that doesn't really\n> > do what is claimed (and we should remove the documentation claim, too).\n>\n> Alternatively, maybe we can salvage the function's usefulness by making it\n> flush WAL before returning?\n\nTo make pg_xact_status()'s result reliable, don't you need to make\npg_current_xact_id() flush? In other words, isn't it at the point\nthat you *observe* the transaction that you have to make sure that\nthis transaction ID won't be reused after crash recovery. Before\nthat, it's simultaneously allocated and unallocated, like the cat.\n\n\n",
"msg_date": "Fri, 5 Mar 2021 17:30:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Mar 5, 2021 at 5:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Alternatively, maybe we can salvage the function's usefulness by making it\n>> flush WAL before returning?\n\n> To make pg_xact_status()'s result reliable, don't you need to make\n> pg_current_xact_id() flush? In other words, isn't it at the point\n> that you *observe* the transaction that you have to make sure that\n> this transaction ID won't be reused after crash recovery. Before\n> that, it's simultaneously allocated and unallocated, like the cat.\n\nWe need to be sure that the XID is written out to WAL before we\nlet the client see it, yeah. I've not looked to see exactly\nwhere in the code would be the best place.\n\nBTW, I tried simply removing the \"allows_streaming\" option from\nthe test, and it failed ten times out of ten tries for me.\nSo Andres is right that that makes it pretty reproducible in\na stock build.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Mar 2021 23:40:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Fri, 05 Mar 2021 13:21:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 05 Mar 2021 13:13:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Thu, 04 Mar 2021 23:02:09 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > > Having said that, it's still true that this test has been stable in\n> > > the buildfarm. Andres explained what he had to perturb to make it\n> > > fail, so I'm still interested in what Horiguchi-san did to break it.\n> > \n> > CONFIGURE = '--enable-debug' '--enable-cassert' '--enable-tap-tests' '--enable-depend' '--enable-nls' '--with-icu' '--with-openssl' '--with-libxml' '--with-uuid=e2fs' '--with-tcl' '--with-llvm' '--prefix=/home/horiguti/bin/pgsql_work' 'LLVM_CONFIG=/usr/bin/llvm-config-64' 'CC=/usr/lib64/ccache/gcc' 'CLANG=/usr/lib64/ccache/clang' 'CFLAGS=-O0' '--with-wal-blocksize=16'\n> > \n> > the WAL block size might have affected. I'll recheck without it.\n> \n> Ok, I don't see the failure. It guess that the WAL records for the\n> last transaction crosses a block boundary with 8kB WAL blocks, but not\n> with 16kB blocks.\n\nIn the failure case with 16kB WAL blocks, tx538 ends with a commit\nrecord at 0/01648B98 and nothing follows (other than the records added\nafter restart).\n\nIn the successful case, tx538 ends at the same LSN and is followed by\nINSERT@tx539 at 0/1648CE0 up to INSERT_LEAF at 0/0165BFD8-1. So tx539\njust fits inside the block (0x1648000-0x164BFFF). That amount of WAL\nmust cross an 8kB boundary.\n\nActually with 8kB blocks, tx538 ends at 0/0164B1A8 and tx539 starts at\n0/0164B2A8 and ends at 0/0165E7C8, crossing a boundary at 0/0164C000\nand 0/016E000.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 13:53:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "\n\n> On 5 Mar 2021, at 08:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> I noticed that 011_crash_recovery.pl intermittently (that being said,\n>> one out of three or so on my environment) fails in the second test.\n> \n> Hmmm ... what environment is that? This test script hasn't changed\n> meaningfully in several years, and we have not seen any real issues\n> with it up to now.\n\nMaybe it's offtopic here, but anyway...\nWhile working on \"lz4 for FPIs\" I've noticed that this test fails with wal_compression = on.\nI did not investigate the case at that moment, but I think that it would be good to run recovery regression tests with compression too.\n\nBest regards, Andrey Borodin.\n",
"msg_date": "Fri, 5 Mar 2021 10:07:06 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Thu, 04 Mar 2021 23:40:34 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> BTW, I tried simply removing the \"allows_streaming\" option from\n> the test, and it failed ten times out of ten tries for me.\n> So Andres is right that that makes it pretty reproducible in\n> a stock build.\n\nThe difference comes from the difference of shared_buffers. In the\n\"allows_streaming\" case, PostgresNode::init() *reduces* the number\ndown to '1MB'(128 blocks) which leads to only 8 XLOGbuffers, which\nwill very soon be wrap-arounded, which leads to XLogWrite.\n\nWhen allows_streaming=0 case, the default size of shared_buffers is\n128MB (16384 blocks). WAL buffer (512) doesn't get wrap-arounded\nduring the test and no WAL buffer is written out in that case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 16:51:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Fri, 5 Mar 2021 10:07:06 +0500, Andrey Borodin <x4mmm@yandex-team.ru> wrote in \n> Maybe it's offtopic here, but anyway...\n> While working on \"lz4 for FPIs\" I've noticed that this test fails with wal_compression = on.\n> I did not investigate the case at that moment, but I think that it would be good to run recovery regression tests with compression too.\n\nThe problem records has 15 pages of FPIs. Reducing of its size may\nprevent WAL-buffer wrap around and wal writes. If no wal retten the\ntest fails.\n\nregrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 16:57:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "(Sorry for my slippery fingers.)\n\nAt Fri, 5 Mar 2021 10:07:06 +0500, Andrey Borodin <x4mmm@yandex-team.ru> wrote in \n> Maybe it's offtopic here, but anyway...\n> While working on \"lz4 for FPIs\" I've noticed that this test fails with wal_compression = on.\n> I did not investigate the case at that moment, but I think that it would be good to run recovery regression tests with compression too.\n> \n> Best regards, Andrey Borodin.\n\nThe problem records have 15 pages of FPIs. The reduction of their\nsize may prevent WAL-buffer wrap around and wal writes. If no wal is\nwritten the test fails.\n\nregrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 17:00:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "\n\n> On 5 Mar 2021, at 13:00, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> The problem records have 15 pages of FPIs. The reduction of their\n> size may prevent WAL-buffer wrap around and wal writes. If no wal is\n> written the test fails.\nThanks, I've finally understood the root cause.\nSo, test verifies guarantee that is not provided (durability of aborted transactions)?\nMaybe flip it to test that transaction effects are not committed\\visible?\n\nBest regards, Andrey Borodin.\n",
"msg_date": "Fri, 5 Mar 2021 13:20:53 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Fri, 5 Mar 2021 13:20:53 +0500, Andrey Borodin <x4mmm@yandex-team.ru> wrote in \n> > On 5 Mar 2021, at 13:00, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > \n> > The problem records have 15 pages of FPIs. The reduction of their\n> > size may prevent WAL-buffer wrap around and wal writes. If no wal is\n> > written the test fails.\n> Thanks, I've finally understood the root cause.\n> So, test verifies guarantee that is not provided (durability of aborted transactions)?\n\nI think that's right.\n\n> Maybe flip it to test that transaction effects are not committed\\visible?\n\nMaybe no. The objective of the test is to check if a maybe-committed\ntransaction at crash is finally committed or aborted without directly\nconfirming the result data, I think. And that feature is found not to\nbe working as expected.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 17:41:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Fri, Mar 5, 2021 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Mar 5, 2021 at 5:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Alternatively, maybe we can salvage the function's usefulness by making it\n> >> flush WAL before returning?\n>\n> > To make pg_xact_status()'s result reliable, don't you need to make\n> > pg_current_xact_id() flush? In other words, isn't it at the point\n> > that you *observe* the transaction that you have to make sure that\n> > this transaction ID won't be reused after crash recovery. Before\n> > that, it's simultaneously allocated and unallocated, like the cat.\n>\n> We need to be sure that the XID is written out to WAL before we\n> let the client see it, yeah. I've not looked to see exactly\n> where in the code would be the best place.\n\nOne idea would be for TransactionStateData's bool member didLogXid to\nbecome an LSN, initially invalid, that points one past the first\nrecord logged for this transaction, maintained by\nMarkCurrentTransactionIdLoggedIfAny(). Then, pg_current_xact_id()\n(and any similar xid-reporting function that we deem to be an\nofficially sanctioned way for a client to \"observe\" xids) could check\nif it's valid yet; if not, it could go ahead and log something\ncontaining the xid to make it valid. Then it could flush the log up\nto that LSN.\n\n\n",
"msg_date": "Fri, 5 Mar 2021 21:48:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Fri, 05 Mar 2021 16:51:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> The difference comes from the difference of shared_buffers. In the\n> \"allows_streaming\" case, PostgresNode::init() *reduces* the number\n> down to '1MB'(128 blocks) which leads to only 8 XLOGbuffers, which\n> will very soon be wrap-arounded, which leads to XLogWrite.\n> \n> When allows_streaming=0 case, the default size of shared_buffers is\n> 128MB (16384 blocks). WAL buffer (512) doesn't get wrap-arounded\n> during the test and no WAL buffer is written out in that case.\n\nSo I think we need to remove the shared_buffers setting for the\nallows_streamig case in PostgresNode.pm\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Mar 2021 18:08:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> So I think we need to remove the shared_buffers setting for the\n> allows_streamig case in PostgresNode.pm\n\nThat would have uncertain effects on other TAP tests, so I'm disinclined\nto do it that way. 011_crash_recovery.pl doesn't actually use a standby\nserver, so just removing its use of the allows_streaming option seems\nsufficient.\n\nBut, of course, first we need a fix for the bug we now know exists.\nWas anyone volunteering to make the patch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Mar 2021 11:16:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 11:16:55AM -0500, Tom Lane wrote:\n> That would have uncertain effects on other TAP tests, so I'm disinclined\n> to do it that way.\n\n+1. There may be tests out-of-core that rely on this value as\ndefault.\n--\nMichael",
"msg_date": "Sat, 6 Mar 2021 10:25:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Sat, 6 Mar 2021 10:25:46 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Mar 05, 2021 at 11:16:55AM -0500, Tom Lane wrote:\n> > That would have uncertain effects on other TAP tests, so I'm disinclined\n> > to do it that way.\n> \n> +1. There may be tests out-of-core that rely on this value as\n> default.\n\nOn second thought, that difference can be said to have revealed the\nproblem. Agreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 08 Mar 2021 09:31:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Fri, 05 Mar 2021 11:16:55 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > So I think we need to remove the shared_buffers setting for the\n> > allows_streaming case in PostgresNode.pm\n> \n> That would have uncertain effects on other TAP tests, so I'm disinclined\n> to do it that way. 011_crash_recovery.pl doesn't actually use a standby\n> server, so just removing its use of the allows_streaming option seems\n> sufficient.\n> \n> But, of course, first we need a fix for the bug we now know exists.\n> Was anyone volunteering to make the patch?\n\nThomas' proposal sounds reasonable. If he is not going to do that for\nnow, I'm willing to work on that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 08 Mar 2021 09:39:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 1:39 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Fri, 05 Mar 2021 11:16:55 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > But, of course, first we need a fix for the bug we now know exists.\n> > Was anyone volunteering to make the patch?\n>\n> Thomas' proposal sounds reasonable. If he is not going to do that for\n> now, I'm willing to work on that.\n\nThanks! I'm afraid I wouldn't get around to it for a few weeks, so if\nyou have time, please do. (I'm not sure if it's strictly necessary to\nlog *this* xid, if a higher xid has already been logged, considering\nthat the goal is just to avoid getting confused about an xid that is\nrecycled after crash recovery, but coordinating that might be more\ncomplicated; I don't know.)\n\n\n",
"msg_date": "Mon, 8 Mar 2021 14:03:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Thanks! I'm afraid I wouldn't get around to it for a few weeks, so if\n> you have time, please do. (I'm not sure if it's strictly necessary to\n> log *this* xid, if a higher xid has already been logged, considering\n> that the goal is just to avoid getting confused about an xid that is\n> recycled after crash recovery, but coordinating that might be more\n> complicated; I don't know.)\n\nYeah, ideally the patch wouldn't add any unnecessary WAL flush,\nif there's some cheap way to determine that our XID must already\nhave been written out. But I'm not sure that it's worth adding\nany great amount of complexity to avoid that. For sure I would\nnot advocate adding any new bookkeeping overhead in the mainline\ncode paths to support it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Mar 2021 20:09:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "At Sun, 07 Mar 2021 20:09:33 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Thanks! I'm afraid I wouldn't get around to it for a few weeks, so if\n> > you have time, please do. (I'm not sure if it's strictly necessary to\n> > log *this* xid, if a higher xid has already been logged, considering\n> > that the goal is just to avoid getting confused about an xid that is\n> > recycled after crash recovery, but coordinating that might be more\n> > complicated; I don't know.)\n> \n> Yeah, ideally the patch wouldn't add any unnecessary WAL flush,\n> if there's some cheap way to determine that our XID must already\n> have been written out. But I'm not sure that it's worth adding\n> any great amount of complexity to avoid that. For sure I would\n> not advocate adding any new bookkeeping overhead in the mainline\n> code paths to support it.\n\nWe need to *write* an additional record if the current transaction\nhasn't yet written one (EnsureTopTransactionIdLogged()). One\nannoyance is the possibly most-common usage of calling\npg_current_xact_id() at the beginning of a transaction, which leads to\nan additional 8 byte-long log of XLOG_XACT_ASSIGNMENT. We could also\navoid that by detecting that a larger xid is already flushed out.\n\nI haven't found a simple and clean way to track the maximum\nflushed-out XID. The new cooperation between xlog.c and xact.c\nrelated to XID and LSN happening on shared variables makes things\ncomplex...\n\nSo the attached doesn't contain the max-flushed-xid tracking feature.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 08 Mar 2021 17:32:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 9:32 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Sun, 07 Mar 2021 20:09:33 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > Thanks! I'm afraid I wouldn't get around to it for a few weeks, so if\n> > > you have time, please do. (I'm not sure if it's strictly necessary to\n> > > log *this* xid, if a higher xid has already been logged, considering\n> > > that the goal is just to avoid getting confused about an xid that is\n> > > recycled after crash recovery, but coordinating that might be more\n> > > complicated; I don't know.)\n> >\n> > Yeah, ideally the patch wouldn't add any unnecessary WAL flush,\n> > if there's some cheap way to determine that our XID must already\n> > have been written out. But I'm not sure that it's worth adding\n> > any great amount of complexity to avoid that. For sure I would\n> > not advocate adding any new bookkeeping overhead in the mainline\n> > code paths to support it.\n>\n> We need to *write* an additional record if the current transaction\n> haven't yet written one (EnsureTopTransactionIdLogged()). One\n> annoyance is the possibly most-common usage of calling\n> pg_current_xact_id() at the beginning of a transaction, which leads to\n> an additional 8 byte-long log of XLOG_XACT_ASSIGNMENT. We could also\n> avoid that by detecting any larger xid is already flushed out.\n\nYeah, that would be very expensive for users doing that.\n\n> I haven't find a simple and clean way to tracking the maximum\n> flushed-out XID. 
The new cooperation between xlog.c and xact.c\n> related to XID and LSN happen on shared variable makes things\n> complex...\n>\n> So the attached doesn't contain the max-flushed-xid tracking feature.\n\nI guess that would be just as expensive if the user does that\nsequentially with small transactions (ie allocating xids one by one).\n\nI remembered this thread after seeing the failure of Michael's new\nbuild farm animal \"tanager\". I think we need to solve this somehow...\naccording to our documentation \"Applications might use this function,\nfor example, to determine whether their transaction committed or\naborted after the application and database server become disconnected\nwhile a COMMIT is in progress.\", but it's currently useless or\ndangerous for that purpose.\n\n\n",
"msg_date": "Wed, 25 Jan 2023 12:40:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 12:40:02PM +1300, Thomas Munro wrote:\n> I remembered this thread after seeing the failure of Michael's new\n> build farm animal \"tanager\". I think we need to solve this somehow...\n\nWell, this host has a problem, for what looks like a kernel issue, I\nguess.. This is repeatable across all the branches, randomly, with\nvarious errors with the POSIX DSM implementation:\n# [63cf68b7.5e5a:1] ERROR: could not open shared memory segment \"/PostgreSQL.543738922\": No such file or directory\n# [63cf68b7.5e58:6] ERROR: could not open shared memory segment \"/PostgreSQL.543738922\": No such file or directory\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tanager&dt=2023-01-24%2004%3A23%3A53\ndynamic_shared_memory_type should be using posix in this case.\nSwitching to mmap may help, perhaps.. I need to test it.\n\nAnyway, sorry for the noise on this one.\n--\nMichael",
"msg_date": "Wed, 25 Jan 2023 09:02:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 1:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Well, this host has a problem, for what looks like a kernel issue, I\n> guess.. This is repeatable across all the branches, randomly, with\n> various errors with the POSIX DSM implementation:\n> # [63cf68b7.5e5a:1] ERROR: could not open shared memory segment \"/PostgreSQL.543738922\": No such file or directory\n> # [63cf68b7.5e58:6] ERROR: could not open shared memory segment \"/PostgreSQL.543738922\": No such file or directory\n\nSomething to do with\nhttps://www.postgresql.org/docs/current/kernel-resources.html#SYSTEMD-REMOVEIPC\n?\n\nThe failure I saw looked like a straight up case of the bug reported\nin this thread to me.\n\n\n",
"msg_date": "Wed, 25 Jan 2023 13:20:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 01:20:39PM +1300, Thomas Munro wrote:\n> Something to do with\n> https://www.postgresql.org/docs/current/kernel-resources.html#SYSTEMD-REMOVEIPC\n> ?\n\nStill this is unrelated? This is a buildfarm instance, so the backend\ndoes not run with systemd.\n\n> The failure I saw looked like a straight up case of the bug reported\n> in this thread to me.\n\nOkay...\n--\nMichael",
"msg_date": "Wed, 25 Jan 2023 09:34:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jan 25, 2023 at 01:20:39PM +1300, Thomas Munro wrote:\n>> Something to do with\n>> https://www.postgresql.org/docs/current/kernel-resources.html#SYSTEMD-REMOVEIPC\n>> ?\n\n> Still this is unrelated? This is a buildfarm instance, so the backend\n> does not run with systemd.\n\nThat systemd behavior affects IPC resources regardless of what process\ncreated them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Jan 2023 19:42:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 07:42:06PM -0500, Tom Lane wrote:\n> That systemd behavior affects IPC resources regardless of what process\n> created them.\n\nThanks, my memory was fuzzy regarding that. I am curious if the error\nin the recovery tests will persist with that set up. The next run\nwill be in a few hours, so let's see..\n--\nMichael",
"msg_date": "Wed, 25 Jan 2023 10:32:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 10:32:10AM +0900, Michael Paquier wrote:\n> Thanks, my memory was fuzzy regarding that. I am curious if the error\n> in the recovery tests will persist with that set up. The next run\n> will be in a few hours, so let's see..\n\nSo it looks like tanager is able to reproduce the failure of this\nthread much more frequently than the other animals:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tanager&dt=2023-01-25%2003%3A05%3A05\n\nThat's interesting. FWIW, this environment is just a Raspberry PI 4\nwith 8GB of memory with clang.\n--\nMichael",
"msg_date": "Wed, 25 Jan 2023 14:04:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: 011_crash_recovery.pl intermittently fails"
}
] |
[
{
"msg_contents": "Hi, all\n\n Recently, I looked up CVE-2021-20229 on the NVD website, which describes\nthe affected PG versions as \"before 13.2, before 12.6, before 11.11, before\n10.16, before 9.6.21 and before 9.5.25\", but when I checked the official website\nof PG and the git commit log, I found only version 13 is affected, so I am\nconfused.\n\n Best regards\n\n\nNVD link:\n\nhttps://nvd.nist.gov/vuln/detail/CVE-2021-20229#vulnCurrentDescriptionTitle\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Fri, 5 Mar 2021 00:32:43 -0700 (MST)",
"msg_from": "bchen90 <bchen90@163.com>",
"msg_from_op": true,
"msg_subject": "Which PG version does CVE-2021-20229 affected?"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 12:32:43AM -0700, bchen90 wrote:\n> NVD link:\n> \n> https://nvd.nist.gov/vuln/detail/CVE-2021-20229#vulnCurrentDescriptionTitle\n\nThis link includes incorrect information. CVE-2021-20229 is only a\nproblem in 13.0 and 13.1, fixed in 13.2. Please see for example here:\nhttps://www.postgresql.org/support/security/\n\nThe commit that fixed the issue is c028faf, mentioning 9ce77d7 as the\norigin point, a commit introduced in Postgres 13.\n--\nMichael",
"msg_date": "Fri, 5 Mar 2021 16:38:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Which PG version does CVE-2021-20229 affected?"
},
{
"msg_contents": "Michael Paquier schrieb am 05.03.2021 um 08:38:\n> On Fri, Mar 05, 2021 at 12:32:43AM -0700, bchen90 wrote:\n>> NVD link:\n>>\n>> https://nvd.nist.gov/vuln/detail/CVE-2021-20229#vulnCurrentDescriptionTitle\n>\n> This link includes incorrect information. CVE-2021-20229 is only a\n> problem in 13.0 and 13.1, fixed in 13.2. Please see for example here:\n> https://www.postgresql.org/support/security/\n>\n> The commit that fixed the issue is c028faf, mentioning 9ce77d7 as the\n> origin point, a commit introduced in Postgres 13.\n\nI think the information is correct as it says \"Up to (excluding) 13.2\"\n\nI understand the \"(excluding)\" part, such that the \"excluded\" version\nis _not_ affected by it.\n\nBut it's really a confusing way to present that kind of information.\n\n\n\n",
"msg_date": "Fri, 5 Mar 2021 09:19:21 +0100",
"msg_from": "Thomas Kellerer <shammat@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Which PG version does CVE-2021-20229 affected?"
},
{
"msg_contents": "On Fri, Mar 05, 2021 at 04:38:17PM +0900, Michael Paquier wrote:\n> On Fri, Mar 05, 2021 at 12:32:43AM -0700, bchen90 wrote:\n> > NVD link:\n> > \n> > https://nvd.nist.gov/vuln/detail/CVE-2021-20229#vulnCurrentDescriptionTitle\n> \n> This link includes incorrect information. CVE-2021-20229 is only a\n> problem in 13.0 and 13.1, fixed in 13.2. Please see for example here:\n> https://www.postgresql.org/support/security/\n\nProbably because the referenced Red Hat bugzilla bug claims it's\naffecting all back branches and they scrape that info from there:\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1925296\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Jörg Folz, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Fri, 5 Mar 2021 14:16:35 +0100",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Which PG version does CVE-2021-20229 affected?"
},
{
"msg_contents": "Michael Banck <michael.banck@credativ.de> writes:\n> On Fri, Mar 05, 2021 at 04:38:17PM +0900, Michael Paquier wrote:\n>> This link includes incorrect information. CVE-2021-20229 is only a\n>> problem in 13.0 and 13.1, fixed in 13.2. Please see for example here:\n>> https://www.postgresql.org/support/security/\n\n> Probably because the referenced Red Hat bugzilla bug claims it's\n> affecting all back branches and they scrapes that info from there:\n\n> https://bugzilla.redhat.com/show_bug.cgi?id=1925296\n\nIndeed. Must have been some internal miscommunication in Red Hat,\nbecause we certainly gave them the right info when we filed for the\nCVE number. I've commented on that BZ entry, hopefully that'll be\nenough to get them to update things.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Mar 2021 09:48:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Which PG version does CVE-2021-20229 affected?"
}
] |
[
{
"msg_contents": "Dear hacker:\r\n I am a Nanjing University student, Yang. I have forked a new version of the PostgreSQL source code to develop for my own use. Here is my question: I am trying to add a new system catalog to the system backend; how can I achieve it? Is there any code or interface demonstration to show me?\r\n I am looking forward to your prompt reply. Heartfelt thanks.",
"msg_date": "Sat, 6 Mar 2021 17:01:55 +0800",
"msg_from": "\"=?gb18030?B?0e7S3bTm?=\" <1057206466@qq.com>",
"msg_from_op": true,
"msg_subject": "=?gb18030?B?SW5xdWlyaWVzIGFib3V0IFBvc3RncmVTUUwncyBz?=\n =?gb18030?B?eXN0ZW0gY2F0YWxvZyBkZXZlbG9wbWVudKGqoapm?=\n =?gb18030?B?cm9tIGEgc3R1ZGVudCBkZXZlbG9wZXIgb2YgTmFu?=\n =?gb18030?B?amluZyBVbml2ZXJzaXR5?="
},
{
"msg_contents": "\"=?gb18030?B?0e7S3bTm?=\" <1057206466@qq.com> writes:\n> I am a Nanjing University student, Yang. I have forked a newly version of PostgreSQL source code to develop for my own use. Her is my question: I am trying to add a new system catalog to the system backend, how can I reach it? Is there any code or interface demonstration to show me?\n> I am looking forward to your prompt reply. Heartfelt thanks.\n\nYou could try looking through the git history to find past commits\nthat added new system catalogs, and see what they did. Of course\nthere will be lots of details that are specific to the actual purpose\nof the new catalog, but this might be a useful guide anyway.\n\nOne point to know, if you are studying any such commit that is more\nthan a couple of years old, is that we have completely redesigned the\nway that initial catalog data is represented. Fortunately, that's\nalso documented now [1].\n\nIn general, studying the existing code to look for prototypes to copy\nis a good approach.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/bki.html\n\n\n",
"msg_date": "Sat, 06 Mar 2021 10:08:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: =?gb18030?B?SW5xdWlyaWVzIGFib3V0IFBvc3RncmVTUUwncyBz?=\n =?gb18030?B?eXN0ZW0gY2F0YWxvZyBkZXZlbG9wbWVudKGqoapm?=\n =?gb18030?B?cm9tIGEgc3R1ZGVudCBkZXZlbG9wZXIgb2YgTmFu?=\n =?gb18030?B?amluZyBVbml2ZXJzaXR5?="
},
{
"msg_contents": "\nOn Sat, 06 Mar 2021 at 17:01, 杨逸存 <1057206466@qq.com> wrote:\n> Dear hacker:\n> I am a Nanjing University student, Yang. I have forked a newly version of PostgreSQL source code to develop for my own use. Her is my question: I am trying to add a new system catalog to the system backend, how can I reach it? Is there any code or interface demonstration to show me?\n> I am looking forward to your prompt reply. Heartfelt thanks.\n\nHere is a document on how to create a new system catalog for PostgreSQL 11.\n\nhttps://blog.japinli.top/2019/08/postgresql-new-catalog/\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Sun, 07 Mar 2021 13:31:54 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inquiries about PostgreSQL's system catalog =?utf-8?Q?develop?=\n =?utf-8?Q?ment=E2=80=94=E2=80=94from?= a\n student developer of Nanjing University"
}
] |
[
{
"msg_contents": "Hi,\n\nI was looking at the 'Catalog version access' patch, by Vik Fearing. I saw a successful build by the cfbot but I could not build one here. Only then did I notice that the last apply of the patches by cfbot was on 3769e11, which is March 3rd, some 10 commits ago.\n\nThere have been no new patches; one of the patches does not apply anymore. But it's not reported on the cfbot page.\n\nIs that the way it's supposed to be? I would have thought there was a regular schedule (hourly? 3-hourly? daily?) when all patches were taken for re-apply, and re-build, so that when a patch stops applying/building/whatever it can be seen on the cfbot page.\n\nMaybe I'm just mistaken, and the cfbot is supposed to only rebuild when there is a new patch. That would be kind-of logical too, although I for one would prefer a more continuous building.\n\nCan you tell me what is the intention at the moment? Is this a cfbot bug -- or just me being inadequately informed? ;)\n\nThanks,\n\nErik Rijkers\n\n\n",
"msg_date": "Sat, 6 Mar 2021 15:00:46 +0100 (CET)",
"msg_from": "er@xs4all.nl",
"msg_from_op": true,
"msg_subject": "is cfbot's apply aging intentional?"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 03:00:46PM +0100, er@xs4all.nl wrote:\n> \n> I was looking at the 'Catalog version access' patch, by Vik Fearing. I saw a succesful build by the cfbot but I could not build one here. Only then did I notice that the last apply of the patches by cfbot was on 3769e11 which is the 3rd march, some 10 commits ago.\n> \n> There have been no new patches; one of the patches does not apply anymore. But it's not reported on the cfbot page.\n> \n> Is that the way it's supposed to be? I would have thought there was a regular schedule (hourly? 3-hourly? daily?) when all patches were taken for re-apply, and re-build, so that when a patch stops applying/building/whatever it can be seen on the cfbot page.\n> \n> Maybe I'm just mistaken, and the cfbot is supposed to only rebuild when there is a new patch. That would be kind-of logical too, although I for one would prefer a more continuous building.\n> \n> Can you tell me what is the intention at the moment? Is this a cfbot bug -- or just me being inadequately informed? ;)\n\nThe cfbot will periodically try to rebuild all open patches on the current (and\nnext) commitfest, as the main goal is to quickly spot patches that have rotten.\nBut it's running on free platforms so Thomas put some mechanisms to avoid\nconsuming too many resources. Looking at the source it won't try to rebuild\nany patch more than once per hour, and will try to have all patch rebuilt every\n2 days in a best effort, at least with default configuration. Maybe there's\nsome less aggressive setting on the deployed instance. So this patch will\nprobably be checked soon.\n\n\n",
"msg_date": "Sat, 6 Mar 2021 22:21:35 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is cfbot's apply aging intentional?"
},
{
"msg_contents": "On 2021-Mar-06, er@xs4all.nl wrote:\n\n> Is that the way it's supposed to be? I would have thought there was a regular schedule (hourly? 3-hourly? daily?) when all patches were taken for re-apply, and re-build, so that when a patch stops applying/building/whatever it can be seen on the cfbot page.\n> \n> Maybe I'm just mistaken, and the cfbot is supposed to only rebuild when there is a new patch. That would be kind-of logical too, although I for one would prefer a more continuous building.\n\nMy approach, if a patch used to apply cleanly and no longer does, is try\nto \"git checkout\" a commit at about the time it passed, and then apply\nthere. I can review and test the whole thing, and provide useful input.\nI can even attempt \"git merge\" to current branch head; sometimes the\nconflicts are ignorable, or easily fixable.\n\n(Of course, I have\n\n[merge]\n\tconflictstyle=diff3\n\nin .gitconfig, which makes conflicts much easier to deal with, though\nyou have to learn how to interpret the conflict reports.)\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Sat, 6 Mar 2021 12:28:46 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: is cfbot's apply aging intentional?"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 3:21 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Sat, Mar 06, 2021 at 03:00:46PM +0100, er@xs4all.nl wrote:\n> >\n> > I was looking at the 'Catalog version access' patch, by Vik Fearing. I saw a succesful build by the cfbot but I could not build one here. Only then did I notice that the last apply of the patches by cfbot was on 3769e11 which is the 3rd march, some 10 commits ago.\n> >\n> > There have been no new patches; one of the patches does not apply anymore. But it's not reported on the cfbot page.\n> >\n> > Is that the way it's supposed to be? I would have thought there was a regular schedule (hourly? 3-hourly? daily?) when all patches were taken for re-apply, and re-build, so that when a patch stops applying/building/whatever it can be seen on the cfbot page.\n> >\n> > Maybe I'm just mistaken, and the cfbot is supposed to only rebuild when there is a new patch. That would be kind-of logical too, although I for one would prefer a more continuous building.\n> >\n> > Can you tell me what is the intention at the moment? Is this a cfbot bug -- or just me being inadequately informed? ;)\n>\n> The cfbot will periodically try to rebuild all open patches on the current (and\n> next) commitfest, as the main goal is to quickly spot patches that have rotten.\n> But it's running on free platforms so Thomas put some mechanisms to avoid\n> consuming too many resources. Looking at the source it won't try to rebuild\n> any patch more than once per hour, and will try to have all patch rebuilt every\n> 2 days in a best effort, at least with default configuration. Maybe there's\n> some less aggressive setting on the deployed instance. So this patch will\n> probably be checked soon.\n\nRight, it currently tries to reassemble each branch every ~3 days. 
It\nactually has a target of doing it every 2 days but it can't reach that\ncurrently because it also has a constraint of not allowing more than 3\nbuilds at once (I could increase that, but I was trying to be\npolite...). It has around ~250 entries to get through, and they can\ntake up to 20 minutes to test. That said, most of that time is\nactually wasted doing stupid stuff, and tweak by tweak, the scripting\nit uses for CI builds is getting more efficient so the frequency will\nhopefully soon increase. cfbot really needs to steal a whole lot of\nfresh CI improvements from Andres, who has recently developed a set of\noptimised, small fast-start disk images compatible with Cirrus, that\nhave all the right packages pre-installed, plus lots of other\nimprovements (test parallelism, better log extraction, better core\nanalysis, ...). More on that soon, hopefully.\n\nBut ... hmm, there must be something else going wrong for Erik here.\nVik's \"Catalog version access\" patches apply and compile and test fine\nfor cfbot and for me locally, with both GNU patch (what cfbot uses) or\ngit am (what I just tested with locally).\n\n\n",
"msg_date": "Mon, 8 Mar 2021 13:30:30 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: is cfbot's apply aging intentional?"
},
{
"msg_contents": "\n> On 2021.03.08. 01:30 Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> \n> On Sun, Mar 7, 2021 at 3:21 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > On Sat, Mar 06, 2021 at 03:00:46PM +0100, er@xs4all.nl wrote:\n> > >\n> > > I was looking at the 'Catalog version access' patch, by Vik Fearing. I saw a succesful build by the cfbot but I \n\n> > > Can you tell me what is the intention at the moment? Is this a cfbot bug -- or just me being inadequately informed? ;)\n> >\n> > The cfbot will periodically try to rebuild all open patches on the current (and\n> > next) commitfest, as the main goal is to quickly spot patches that have rotten.\n> > But it's running on free platforms so Thomas put some mechanisms to avoid\n> \n> Right, it currently tries to reassemble each branch every ~3 days. It\n> actually has a target of doing it every 2 days but it can't reach that\n\nAh, I didn't realize the cycle could become 2-3 days. I assumed it to be much shorter.\n\n> But ... hmm, there must be something else going wrong for Erik here.\n> Vik's \"Catalog version access\" patches apply and compile and test fine\n\nYes, you're right, I made an (unrelated) trivial mistake.\n\nI just wanted to make sure if that apply age of three days was intentional - I now understand it can happen.\n\nThank you,\n\nErik\n\n\n",
"msg_date": "Mon, 8 Mar 2021 06:47:20 +0100 (CET)",
"msg_from": "er@xs4all.nl",
"msg_from_op": true,
"msg_subject": "Re: is cfbot's apply aging intentional?"
}
] |
[
{
"msg_contents": "Hi,\n\nIt's easy to answer the question...\n\n - What permissions are there on this specific object?\n\n...but to answer the question...\n\n - What permissions are there for a specific role in the database?\n\nyou need to manually query all relevant pg_catalog or information_schema.*_privileges views,\nwhich is a O(n) mental effort, while the first question is mentally O(1).\n\nI think this can be improved by providing humans a single pg_permissions system view\nwhich can be queried to answer the second question. This should save a lot of keyboard punches.\n\nDemo:\n\nSELECT * FROM pg_permissions WHERE 'joel' IN (grantor,grantee);\n regclass | obj_desc | grantor | grantee | privilege_type | is_grantable\n--------------+-----------------------------+---------+---------+----------------+--------------\npg_namespace | schema foo | joel | joel | USAGE | f\npg_namespace | schema foo | joel | joel | CREATE | f\npg_class | table foo.bar | joel | joel | INSERT | f\npg_class | table foo.bar | joel | joel | SELECT | f\npg_class | table foo.bar | joel | joel | UPDATE | f\npg_class | table foo.bar | joel | joel | DELETE | f\npg_class | table foo.bar | joel | joel | TRUNCATE | f\npg_class | table foo.bar | joel | joel | REFERENCES | f\npg_class | table foo.bar | joel | joel | TRIGGER | f\npg_attribute | column baz of table foo.bar | joel | joel | SELECT | f\npg_attribute | column baz of table foo.bar | joel | joel | UPDATE | f\n(11 rows)\n\nAll catalogs with _aclitem columns have been included in the view.\n\nI think a similar one for ownerships would be nice too.\nBut I'll let you digest this one first to see if the concept is fruitful.\n\n/Joel",
"msg_date": "Sat, 06 Mar 2021 20:03:17 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_permissions"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 08:03:17PM +0100, Joel Jacobson wrote:\n> Hi,\n> \n> It's easy to answer the question...\n> \n> - What permissions are there on this specific object?\n> \n> ...but to answer the question...\n> \n> - What permissions are there for a specific role in the database?\n> \n> you need to manually query all relevant pg_catalog or information_schema.*_privileges views,\n> which is a O(n) mental effort, while the first question is mentally O(1).\n> \n> I think this can be improved by providing humans a single pg_permissions system view\n> which can be queried to answer the second question. This should save a lot of keyboard punches.\n> \n> Demo:\n> \n> SELECT * FROM pg_permissions WHERE 'joel' IN (grantor,grantee);\n> regclass | obj_desc | grantor | grantee | privilege_type | is_grantable\n> --------------+-----------------------------+---------+---------+----------------+--------------\n> pg_namespace | schema foo | joel | joel | USAGE | f\n> pg_namespace | schema foo | joel | joel | CREATE | f\n> pg_class | table foo.bar | joel | joel | INSERT | f\n> pg_class | table foo.bar | joel | joel | SELECT | f\n> pg_class | table foo.bar | joel | joel | UPDATE | f\n> pg_class | table foo.bar | joel | joel | DELETE | f\n> pg_class | table foo.bar | joel | joel | TRUNCATE | f\n> pg_class | table foo.bar | joel | joel | REFERENCES | f\n> pg_class | table foo.bar | joel | joel | TRIGGER | f\n> pg_attribute | column baz of table foo.bar | joel | joel | SELECT | f\n> pg_attribute | column baz of table foo.bar | joel | joel | UPDATE | f\n> (11 rows)\n> \n> All catalogs with _aclitem columns have been included in the view.\n> \n> I think a similar one for ownerships would be nice too.\n> But I'll let you digest this one first to see if the concept is fruitful.\n\n+1 for both this and the ownerships view.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to 
Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 8 Mar 2021 01:09:22 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 02:09, David Fetter wrote:\n> +1 for both this and the ownerships view.\n> \n> Best,\n> David.\n\nI'm glad you like it.\n\nI've put some more effort into this patch, and developed a method to mechanically verify its correctness.\n\nAttached is a new patch with both pg_permissions and pg_ownerships in the same patch,\nbased on HEAD (8a812e5106c5db50039336288d376a188844e2cc).\n\nI've also added five catalogs to pg_ownerships that were discovered to be missing in the previous version:\n\npg_catalog.pg_database\npg_catalog.pg_default_acl\npg_catalog.pg_largeobject_metadata\npg_catalog.pg_publication\npg_catalog.pg_subscription\n\nHere is how I've verified correctness of complete coverage:\n\nAll catalogs with permissions have an aclitem[] column.\n\nThere are 13 such catalogs in total in HEAD:\n\nSELECT COUNT(DISTINCT table_name) FROM information_schema.columns WHERE table_schema = 'pg_catalog' AND udt_name = '_aclitem';\ncount\n-------\n 13\n(1 row)\n\nExpect the same number of rows in the patch:\n\n$ grep \"(aclexplode(aa.\" 0001-pg_permissions-and-pg_ownerships.patch | wc -l\n 13\n\nUsing the new awesome pg_get_catalog_foreign_keys() function in v14,\nwe can now query which catalogs reference pg_authid.oid,\nof which all named .*owner are known by convention to\nindicate ownership. 
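(As an aside on the aclitem[] count above: the same 13 catalogs can also be listed by name straight from the catalogs themselves; a sketch, querying pg_attribute rather than information_schema:)

```sql
-- Sketch: list the pg_catalog relations that actually hold an aclitem[]
-- column, as a cross-check of the count above.
SELECT c.relname, a.attname
FROM pg_class AS c
JOIN pg_attribute AS a ON a.attrelid = c.oid
JOIN pg_namespace AS n ON n.oid = c.relnamespace
WHERE n.nspname = 'pg_catalog'
  AND a.atttypid = 'aclitem[]'::regtype
ORDER BY 1;
```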
Let's see what other columns there are\nreferencing pg_authid.oid that could possibly also indicate ownership:\n\nSELECT\n regexp_replace(fkcols[1],'.*owner$','.*owner') AS fkcol,\n COUNT(*)\nFROM pg_get_catalog_foreign_keys()\nWHERE pktable = 'pg_authid'::regclass\nAND pkcols[1] = 'oid'\nAND cardinality(fkcols) = 1\nGROUP BY 1\nORDER BY 2 DESC;\n\n fkcol | count\n------------+-------\n.*owner | 21\ndatdba | 1\ndefaclrole | 1\ngrantor | 1\nmember | 1\npolroles | 1\nroleid | 1\nsetrole | 1\numuser | 1\n(9 rows)\n\nIf we exclude the .*owner and also look at fktable we see:\n\nSELECT *\nFROM pg_get_catalog_foreign_keys()\nWHERE pktable = 'pg_authid'::regclass\nAND pkcols[1] = 'oid'\nAND cardinality(fkcols) = 1\nAND fkcols[1] !~ '.*owner$'\n\n fktable | fkcols | pktable | pkcols | is_array | is_opt\n--------------------+--------------+-----------+--------+----------+--------\npg_database | {datdba} | pg_authid | {oid} | f | f\npg_db_role_setting | {setrole} | pg_authid | {oid} | f | t\npg_auth_members | {roleid} | pg_authid | {oid} | f | f\npg_auth_members | {member} | pg_authid | {oid} | f | f\npg_auth_members | {grantor} | pg_authid | {oid} | f | f\npg_user_mapping | {umuser} | pg_authid | {oid} | f | t\npg_policy | {polroles} | pg_authid | {oid} | t | t\npg_default_acl | {defaclrole} | pg_authid | {oid} | f | f\n(8 rows)\n\nBy reading the documentation for these catalogs,\nI've come to the conclusion these columns also indicate ownership:\n\npg_database.datdba\npg_default_acl.defaclrole\npg_policy.polroles\n\nIn total, we should expect 21+3=24 catalogs.\n\nLet's see if this matches the patch:\n\n$ grep \"pg_authid.rolname\" 0001-pg_permissions-and-pg_ownerships.patch | wc -l\n 24\n\nAll good.\n\nI note it's not very often new catalogs are added,\nso hopefully we can have a routine to update these views\nwhen new catalogs with ownership- or permission columns are added.\n\nHowever, should we ever get out of sync, we can use the method above to sort things 
out.\n\n/Joel",
"msg_date": "Mon, 08 Mar 2021 07:28:30 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 07:28, Joel Jacobson wrote:\n>Attached is a new patch with both pg_permissions and pg_ownerships in the same patch,\n>based on HEAD (8a812e5106c5db50039336288d376a188844e2cc).\n>\n>Attachments:\n>0001-pg_permissions-and-pg_ownerships.patch\n\nI forgot to update src/test/regress/expected/rules.out.\nNew patch attached.\n\n/Joel",
"msg_date": "Mon, 08 Mar 2021 08:44:20 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 3/6/21 2:03 PM, Joel Jacobson wrote:\n> ...but to answer the question...\n> \n> - What permissions are there for a specific role in the database?\n> \n> you need to manually query all relevant pg_catalog or\n> information_schema.*_privileges views,\n> which is a O(n) mental effort, while the first question is mentally O(1).\n> \n> I think this can be improved by providing humans a single pg_permissions system view\n> which can be queried to answer the second question. This should save a lot of\n> keyboard punches.\n\nWhile this is interesting and probably useful for troubleshooting, it does not\nprovide the complete picture if what you care about is something like \"what\nstuff can joel do in my database\".\n\nThe reasons for this include default grants to PUBLIC and role membership, and\neven that is convoluted by INHERIT/NOINHERIT role attributes.\n\nI won't try to describe all the implications here, but a while back I wrote a\nfairly comprehensive blog[1] about it.\n\nFWIW in the blog I reference an extension that I wrote to facilitate object and\nrole privilege inspection[2]. I have toyed with the idea of morphing that into a\nfeature I can submit for pg15, but I don't want to spend effort on the morphing\nunless there is both sufficient interest and lack of conceptual objections to\nthe feature. I'd love to hear from both sides of that scale.\n\nJoe\n\n[1]\nhttp://blog.crunchydata.com/blog/postgresql-defaults-and-impact-on-security-part-1\n[2] https://github.com/CrunchyData/crunchy_check_access\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Mon, 8 Mar 2021 09:35:52 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Mon, Mar 8, 2021, at 15:35, Joe Conway wrote:\n> While this is interesting and probably useful for troubleshooting, it does not\n> provide the complete picture if what you care about is something like \"what\n> stuff can joel do in my database\".\n\nGood point, I agree.\n\nI think that's a different more complicated use-case though.\n\nPersonally, I use these views to resolve errors like this:\n\n$ dropuser test\ndropuser: error: removal of role \"test\" failed: ERROR: role \"test\" cannot be dropped because some objects depend on it\nDETAIL: 1 object in database joel\n\nHmmm. I wonder which 1 object that could be?\n\n$ psql\n# SELECT * FROM pg_ownerships WHERE rolname = 'test';\nregclass | obj_desc | rolname\n----------+----------+---------\npg_class | table t | test\npg_type | type t | test\npg_type | type t[] | test\n(3 rows)\n\nIt could also be due to permissions, so normally I would check both pg_ownership and pg_permissions at the same time,\nsince otherwise I could possibly get the same error again:\n\n$ dropuser test\ndropuser: error: removal of role \"test\" failed: ERROR: role \"test\" cannot be dropped because some objects depend on it\nDETAIL: 1 object in database joel\n\n# SELECT * FROM pg_permissions WHERE grantee = 'test';\nregclass | obj_desc | grantor | grantee | privilege_type | is_grantable\n----------+----------+---------+---------+----------------+--------------\npg_class | table t | joel | test | INSERT | f\n(1 row)\n\nNow, this situation is probably easiest to quickly resolve using REASSIGN OWNED BY ... 
TO ...\nbut I think that command is scary, I would rather prefer to resolve it manually\nto not blindly cause problems.\n\n/Joel",
"msg_date": "Mon, 08 Mar 2021 18:14:09 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Sat, Mar 06, 2021 at 08:03:17PM +0100, Joel Jacobson wrote:\n> regclass | obj_desc | grantor | grantee |\nprivilege_type | is_grantable\n>\n--------------+-----------------------------+---------+---------+----------------+--------------\n\n1. Is there a reason not to make 'grantor' and 'grantee' of type regrole?\n In other words, s/rolname/oid::regrole/ throughout the view definition.\n It looks the same visually, but should be easier to build on in a larger\n query.\n\n Hmm, ok, a grantee of 'public' can't be expressed as a regrole. This\n seems an annoying little corner.[1] It can be represented by 0::regrole,\n but that displays as '-'. Hmm again, you can even '-'::regrole and get 0.\n\n2. Also to facilitate use in a larger query, how about columns for the\n objid and objsubid, in addition to the human-friendly obj_desc?\n And I'm not sure about using pg_attribute as the regclass for\n attributes; it's nice to look at, but could also plant the wrong idea\n that attributes have pg_attribute as their classid, when it's really\n pg_class with an objsubid. Anyway, there's the human-friendly obj_desc\n to tell you it's a column.\n\nOn 03/08/21 12:14, Joel Jacobson wrote:\n> On Mon, Mar 8, 2021, at 15:35, Joe Conway wrote:\n>> While this is interesting and probably useful for troubleshooting, it does not\n>> provide the complete picture if what you care about is something like \"what\n>> stuff can joel do in my database\".\n> \n> Good point, I agree.\n> \n> I think that's a different more complicated use-case though.\n\nI could agree that the role membership and inherit/noinherit part is\na more complicated problem that could be solved by a larger query built\nover this view (facilitated by giving grantor and grantee regrole type)\nand some recursive-CTEness with the roles.\n\nBut I think it would be useful for this view to handle the part of the story\nthat involves acldefault() when the stored aclitem[] is null. 
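(A sketch of what I mean: when the stored ACL is NULL, the implicit privileges can be synthesized with acldefault() and expanded with aclexplode(), e.g. for plain tables:)

```sql
-- Sketch: show the implicit privileges for plain tables whose relacl is NULL.
-- acldefault('r', owner) builds the ACL a relation has when none is stored.
SELECT c.oid::regclass AS rel,
       a.grantor::regrole,
       a.grantee::regrole,
       a.privilege_type,
       a.is_grantable
FROM pg_class AS c
CROSS JOIN LATERAL aclexplode(acldefault('r', c.relowner)) AS a
WHERE c.relkind = 'r'
  AND c.relacl IS NULL;
```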
I've long\nwanted a view that actually shows you all of the permissions that apply\nto something, even the ones you're supposed to Just Know, and indeed\nI wrote such a thing for $work.\n\nThen you could even query the view for an answer to the question \"what\nare all the permissions 'public' (or '-') can exercise here?\"\n\nOn 03/06/21 19:08, Joel Jacobson wrote:\n> SELECT * FROM ownerships WHERE rolname = 'joel' LIMIT 5;\n> regclass | obj_desc | rolname\n> ------------------+-----------------------------------+---------\n\nHere again, I'd repeat the suggestions to present the owner as a regrole\n(and in this case there is no need to deal with 'public'), and to include\nthe objid as well as the human-friendly obj_desc.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 8 Mar 2021 22:01:27 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 04:01, Chapman Flack wrote:\n> On Sat, Mar 06, 2021 at 08:03:17PM +0100, Joel Jacobson wrote:\n> > regclass | obj_desc | grantor | grantee |\n> privilege_type | is_grantable\n> >\n> --------------+-----------------------------+---------+---------+----------------+--------------\n> \n> 1. Is there a reason not to make 'grantor' and 'grantee' of type regrole?\n\nI considered it, but this view is tailored for human-use,\nto be used by experienced as well as beginner users.\n\n> In other words, s/rolname/oid::regrole/ throughout the view definition.\n> It looks the same visually, but should be easier to build on in a larger\n> query. \n\nIf using regrole, the users would need to know they would need to cast it to text, to search for values, e.g.:\n\nSELECT * FROM pg_permissions WHERE grantee = 'foobar';\nERROR: invalid input syntax for type oid: \"foobar\"\nLINE 1: SELECT * FROM pg_permissions WHERE grantee = 'foobar';\n\nSELECT * FROM pg_permissions WHERE grantee LIKE 'foo%';\nERROR: operator does not exist: regrole ~~ unknown\nLINE 1: SELECT * FROM pg_permissions WHERE grantee LIKE 'foo%';\n\n> 2. 
Also to facilitate use in a larger query, how about columns for the\n> objid and objsubid, in addition to the human-friendly obj_desc?\n\nI think it's good to prevent users from abusing this view,\nby not including oids and other columns needed for proper\nintegration in larger queries/systems.\n\nOtherwise there is a risk users will be sloppy and just join pg_permissions,\nwhen they really should be joining some specific catalog.\n\nAlso, lots of extra columns not needed by humans just makes the view less readable,\nsince you would more often need to \\x when the width of the output doesn't fit.\n\nPersonally, I'm on a 15\" MacBook Pro and I usually have two 117x24 terminals next to each other,\nin which both pg_permissions and pg_ownerships output usually fits fine.\n\n> And I'm not sure about using pg_attribute as the regclass for\n> attributes; it's nice to look at, but could also plant the wrong idea\n> that attributes have pg_attribute as their classid, when it's really\n> pg_class with an objsubid. Anyway, there's the human-friendly obj_desc\n> to tell you it's a column.\n\nWhile pg_class is the \"origin class\", I think we convey more meaningful information,\nby using the regclass for the table which stores the aclitem[] column,\nin your example, pg_attribute. This makes it more obvious to the user the permission\nis on some column, rather than on the table. 
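(To make that concrete with a hypothetical table: a column-level grant is stored in pg_attribute.attacl, not in pg_class.relacl:)

```sql
-- Hypothetical example: a per-column grant lands in pg_attribute.attacl,
-- while pg_class.relacl for the table stays untouched.
CREATE TABLE t (c int);
GRANT SELECT (c) ON t TO PUBLIC;

SELECT attname, attacl
FROM pg_attribute
WHERE attrelid = 't'::regclass
  AND attname = 'c';
```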
In the case where you try to drop a user\nand don't understand why you can't, and then look in pg_permissions what could be the\nreason, it's more helpful to show pg_attribute than pg_class, since you hopefully then\nunderstand you should revoke permissions for some column, and not the table.\n\nYou get this information in obj_desc as well, but I think regclass complements it nicely.\n\nAnd it's also more precise, the permission *is* really on pg_attribute,\nit just happens to be that pg_attribute has a multi-key primary key,\nwhere one of the keys is referencing pg_class.oid.\n\n> But I think it would be useful for this view to handle the part of the story\n> that involves acldefault() when the stored aclitem[] is null. I've long\n> wanted a view that actually shows you all of the permissions that apply\n> to something, even the ones you're supposed to Just Know, and indeed\n> I wrote such a thing for $work.\n> Then you could even query the view for an answer to the question \"what\n> are all the permissions 'public' (or '-') can exercise here?\"\n\nSeems useful, but maybe that's a different view/function?\nCould it be integrated into these views without increasing complexity?\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 07:34:36 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 07:34, Joel Jacobson wrote:\n> On Tue, Mar 9, 2021, at 04:01, Chapman Flack wrote:\n>> 1. Is there a reason not to make 'grantor' and 'grantee' of type regrole?\n\nHaving digested your idea, I actually agree with you.\n\nSince we have the regrole-type, I agree we should use it,\neven though we need to cast, no biggie.\n\nI realized my arguments were silly since I already exposed the class as regclass,\nwhich has the same problem.\n\nI'll send a new patch soon.\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 17:11:14 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 03/09/21 11:11, Joel Jacobson wrote:\n> On Tue, Mar 9, 2021, at 07:34, Joel Jacobson wrote:\n>> On Tue, Mar 9, 2021, at 04:01, Chapman Flack wrote:\n>>> 1. Is there a reason not to make 'grantor' and 'grantee' of type regrole?\n> \n> Having digested your idea, I actually agree with you.\n> \n> Since we have the regrole-type, I agree we should use it,\n> even though we need to cast, no biggie.\n\nThis does highlight [topicshift] one sort of\ninconvenience I've observed before in other settings: how fussy\nit may be to write WHERE grantee = 'bob' when there is no user 'bob'.\n\nA simple cast 'bob'::regrole raises undefined_object (in class\n\"Syntax Error or Access Rule Violation\") rather than just returning\nno rows because no grantee is bob.\n\nIt's a more general issue: I first noticed it when I had proudly\nimplemented my first PostgreSQL type foo that would only accept\nvalid foos as values, and the next thing that happened was my\ncolleague in frontend development wrote mean Python comments about me\nbecause he couldn't simply search for a foo in a table without either\nfirst duplicating the validation of the value or trapping the error\nif the user had entered a non-foo to search for.\n\nWe could solve that, of course, by implementing = and <> (foo,text)\nto simply return false (resp. true) if the text arg isn't castable\nto foo.\n\nBut the naïve way of writing such an operator repeats the castability\ntest for every row compared. If I were to build such an operator now,\nI might explore whether a planner support function could be used\nto check the castability once, and replace the whole comparison with\nconstant false if that fails.\n\nAnd this strikes me as a situation that might be faced often enough\nto wonder if some kind of meta-support-function would be worth supplying\nthat could do that for any type foo.\n[/topicshift]\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 9 Mar 2021 11:41:22 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Tue, Mar 9, 2021, at 04:01, Chapman Flack wrote:\n> On Sat, Mar 06, 2021 at 08:03:17PM +0100, Joel Jacobson wrote:\n> > regclass | obj_desc | grantor | grantee |\n> privilege_type | is_grantable\n> >\n> --------------+-----------------------------+---------+---------+----------------+--------------\n> \n> 1. Is there a reason not to make 'grantor' and 'grantee' of type regrole?\n> In other words, s/rolname/oid::regrole/ throughout the view definition.\n> It looks the same visually, but should be easier to build on in a larger\n> query.\n> \n> Hmm, ok, a grantee of 'public' can't be expressed as a regrole. This\n> seems an annoying little corner.[1] It can be represented by 0::regrole,\n> but that displays as '-'. Hmm again, you can even '-'::regrole and get 0.\n> \n> \n> 2. Also to facilitate use in a larger query, how about columns for the\n> objid and objsubid, in addition to the human-friendly obj_desc?\n> And I'm not sure about using pg_attribute as the regclass for\n> attributes; it's nice to look at, but could also plant the wrong idea\n> that attributes have pg_attribute as their classid, when it's really\n> pg_class with an objsubid. Anyway, there's the human-friendly obj_desc\n> to tell you it's a column.\n\nThanks for coming up with these two good ideas. 
I was wrong, they are great.\n\nBoth have now been implemented.\n\nNew patch attached.\n\nExample usage:\n\nCREATE ROLE test_user;\nCREATE ROLE test_group;\nCREATE ROLE test_owner;\nCREATE SCHEMA test AUTHORIZATION test_owner;\nGRANT ALL ON SCHEMA test TO test_group;\nGRANT test_group TO test_user;\n\nSELECT * FROM pg_permissions WHERE grantor = 'test_owner'::regrole;\n classid | objid | objsubid | objdesc | grantor | grantee | privilege_type | is_grantable\n--------------+-------+----------+-------------+------------+------------+----------------+--------------\npg_namespace | 16390 | 0 | schema test | test_owner | test_owner | USAGE | f\npg_namespace | 16390 | 0 | schema test | test_owner | test_owner | CREATE | f\npg_namespace | 16390 | 0 | schema test | test_owner | test_group | USAGE | f\npg_namespace | 16390 | 0 | schema test | test_owner | test_group | CREATE | f\n(4 rows)\n\nSET ROLE TO test_user;\nCREATE TABLE test.a ();\nRESET ROLE;\n\nSELECT * FROM pg_ownerships WHERE owner = 'test_owner'::regrole;\n classid | objid | objsubid | objdesc | owner\n--------------+-------+----------+-------------+------------\npg_namespace | 16390 | 0 | schema test | test_owner\n(1 row)\n\nALTER TABLE test.a OWNER TO test_owner;\n\nSELECT * FROM pg_ownerships WHERE owner = 'test_owner'::regrole;\n classid | objid | objsubid | objdesc | owner\n--------------+-------+----------+-------------+------------\npg_class | 16391 | 0 | table a | test_owner\npg_namespace | 16390 | 0 | schema test | test_owner\npg_type | 16393 | 0 | type a | test_owner\npg_type | 16392 | 0 | type a[] | test_owner\n(4 rows)\n\nGRANT INSERT ON test.a TO test_group;\n\nSELECT * FROM pg_permissions WHERE grantee = 'test_group'::regrole;\n classid | objid | objsubid | objdesc | grantor | grantee | privilege_type | is_grantable\n--------------+-------+----------+-------------+------------+------------+----------------+--------------\npg_class | 16391 | 0 | table a | test_owner | test_group | INSERT | 
f\npg_namespace | 16390 | 0 | schema test | test_owner | test_group | USAGE | f\npg_namespace | 16390 | 0 | schema test | test_owner | test_group | CREATE | f\n(3 rows)\n\n/Joel",
"msg_date": "Tue, 09 Mar 2021 18:48:58 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "New version attached.\n\nChanges:\n\n* Added documentation in catalogs.sgml\n* Dropped \"objsubid\" from pg_ownerships since columns have no owner, only tables\n\nDo we prefer \"pg_permissions\" or \"pg_privileges\"?\n\nI can see \"privileges\" occur 2325 times in the sources,\nwhile \"permissions\" occur only 1097 times.\n\nPersonally, I would prefer \"pg_permissions\" since it seems more common in general.\n\"database permissions\" gives 195 000 000 results on Google,\nwhile \"database privileges\" only gives 46 800 000 Google results.\n\nIf we would have consistently used only \"privileges\" so far,\nI would vote for \"pg_privileges\", but since there is already a mix,\nI think \"pg_permissions\" would be nicer; it's also easier to type correctly.\n\n/Joel",
"msg_date": "Thu, 11 Mar 2021 08:00:38 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Thu, Mar 11, 2021, at 08:00, Joel Jacobson wrote:\n> 0004-pg_permissions-and-pg_ownerships.patch\n\nHaving gotten some hands-on experience of these views for a while,\nI notice I quite often want to check the ownerships/permissions\nfor some specific type of objects, or in some specific schema.\n\nThe current patch returns pg_describe_object() as the \"objdesc\" column.\n\nWould it be a better idea to instead return the fields from pg_identify_object()?\nThis would allow specifically filtering on \"type\", \"schema\", \"name\" or \"identity\"\ninstead of having to apply a regex/LIKE on the object description.\n\n/Joel",
"msg_date": "Tue, 23 Mar 2021 21:16:20 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 2021-Mar-23, Joel Jacobson wrote:\n\n> On Thu, Mar 11, 2021, at 08:00, Joel Jacobson wrote:\n> > 0004-pg_permissions-and-pg_ownerships.patch\n> \n> Having gotten some hands-on experience of these views for a while,\n> I notice I quite often want to check the ownerships/permissions\n> for some specific type of objects, or in some specific schema.\n> \n> The current patch returns pg_describe_object() as the \"objdesc\" column.\n> \n> Would it be a better idea to instead return the fields from pg_identify_object()?\n> This would allow specifically filtering on \"type\", \"schema\", \"name\" or \"identity\"\n> instead of having to apply a regex/LIKE on the object description.\n\nFor programmatic use it is certainly better to use the object identity\nrather than the description. Particularly because the description gets\ntranslated, and the identity doesn't.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Tue, 23 Mar 2021 17:31:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 2021-Mar-08, Joel Jacobson wrote:\n\n> $ dropuser test\n> dropuser: error: removal of role \"test\" failed: ERROR: role \"test\" cannot be dropped because some objects depend on it\n> DETAIL: 1 object in database joel\n> \n> Hmmm. I wonder which 1 object that could be?\n\nBTW the easiest way to find out the answer to this question with current\ntech is to connect to database joel and attempt \"DROP USER test\"; it\nwill print a list of objects the user owns or has privileges for.\n\n> # SELECT * FROM pg_ownerships WHERE rolname = 'test';\n> # SELECT * FROM pg_permissions WHERE grantee = 'test';\n\nI wonder if these views should be defined on top of pg_shdepend instead\nof querying every single catalog. That would make for much shorter\nqueries.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Tue, 23 Mar 2021 17:39:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Tue, Mar 23, 2021, at 21:39, Alvaro Herrera wrote:\n>I wonder if these views should be defined on top of pg_shdepend instead\n>of querying every single catalog. That would make for much shorter\n>queries.\n\n+1\n\npg_shdepend doesn't contain the aclitem info though,\nso it won't work for pg_permissions if we want to expose\nprivilege_type, is_grantable and grantor.\n\npg_shdepend should work fine for pg_ownerships though.\n\nThe semantics will not be entirely the same,\nsince internal objects are not tracked in pg_shdepend,\nbut I think this is an improvement.\n\nExample:\n\ncreate role baz;\ncreate type foobar as ( foo int, bar boolean );\nalter type foobar owner to baz;\n\n-- UNION ALL variant:\n\nselect * from pg_ownerships where owner = 'baz'::regrole;\nclassid | objid | objsubid | owner | type | schema | name | identity\n----------+--------+----------+-------+----------------+--------+---------+-----------------\npg_class | 407858 | 0 | baz | composite type | public | foobar | public.foobar\npg_type | 407860 | 0 | baz | type | public | foobar | public.foobar\npg_type | 407859 | 0 | baz | type | public | _foobar | public.foobar[]\n(3 rows)\n\n-- pg_shdepend variant:\n\nselect * from pg_ownerships where owner = 'baz'::regrole;\nclassid | objid | objsubid | owner | type | schema | name | identity\n---------+--------+----------+-------+------+--------+--------+---------------\n 1247 | 407860 | 0 | baz | type | public | foobar | public.foobar\n(1 row)\n\nI'll update the patch.\n\n/Joel",
"msg_date": "Thu, 25 Mar 2021 10:48:55 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 2021-Mar-25, Joel Jacobson wrote:\n\n> pg_shdepend doesn't contain the aclitem info though,\n> so it won't work for pg_permissions if we want to expose\n> privilege_type, is_grantable and grantor.\n\nAh, of course -- the only way to obtain the acl columns is by going\nthrough the catalogs individually, so it won't be possible. I think\nthis could be fixed with some very simple, quick function pg_get_acl()\nthat takes a catalog OID and object OID and returns the ACL; then\nuse aclexplode() to obtain all those details.\n\n> The semantics will not be entirely the same,\n> since internal objects are not tracked in pg_shdepend,\n> but I think this is an improvement.\n\nI just realized that pg_shdepend will not show anything for pinned users\n(the bootstrap superuser). I *think* this is not a problem.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"E pur si muove\" (Galileo Galilei)\n\n\n",
"msg_date": "Thu, 25 Mar 2021 12:16:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n> On 2021-Mar-25, Joel Jacobson wrote:\n> \n> > pg_shdepend doesn't contain the aclitem info though,\n> > so it won't work for pg_permissions if we want to expose\n> > privilege_type, is_grantable and grantor.\n> \n> Ah, of course -- the only way to obtain the acl columns is by going\n> through the catalogs individually, so it won't be possible. I think\n> this could be fixed with some very simple, quick function pg_get_acl()\n> that takes a catalog OID and object OID and returns the ACL; then\n> use aclexplode() to obtain all those details.\n\n+1 for adding pg_get_acl().\nDo you want to write a patch for that?\nI could try implementing it otherwise, but would be good with buy-in\nfrom some more hackers on if we want these system views at all first.\n\nMaybe we can try to decide on that first,\ni.e. if we want them and what they should return?\n\nIn the meantime, if people want to try out the views,\nI've modified the patch to use pg_shdepend for pg_ownerships,\nwhile pg_permissions is still UNION ALL.\n\nBoth views now also use pg_identify_object().\n\nExample usage:\n\nCREATE ROLE test_user;\nCREATE ROLE test_group;\nCREATE ROLE test_owner;\nCREATE SCHEMA test AUTHORIZATION test_owner;\nGRANT ALL ON SCHEMA test TO test_group;\nGRANT test_group TO test_user;\n\nSELECT * FROM pg_permissions WHERE grantor = 'test_owner'::regrole;\n classid | objid | objsubid | type | schema | name | identity | grantor | grantee | privilege_type | is_grantable\n--------------+-------+----------+--------+--------+------+----------+------------+------------+----------------+--------------\npg_namespace | 37128 | 0 | schema | | test | test | test_owner | test_owner | USAGE | f\npg_namespace | 37128 | 0 | schema | | test | test | test_owner | test_owner | CREATE | f\npg_namespace | 37128 | 0 | schema | | test | test | test_owner | test_group | USAGE | f\npg_namespace | 37128 | 0 | schema | | test | test | test_owner 
| test_group | CREATE | f\n(4 rows)\n\nSET ROLE TO test_user;\nCREATE TABLE test.a ();\nRESET ROLE;\nALTER TABLE test.a OWNER TO test_owner;\n\nSELECT * FROM pg_ownerships WHERE owner = 'test_owner'::regrole;\nclassid | objid | objsubid | type | schema | name | identity | owner\n---------+-------+----------+--------+--------+------+----------+------------\n 1259 | 37129 | 0 | table | test | a | test.a | test_owner\n 2615 | 37128 | 0 | schema | | test | test | test_owner\n(2 rows)\n\nGRANT INSERT ON test.a TO test_group;\n\nSELECT * FROM pg_permissions WHERE grantee = 'test_group'::regrole;\n classid | objid | objsubid | type | schema | name | identity | grantor | grantee | privilege_type | is_grantable\n--------------+-------+----------+--------+--------+------+----------+------------+------------+----------------+--------------\npg_class | 37129 | 0 | table | test | a | test.a | test_owner | test_group | INSERT | f\npg_namespace | 37128 | 0 | schema | | test | test | test_owner | test_group | USAGE | f\npg_namespace | 37128 | 0 | schema | | test | test | test_owner | test_group | CREATE | f\n(3 rows)\n\nLooks good or can we improve them further?\n\n> \n> > The semantics will not be entirely the same,\n> > since internal objects are not tracked in pg_shdepend,\n> > but I think this is an improvement.\n> \n> I just realized that pg_shdepend will not show anything for pinned users\n> (the bootstrap superuser). I *think* this is not a problem.\n\nI also think it's not a problem.\n\nDoing a \"SELECT * FROM pg_ownerships\" would be very noisy\nif such objects would be included, as all pre-installed catalog objects would show up,\nbut by excluding them, the user will only see relevant ownerships explicitly owned by \"real\" roles.\n\nWe would get the same improvement for pg_permissions if pg_shdepend would be use there as well.\nRight now it's very noisy, as all permissions also for the bootstrap superuser are included.\n\n/Joel",
"msg_date": "Thu, 25 Mar 2021 17:46:21 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n>> Ah, of course -- the only way to obtain the acl columns is by going\n>> through the catalogs individually, so it won't be possible. I think\n>> this could be fixed with some very simple, quick function pg_get_acl()\n>> that takes a catalog OID and object OID and returns the ACL; then\n>> use aclexplode() to obtain all those details.\n\n> +1 for adding pg_get_acl().\n\nI wonder what performance will be like with lots o' objects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Mar 2021 12:51:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n> \"Joel Jacobson\" <joel@compiler.org <mailto:joel%40compiler.org>> writes:\n> > On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n> >> Ah, of course -- the only way to obtain the acl columns is by going\n> >> through the catalogs individually, so it won't be possible. I think\n> >> this could be fixed with some very simple, quick function pg_get_acl()\n> >> that takes a catalog OID and object OID and returns the ACL; then\n> >> use aclexplode() to obtain all those details.\n> \n> > +1 for adding pg_get_acl().\n> \n> I wonder what performance will be like with lots o' objects.\n\nI guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?\n\nIs your performance concern on how such switch statement will be optimized by the C-compiler?\n\nI can see how it would be annoyingly slow if the compiler would pick a branch table or binary search,\ninstead of producing a O(2) fast jump table.\n\nOn the topic of C switch statements:\n\nI think the Clang/GCC-compiler folks (anyone here?) could actually be inspired by PostgreSQL's PerfectHash.pm.\nI think the same strategy could be used in C compilers to optimize switch statements with sparse case values,\nwhich currently produce slow binary search code O(log n) while a perfect hash solution would be O(2).\n\nExample showing the unintelligent binary search code produced by GCC: https://godbolt.org/z/1G6G3vcjx (Clang is just as bad.) This is a hypothetical example with sparse case values. 
This is not the case here, since the classid case values are nicely ordered from OCLASS_CLASS..OCLASS_TRANSFORM (0..37), so they should produce O(2) fast jump tables.\n\nMaybe there is some other performance concern to reason about that I'm missing here?\n\n/Joel\nOn Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\"Joel Jacobson\" <joel@compiler.org> writes:> On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:>> Ah, of course -- the only way to obtain the acl columns is by going>> through the catalogs individually, so it won't be possible. I think>> this could be fixed with some very simple, quick function pg_get_acl()>> that takes a catalog OID and object OID and returns the ACL; then>> use aclexplode() to obtain all those details.> +1 for adding pg_get_acl().I wonder what performance will be like with lots o' objects.I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?Is your performance concern on how such switch statement will be optimized by the C-compiler?I can see how it would be annoyingly slow if the compiler would pick a branch table or binary search,instead of producing a O(2) fast jump table.On the topic of C switch statements:I think the Clang/GCC-compiler folks (anyone here?) could actually be inspired by PostgreSQL's PerfectHash.pm.I think the same strategy could be used in C compilers to optimize switch statements with sparse case values,which currently produce slow binary search code O(log n) while a perfect hash solution would be O(2).Example showing the unintelligent binary search code produced by GCC: https://godbolt.org/z/1G6G3vcjx (Clang is just as bad.) This is a hypothetical example with sparse case values. This is not the case here, since the classid case values are nicely ordered from OCLASS_CLASS..OCLASS_TRANSFORM (0..37), so they should produce O(2) fast jump tables.Maybe there is some other performance concern to reason about that I'm missing here?/Joel",
"msg_date": "Fri, 26 Mar 2021 07:53:04 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Fri, Mar 26, 2021, at 07:53, Joel Jacobson wrote:\n> On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n>> \"Joel Jacobson\" <joel@compiler.org <mailto:joel%40compiler.org>> writes:\n>> > On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:\n>> >> Ah, of course -- the only way to obtain the acl columns is by going\n>> >> through the catalogs individually, so it won't be possible. I think\n>> >> this could be fixed with some very simple, quick function pg_get_acl()\n>> >> that takes a catalog OID and object OID and returns the ACL; then\n>> >> use aclexplode() to obtain all those details.\n>> \n>> > +1 for adding pg_get_acl().\n>> \n>> I wonder what performance will be like with lots o' objects.\n> \n> I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?\n> \n> Is your performance concern on how such switch statement will be optimized by the C-compiler?\n> ...\n> the classid case values are nicely ordered from OCLASS_CLASS..OCLASS_TRANSFORM (0..37), so they should produce O(2) fast jump tables.\n> \n> Maybe there is some other performance concern to reason about that I'm missing here?\n\nHmm, I think I understand your performance concern now:\n\nAm I right guessing the problem even with a jump table is going to be branch prediction,\nwhich will be poor due to many classids being common?\n\nInteresting, the long UNION ALL variant does not seem to suffer from this problem,\nthanks to explicitly specifying where to find the aclitem/owner-column.\nWe pay the lookup-cost \"compile time\" when writing the pg_ownerships/pg_permissions system views,\ninstead of having to lookup the classids at run-time to go fetch aclitem/owner-info.\n\nThe query planner is also smart enough to understand not all the individuals queries\nneeds to be executed, for the use-case when filtering on a specific classid.\n\n/Joel\n\n\nOn Fri, Mar 26, 2021, at 07:53, Joel Jacobson wrote:On Thu, Mar 25, 2021, at 17:51, Tom 
Lane wrote:\"Joel Jacobson\" <joel@compiler.org> writes:> On Thu, Mar 25, 2021, at 16:16, Alvaro Herrera wrote:>> Ah, of course -- the only way to obtain the acl columns is by going>> through the catalogs individually, so it won't be possible. I think>> this could be fixed with some very simple, quick function pg_get_acl()>> that takes a catalog OID and object OID and returns the ACL; then>> use aclexplode() to obtain all those details.> +1 for adding pg_get_acl().I wonder what performance will be like with lots o' objects.I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?Is your performance concern on how such switch statement will be optimized by the C-compiler?...the classid case values are nicely ordered from OCLASS_CLASS..OCLASS_TRANSFORM (0..37), so they should produce O(2) fast jump tables.Maybe there is some other performance concern to reason about that I'm missing here?Hmm, I think I understand your performance concern now:Am I right guessing the problem even with a jump table is going to be branch prediction,which will be poor due to many classids being common?Interesting, the long UNION ALL variant does not seem to suffer from this problem,thanks to explicitly specifying where to find the aclitem/owner-column.We pay the lookup-cost \"compile time\" when writing the pg_ownerships/pg_permissions system views,instead of having to lookup the classids at run-time to go fetch aclitem/owner-info.The query planner is also smart enough to understand not all the individuals queriesneeds to be executed, for the use-case when filtering on a specific classid./Joel",
"msg_date": "Fri, 26 Mar 2021 10:38:15 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 2021-Mar-26, Joel Jacobson wrote:\n\n> On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n\n> > I wonder what performance will be like with lots o' objects.\n> \n> I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?\n\nNo, we have a generalized object query mechanism, see objectaddress.c\n\n> Is your performance concern on how such switch statement will be optimized by the C-compiler?\n\nI guess he is concerned about the number of catalog accesses.\n\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Saca el libro que tu religi�n considere como el indicado para encontrar la\noraci�n que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Ducl�s)\n\n\n",
"msg_date": "Fri, 26 Mar 2021 07:30:37 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Fri, Mar 26, 2021, at 11:30, Alvaro Herrera wrote:\n> On 2021-Mar-26, Joel Jacobson wrote:\n> \n> > On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n> \n> > > I wonder what performance will be like with lots o' objects.\n> > \n> > I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?\n> \n> No, we have a generalized object query mechanism, see objectaddress.c\n\nThat's where I was looking actually and noticed the switch with 36 cases, in the function getObjectDescription().\n\n/Joel\n\nOn Fri, Mar 26, 2021, at 11:30, Alvaro Herrera wrote:On 2021-Mar-26, Joel Jacobson wrote:> On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:> > I wonder what performance will be like with lots o' objects.> > I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?No, we have a generalized object query mechanism, see objectaddress.cThat's where I was looking actually and noticed the switch with 36 cases, in the function getObjectDescription()./Joel",
"msg_date": "Fri, 26 Mar 2021 13:33:52 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 2021-Mar-26, Joel Jacobson wrote:\n\n> On Fri, Mar 26, 2021, at 11:30, Alvaro Herrera wrote:\n> > On 2021-Mar-26, Joel Jacobson wrote:\n> > \n> > > On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n> > \n> > > > I wonder what performance will be like with lots o' objects.\n> > > \n> > > I guess pg_get_acl() would need to be implemented using a switch(classid) with 36 cases (one for each class)?\n> > \n> > No, we have a generalized object query mechanism, see objectaddress.c\n> \n> That's where I was looking actually and noticed the switch with 36 cases, in the function getObjectDescription().\n\nAh! well, you don't have to repeat that.\n\nAFAICS the way to do it is like AlterObjectOwner_internal obtains data\n-- first do get_catalog_object_by_oid (gives you the HeapTuple that\nrepresents the object), then\nheap_getattr( ..., get_object_attnum_acl(), ..), and there you have the\nACL which you can \"explode\" (or maybe just return as-is).\n\nAFAICS if you do this, it's just one cache lookups per object, or\none indexscan for the cases with no by-OID syscache. It should be much\ncheaper than the UNION ALL query. And you use pg_shdepend to guide\nthis, so you only do it for the objects that you already know are\ninteresting.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"La conclusi�n que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusi�n de ellos\" (Tanenbaum)\n\n\n",
"msg_date": "Fri, 26 Mar 2021 09:43:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Mar-26, Joel Jacobson wrote:\n>> On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n>> I wonder what performance will be like with lots o' objects.\n\n> I guess he is concerned about the number of catalog accesses.\n\nMy concern is basically that you're forcing the join between\npg_shdepend and $everything_else to be done as a nested loop.\nIt will work well, up to where you have so many objects that\nit doesn't ... but the planner will have no way to improve it.\n\nHaving said that, I don't really see a better way either.\nMaterializing $everything_else via a UNION ALL seems like\nno fun from a maintenance perspective, plus we're not that\ngreat on optimizing such constructs either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Mar 2021 09:16:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Fri, Mar 26, 2021, at 14:16, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org <mailto:alvherre%40alvh.no-ip.org>> writes:\n> > On 2021-Mar-26, Joel Jacobson wrote:\n> >> On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:\n> >> I wonder what performance will be like with lots o' objects.\n> \n> > I guess he is concerned about the number of catalog accesses.\n> \n> My concern is basically that you're forcing the join between\n> pg_shdepend and $everything_else to be done as a nested loop.\n> It will work well, up to where you have so many objects that\n> it doesn't ... but the planner will have no way to improve it.\n\nThanks Alvaro and Tom for explaining.\n\n> Having said that, I don't really see a better way either.\n> Materializing $everything_else via a UNION ALL seems like\n> no fun from a maintenance perspective, plus we're not that\n> great on optimizing such constructs either.\n\nI see why pg_shdepend+pg_get_acl() is to prefer.\n\nThat said, I think maintenance of UNION ALL would actually not be too bad,\nsince the system views could initially be generated by a query using information_schema,\nand the same query could update them when new catalogs are added.\n\n/Joel\nOn Fri, Mar 26, 2021, at 14:16, Tom Lane wrote:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:> On 2021-Mar-26, Joel Jacobson wrote:>> On Thu, Mar 25, 2021, at 17:51, Tom Lane wrote:>> I wonder what performance will be like with lots o' objects.> I guess he is concerned about the number of catalog accesses.My concern is basically that you're forcing the join betweenpg_shdepend and $everything_else to be done as a nested loop.It will work well, up to where you have so many objects thatit doesn't ... 
but the planner will have no way to improve it.Thanks Alvaro and Tom for explaining.Having said that, I don't really see a better way either.Materializing $everything_else via a UNION ALL seems likeno fun from a maintenance perspective, plus we're not thatgreat on optimizing such constructs either.I see why pg_shdepend+pg_get_acl() is to prefer.That said, I think maintenance of UNION ALL would actually not be too bad,since the system views could initially be generated by a query using information_schema,and the same query could update them when new catalogs are added./Joel",
"msg_date": "Sat, 27 Mar 2021 21:47:40 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 11.03.21 08:00, Joel Jacobson wrote:\n> Do we prefer \"pg_permissions\" or \"pg_privileges\"?\n\npg_privileges would be better. \"Permissions\" is not an SQL term.\n\n\n\n",
"msg_date": "Tue, 31 Aug 2021 18:52:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "I would be happy to review this patch, but a look through the email leaves me\r\nthinking it may still be waiting on a C implementation of pg_get_acl(). Is that\r\nright? And perhaps a view rename to pg_privileges, following Peter's comment?",
"msg_date": "Fri, 25 Feb 2022 21:12:16 +0000",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Fri, Feb 25, 2022, at 22:12, Chapman Flack wrote:\n> I would be happy to review this patch, but a look through the email leaves me\n> thinking it may still be waiting on a C implementation of pg_get_acl(). Is that\n> right?\n\nNot sure.\n\n> And perhaps a view rename to pg_privileges, following Peter's comment?\n\n+1\n\n/Joel\n\n\n",
"msg_date": "Sat, 26 Feb 2022 09:27:21 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On 02/26/22 03:27, Joel Jacobson wrote:\n> On Fri, Feb 25, 2022, at 22:12, Chapman Flack wrote:\n>> I would be happy to review this patch, but a look through the email leaves me\n>> thinking it may still be waiting on a C implementation of pg_get_acl(). Is that\n>> right?\n> \n> Not sure.\n\nIt looked to me as if the -hackers messages of 25 and 26 March 2021 had\nfound a consensus that a pg_get_acl() function would be a good thing,\nwith the views to be implemented over that.\n\nI'm just not seeing any later patch that adds such a function.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 10 Mar 2022 16:02:13 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Thu, Mar 10, 2022, at 22:02, Chapman Flack wrote:\n> It looked to me as if the -hackers messages of 25 and 26 March 2021 had\n> found a consensus that a pg_get_acl() function would be a good thing,\n> with the views to be implemented over that.\n>\n> I'm just not seeing any later patch that adds such a function.\n\nMy apologies for late reply. Here it comes.\n\nRecap: This patch is about adding two new system views: pg_privileges and pg_ownerships.\n\nChanges since patch 0005 from 2021-03-25:\n\n- Implement SQL-callable pg_get_acl()\nThis is a stripped down version of AlterObjectOwner_internal() from alter.c.\n\n- Rename pg_permissions -> pg_privileges\n\n- Use pg_shdepend + pg_get_acl() in pg_privileges, to avoid slow UNION ALL.\n\n- Fix indentation of the new system views to be consistent with the other views.\n\n- Add documentation of pg_get_acl() to func.sgml\n\n- Move documentation of system views from catalogs.sgml to system-views.sgml\n\n- Much smaller patch, thanks to getting rid of the long UNION ALL view definition:\n 1 file changed, 195 insertions(+), 460 deletions(-)\n\n/Joel",
"msg_date": "Thu, 13 Jun 2024 00:14:09 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "On Thu, Jun 13, 2024, at 00:14, Joel Jacobson wrote:\n> Changes since patch 0005 from 2021-03-25:\n> * 0006-pg_privileges-and-pg_ownerships.patch\n\n- Also much faster now thanks to pg_get_acl():\n\nTest with 100000 tables:\n\nSELECT COUNT(*) FROM pg_permissions_union_all;\nTime: 1466.504 ms (00:01.467)\nTime: 1435.520 ms (00:01.436)\nTime: 1459.396 ms (00:01.459)\n\nSELECT COUNT(*) FROM pg_privileges;\nTime: 292.257 ms\nTime: 288.406 ms\nTime: 294.831 ms\n\n\n",
"msg_date": "Thu, 13 Jun 2024 04:00:03 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
},
{
"msg_contents": "Hmm, strange, the commitfest system didn't pick up the email with patch 0006 for some reason,\nwith message id 0c5a6b79-408c-4910-9b2e-4aa9a7b30f3a@app.fastmail.com\n\nIt's rebased to latest HEAD, so not sure why.\n\nMaybe it got confused when I quickly afterwards sent a new email without a patch?\n\nHere is a new attempt, file content unchanged, just named to 0007 and added \"pg_get_acl\" to the name.\n\nOn Thu, Jun 13, 2024, at 04:00, Joel Jacobson wrote:\n> On Thu, Jun 13, 2024, at 00:14, Joel Jacobson wrote:\n>> Changes since patch 0005 from 2021-03-25:\n>> * 0006-pg_privileges-and-pg_ownerships.patch\n>\n> - Also much faster now thanks to pg_get_acl():\n>\n> Test with 100000 tables:\n>\n> SELECT COUNT(*) FROM pg_permissions_union_all;\n> Time: 1466.504 ms (00:01.467)\n> Time: 1435.520 ms (00:01.436)\n> Time: 1459.396 ms (00:01.459)\n>\n> SELECT COUNT(*) FROM pg_privileges;\n> Time: 292.257 ms\n> Time: 288.406 ms\n> Time: 294.831 ms\n\n-- \nKind regards,\n\nJoel",
"msg_date": "Thu, 13 Jun 2024 07:34:30 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] pg_permissions"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile I was looking at the Citus code base for a project at work, I\nnoticed a really ugly thing. It was a UDF called\nalter_columnar_table_set(). It's clearly there because our current DDL\nis a few bricks shy of a load, as others have phrased such things,\nwhen it comes to accommodating the table AM API. A similar problem\nexists when it comes to changing any GUC ending in \"_libraries\". There\nis no way provided to change an existing value in place, only to\noverwrite it. This is maybe OK if you'll only ever have between 0 and 1\n.SOs loaded, but it gets to be a potentially serious problem if, for\nexample, you have two files in conf.d that set one. Then, we get\nsurprises caused by questions extension implementers really shouldn't\nbe saddled with, like \"what order do such files get included in, and\nwhat happens when a new one appears?\"\n\nThe general issue, as I see it, is one that we can address by\nproviding reference implementations, however tiny, pointless, and\ntrivial, of each of our extension points\nhttps://wiki.postgresql.org/wiki/PostgresServerExtensionPoints. I\nlearned about one, rendezvous variables, while writing this email.\nBeing public APIs, these all really need to be documented in our\nofficial documentation, a thing I started on with the patch I'm\nworking on for the hooks system.\n\nAt a code level, this would be in the spirit of unit tests to ensure\nthat those extension points keep working by putting them in, say,\n`make check-world` so as not to burden casual test runs.\n\nSeparately, it would be reasonable to make some efforts to ensure that\ninteractions among them are either safe or disallowed when attempted,\nwhichever seems reasonable to do. 
We can't cover the entire\ncombinatorial explosion, but adding a cross-check when someone has\nreported a problem we can reasonably anticipate could recur would be a\nbig improvement.\n\nWe could start with \"black hole\" implementations, as Andrew Dunstan,\nMichaël Paquier, and possibly others, have done, but actual working\nsystems would expose more weak points.\n\nWould documenting these APIs be the right place to start?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 6 Mar 2021 19:11:26 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Public APIs"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a suggestion of adding a convenience view,\nallowing quickly looking up all objects owned by a given user.\n\nExample:\n\nSELECT * FROM ownerships WHERE rolname = 'joel' LIMIT 5;\n regclass | obj_desc | rolname\n------------------+-----------------------------------+---------\npg_class | table t | joel\npg_class | table foobar.foobar_policed_table | joel\npg_class | view ownerships | joel\npg_collation | collation foobar.foobar | joel\npg_event_trigger | event trigger foobar | joel\n(5 rows)\n\nThis is similar to the suggested pg_permissions system view in other thread.\n\n/Joel",
"msg_date": "Sun, 07 Mar 2021 01:08:34 +0100",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_ownerships system view"
},
{
"msg_contents": "On 3/7/21 1:08 AM, Joel Jacobson wrote:\n> Attached is a suggestion of adding a convenience view,\n> allowing quickly looking up all objects owned by a given user.\n\nThis definitely seems like a useful feature. I know I am guilty of \ncreating tables as the wrong role more than one time.\n\nAndreas\n\n\n",
"msg_date": "Mon, 8 Mar 2021 17:04:15 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_ownerships system view"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nI propose a small optimization can be added to the tablesync replication code.\n\nThis proposal (and simple patch) was first discussed here [1].\n\nBasic idea is the tablesync could/should detect if there is anything\nto do *before* it enters the apply main loop. Calling\nprocess_sync_tables() before the apply main loop offers a quick way\nout, so the message handling will not be unnecessarily between\nworkers. This will be a small optimization.\n\nBut also, IMO this is a more natural separation of work. E.g tablesync\nworker will finish when the table is synced - not go one extra step...\n\n~~\n\nThis patch was already successfully used for several versions\n(v43-v50) of another 2PC patch [2], but it was eventually removed from\nthere because, although it has its own independent value, it was not\nrequired for that patch series [3].\n\n----\n[1] https://www.postgresql.org/message-id/CAHut%2BPtjk-Qgd3R1a1_tr62CmiswcYphuv0pLmVA-%2B2s8r0Bkw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAHut%2BPsd5nyg-HG6rGO2_5jzXuSA1Eq5%2BB5J2VJo0Q2QWi-1HQ%40mail.gmail.com#1c268eeee3756b32e267d96b7177ba95\n[3] https://www.postgresql.org/message-id/CAA4eK1Jxu-3qxtkfA_dKoquQgGZVcB%2Bk9_-yT5%3D9GDEW84TF%2BA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sun, 7 Mar 2021 12:56:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tablesync early exit"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 7:26 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi hackers.\n>\n> I propose a small optimization can be added to the tablesync replication code.\n>\n> This proposal (and simple patch) was first discussed here [1].\n>\n\nIt might be better if you attach your proposed patch to this thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 7 Mar 2021 08:02:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Sun, Mar 7, 2021 at 1:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Mar 7, 2021 at 7:26 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi hackers.\n> >\n> > I propose a small optimization can be added to the tablesync replication code.\n> >\n> > This proposal (and simple patch) was first discussed here [1].\n> >\n>\n> It might be better if you attach your proposed patch to this thread.\n\nPSA.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 9 Mar 2021 16:56:18 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "Patch v2 is the same; it only needed re-basing to the latest HEAD.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 30 Aug 2021 13:20:30 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "This patch has been through five CFs without any review. It's a very\nshort patch and I'm assuming the only issue is that it's not clear\nwhether it's a good idea or not and there are few developers familiar\nwith this area of code?\n\nAmit, it looks like you were the one who asked for it to be split off\nfrom the logical decoding of 2PC patch in [1]. Can you summarize what\nquestions remain here? Should we just commit this or is there any\nissue that needs to be debated?\n\n[1] https://www.postgresql.org/message-id/CAA4eK1Jxu-3qxtkfA_dKoquQgGZVcB+k9_-yT5=9GDEW84TF+A@mail.gmail.com\n\n\nOn Sat, 6 Mar 2021 at 20:56, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi hackers.\n>\n> I propose a small optimization can be added to the tablesync replication code.\n>\n> This proposal (and simple patch) was first discussed here [1].\n>\n> Basic idea is the tablesync could/should detect if there is anything\n> to do *before* it enters the apply main loop. Calling\n> process_sync_tables() before the apply main loop offers a quick way\n> out, so the message handling will not be unnecessarily between\n> workers. This will be a small optimization.\n>\n> But also, IMO this is a more natural separation of work. 
E.g tablesync\n> worker will finish when the table is synced - not go one extra step...\n>\n> ~~\n>\n> This patch was already successfully used for several versions\n> (v43-v50) of another 2PC patch [2], but it was eventually removed from\n> there because, although it has its own independent value, it was not\n> required for that patch series [3].\n>\n> ----\n> [1] https://www.postgresql.org/message-id/CAHut%2BPtjk-Qgd3R1a1_tr62CmiswcYphuv0pLmVA-%2B2s8r0Bkw%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/flat/CAHut%2BPsd5nyg-HG6rGO2_5jzXuSA1Eq5%2BB5J2VJo0Q2QWi-1HQ%40mail.gmail.com#1c268eeee3756b32e267d96b7177ba95\n> [3] https://www.postgresql.org/message-id/CAA4eK1Jxu-3qxtkfA_dKoquQgGZVcB%2Bk9_-yT5%3D9GDEW84TF%2BA%40mail.gmail.com\n>\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>\n>\n\n\n--\ngreg\n\n\n",
"msg_date": "Tue, 15 Mar 2022 15:45:43 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 8:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Patch v2 is the same; it only needed re-basing to the latest HEAD.\n>\n\nWhy do you think it is correct to exit before trying to receive any\nmessage? How will we ensure whether the apply worker has processed any\nmessage? At the beginning of function LogicalRepApplyLoop(),\nlast_received is the LSN where the copy has finished in the case of\ntablesync worker. I think we need to receive the message before trying\nto ensure whether we have synced with the apply worker or not.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 10:36:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 1:16 AM Greg Stark <stark@mit.edu> wrote:\n>\n> This patch has been through five CFs without any review. It's a very\n> short patch and I'm assuming the only issue is that it's not clear\n> whether it's a good idea or not and there are few developers familiar\n> with this area of code?\n>\n> Amit, it looks like you were the one who asked for it to be split off\n> from the logical decoding of 2PC patch in [1]. Can you summarize what\n> questions remain here? Should we just commit this or is there any\n> issue that needs to be debated?\n>\n\nLooking closely at this, I am not sure whether this is a good idea or\nnot. Responded accordingly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 10:38:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 8:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Patch v2 is the same; it only needed re-basing to the latest HEAD.\n> >\n>\n> Why do you think it is correct to exit before trying to receive any\n> message?\n\nI think the STATE_CATCHUP state guarantees the apply worker must have\nreceived (or tried to receive) a message. See the next answer.\n\n> How will we ensure whether the apply worker has processed any\n> message?\n\nAll this patch code does is call process_syncing_tables, which\ndelegates to process_syncing_tables_for_sync (because the call is from\na tablesync worker). This function code can’t do anything unless the\ntablesync worker is in STATE_CATCHUP state, and that cannot happen\nunless it was explicitly set to that state by the apply worker.\n\nOn the other side of the coin, the apply worker can only set that\nsyncworker->relstate = SUBREL_STATE_CATCHUP from within function\nprocess_syncing_tables_for_apply, and AFAIK that function is only\ncalled when the apply worker has either handled a message, (or the\nwalrcv_receive in the LogicalRepApplyLoop received nothing).\n\nSo I think the STATE_CATCHUP mechanism itself ensures the apply worker\n*must* have already processed a message (or there was no message to\nprocess).\n\n> At the beginning of function LogicalRepApplyLoop(),\n> last_received is the LSN where the copy has finished in the case of\n> tablesync worker. I think we need to receive the message before trying\n> to ensure whether we have synced with the apply worker or not.\n>\n\nI think the STATE_CATCHUP guarantees the apply worker must have\nreceived (or tried to receive) a message. 
See the previous answer.\n\n~~~\n\nAFAIK this patch is OK, but since it is not particularly urgent I've\nbumped this to the next CommitFest [1] instead of trying to jam it\ninto PG15 at the last minute.\n\nBTW - There were some useful logfiles I captured a very long time ago\n[2]. They show the behaviour without/with this patch.\n\n------\n[1] https://commitfest.postgresql.org/37/3062/\n[2] https://www.postgresql.org/message-id/CAHut+Ptjk-Qgd3R1a1_tr62CmiswcYphuv0pLmVA-+2s8r0Bkw@mail.gmail.com\n\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 1 Apr 2022 19:22:20 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 1:52 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I think the STATE_CATCHUP guarantees the apply worker must have\n> received (or tried to receive) a message. See the previous answer.\n>\n\nSorry, I intended to say: until the sync worker has received any message.\nThe point is that the LSN up to which the copy has finished might actually\nbe later than some of the in-progress transactions on the server. It\nmay not be a good idea to blindly skip those changes if the apply\nworker has already received those changes (say via a 'streaming'\nmode). Today, all such changes would be written to the file and\napplied at commit time but tomorrow, we can have an implementation\nwhere we can apply such changes (via some background worker) by\nskipping changes related to the table for which the table-sync worker\nis in-progress. Now, in such a scenario, unless we allow the table\nsync worker to process more messages, we will end up losing some\nchanges for that particular table.\n\nAs per my understanding, this is safe as per the current code but it\ncan't be guaranteed for future implementations, and the extra work is\nonly that needed to receive the messages for one transaction. I\nstill don't think that it is a good idea to pursue this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 2 Apr 2022 11:47:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 5:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 1, 2022 at 1:52 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Mar 16, 2022 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think the STATE_CATCHUP guarantees the apply worker must have\n> > received (or tried to receive) a message. See the previous answer.\n> >\n>\n> Sorry, I intend to say till the sync worker has received any message.\n> The point is that LSN till where the copy has finished might actually\n> be later than some of the in-progress transactions on the server. It\n> may not be a good idea to blindly skip those changes if the apply\n> worker has already received those changes (say via a 'streaming'\n> mode). Today, all such changes would be written to the file and\n> applied at commit time but tomorrow, we can have an implementation\n> where we can apply such changes (via some background worker) by\n> skipping changes related to the table for which the table-sync worker\n> is in-progress. Now, in such a scenario, unless, we allow the table\n> sync worker to process more messages, we will end up losing some\n> changes for that particular table.\n>\n> As per my understanding, this is safe as per the current code but it\n> can't be guaranteed for future implementations and the amount of extra\n> work is additional work to receive the messages for one transaction. I\n> still don't think that it is a good idea to pursue this patch.\n\nIIUC you are saying that my patch is good today, but it may cause\nproblems in a hypothetical future if the rest of the replication logic\nis implemented differently.\n\nAnyway, it seems there is no chance of this getting committed, so it\nis time for me to stop flogging this dead horse.\n\nI will remove this from the CF.\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 5 Apr 2022 14:07:16 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tablesync early exit"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 9:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sat, Apr 2, 2022 at 5:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 1, 2022 at 1:52 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 16, 2022 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I think the STATE_CATCHUP guarantees the apply worker must have\n> > > received (or tried to receive) a message. See the previous answer.\n> > >\n> >\n> > Sorry, I intend to say till the sync worker has received any message.\n> > The point is that LSN till where the copy has finished might actually\n> > be later than some of the in-progress transactions on the server. It\n> > may not be a good idea to blindly skip those changes if the apply\n> > worker has already received those changes (say via a 'streaming'\n> > mode). Today, all such changes would be written to the file and\n> > applied at commit time but tomorrow, we can have an implementation\n> > where we can apply such changes (via some background worker) by\n> > skipping changes related to the table for which the table-sync worker\n> > is in-progress. Now, in such a scenario, unless, we allow the table\n> > sync worker to process more messages, we will end up losing some\n> > changes for that particular table.\n> >\n> > As per my understanding, this is safe as per the current code but it\n> > can't be guaranteed for future implementations and the amount of extra\n> > work is additional work to receive the messages for one transaction. I\n> > still don't think that it is a good idea to pursue this patch.\n>\n> IIUC you are saying that my patch is good today, but it may cause\n> problems in a hypothetical future if the rest of the replication logic\n> is implemented differently.\n>\n\nThe approach I have alluded to above is already proposed earlier on\n-hackers [1] to make streaming transactions perform better. 
So, it is\nnot completely hypothetical.\n\n[1] - https://www.postgresql.org/message-id/8eda5118-2dd0-79a1-4fe9-eec7e334de17%40postgrespro.ru\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Apr 2022 11:25:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tablesync early exit"
}
] |
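The CATCHUP handshake that Peter relies on in the thread above can be illustrated with a small, purely hypothetical Python model. This is not the PostgreSQL implementation; the state names and function names only loosely mirror process_syncing_tables_for_sync() and the SUBREL_STATE_* constants, and the message-handling loop is reduced to a single pass:

```python
# Hypothetical, heavily simplified model of the tablesync/apply-worker
# handshake discussed in this thread (illustrative only).

SUBREL_STATE_DATASYNC = "d"   # initial copy finished, waiting to sync
SUBREL_STATE_CATCHUP = "c"    # apply worker has signalled catchup
SUBREL_STATE_SYNCDONE = "s"   # tablesync worker is done

class TablesyncWorker:
    def __init__(self):
        self.relstate = SUBREL_STATE_DATASYNC
        self.exited = False

    def process_syncing_tables(self):
        # The worker may only finish once the apply worker has set
        # CATCHUP, which is the guarantee Peter points to above.
        if self.relstate == SUBREL_STATE_CATCHUP:
            self.relstate = SUBREL_STATE_SYNCDONE
            self.exited = True

    def apply_loop(self, early_exit_check):
        # The proposed optimization: check *before* entering the loop.
        if early_exit_check:
            self.process_syncing_tables()
            if self.exited:
                return "exited-before-loop"
        # ... a real worker would receive and apply messages here ...
        self.process_syncing_tables()
        return "exited-after-loop" if self.exited else "still-running"

# No catchup signal yet: the early check is a harmless no-op.
assert TablesyncWorker().apply_loop(early_exit_check=True) == "still-running"

# Catchup already signalled before the loop starts: with the patch the
# worker exits without entering the message-handling loop at all.
w = TablesyncWorker()
w.relstate = SUBREL_STATE_CATCHUP
assert w.apply_loop(early_exit_check=True) == "exited-before-loop"

# Without the patch the same worker exits only after a loop iteration.
w = TablesyncWorker()
w.relstate = SUBREL_STATE_CATCHUP
assert w.apply_loop(early_exit_check=False) == "exited-after-loop"
```

Amit's objection maps onto this sketch as well: the early exit skips the "receive and apply messages" step entirely, which is safe only as long as no future implementation depends on the sync worker seeing those messages.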
[
{
"msg_contents": "Hi all,\n\nAttached is a proof-of-concept patch that allows Postgres to perform\npg_upgrade if the instance has Millions of objects.\n\nIt would be great if someone could take a look and see if this patch is in\nthe right direction. There are some pending tasks (such as documentation /\npg_resetxlog vs pg_resetwal related changes) but for now, the patch helps\nremove a stalemate where if a Postgres instance has a large number\n(accurately speaking 146+ Million) of Large Objects, pg_upgrade fails. This\nis easily reproducible and besides deleting Large Objects before upgrade,\nthere is no other (apparent) way for pg_upgrade to complete.\n\nThe patch (attached):\n- Applies cleanly on REL9_6_STABLE -\nc7a4fc3dd001646d5938687ad59ab84545d5d043\n- 'make check' passes\n- Allows the user to provide a constant via pg_upgrade command-line, that\noverrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n(window of) Transaction IDs available for pg_upgrade to complete. \n\n\nSample argument for pg_upgrade:\n$ /opt/postgres/96/bin/pg_upgrade --max-limit-xid 1000000000 --old-bindir\n...\n\n\nWith this patch, pg_upgrade is now able to upgrade a v9.5 cluster with 500\nmillion Large Objects successfully to v9.6 - some stats below:\n\nSource Postgres - v9.5.24\nTarget Version - v9.6.21\nLarge Object Count: 500 Million Large Objects\nMachine - r5.4xlarge (16vCPU / 128GB RAM + 1TB swap)\nMemory used during pg_upgrade - ~350GB\nTime taken - 25+ hrs. 
(tested twice) - (All LOs processed sequentially ->\nScope for optimization)\n\nAlthough counter-intuitive, for this testing purpose all Large Objects were\nsmall (essentially the idea was to test the count) and created by using\nsomething like this:\n\nseq 1 50000 | xargs -n 1 -i -P 10 /opt/postgres/95/bin/psql -c \"select\nlo_from_bytea(0, '\\xffffff00') from generate_series(1,10000);\" > /dev/null\n\nI am not married to the patch (especially the argument name) but ideally I'd\nprefer a way to get this upgrade going without a patch. For now, I am unable\nto find any other way to upgrade a v9.5 Postgres database in this scenario,\nfacing End-of-Life.\n\nReference:\n1) 2 Billion constant -\nhttps://github.com/postgres/postgres/blob/ca3b37487be333a1d241dab1bbdd17a211\na88f43/src/bin/pg_resetwal/pg_resetwal.c#L444\n\nThanks,\nRobins Tharakan\n\n> -----Original Message-----\n> From: Tharakan, Robins\n> Sent: Wednesday, 3 March 2021 10:36 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: pg_upgrade failing for 200+ million Large Objects\n> \n> Hi,\n> \n> While reviewing a failed upgrade from Postgres v9.5 (to v9.6) I saw that\n> the\n> instance had ~200 million (in-use) Large Objects. 
I was able to reproduce\n> this on a test instance which too fails with a similar error.\n> \n> \n> pg_restore: executing BLOB 4980622\n> pg_restore: WARNING: database with OID 0 must be vacuumed within 1000001\n> transactions\n> HINT: To avoid a database shutdown, execute a database-wide VACUUM in\n> that\n> database.\n> You might also need to commit or roll back old prepared transactions.\n> pg_restore: executing BLOB 4980623\n> pg_restore: [archiver (db)] Error while PROCESSING TOC:\n> pg_restore: [archiver (db)] Error from TOC entry 2565; 2613 4980623 BLOB\n> 4980623 postgres\n> pg_restore: [archiver (db)] could not execute query: ERROR: database is\n> not\n> accepting commands to avoid wraparound data loss in database with OID 0\n> HINT: Stop the postmaster and vacuum that database in single-user mode.\n> You might also need to commit or roll back old prepared transactions.\n> Command was: SELECT pg_catalog.lo_create('4980623');\n> \n> \n> \n> To remove the obvious possibilities, these Large Objects that are still\n> in-use (so vacuumlo wouldn't help), giving more system resources doesn't\n> help, moving Large Objects around to another database doesn't help (since\n> this is cluster-wide restriction), the source instance is nowhere close\n> to\n> wraparound and lastly recent-most minor versions don't help either (I\n> tried\n> compiling 9_6_STABLE + upgrade database with 150 million LO and still\n> encountered the same issue).\n> \n> Do let me know if I am missing something obvious but it appears that this\n> is\n> happening owing to 2 things coming together:\n> \n> * Each Large Object is migrated in its own transaction during pg_upgrade\n> * pg_resetxlog appears to be narrowing the window (available for\n> pg_upgrade)\n> to ~146 Million XIDs (2^31 - 1 million XID wraparound margin - 2 billion\n> which is a hard-coded constant - see [1] - in what appears to be an\n> attempt\n> to force an Autovacuum Wraparound session soon after upgrade completes).\n> \n> 
Ideally such an XID based restriction, is limiting for an instance that's\n> actively using a lot of Large Objects. Besides forcing AutoVacuum\n> Wraparound\n> logic to kick in soon after, I am unclear what much else it aims to do.\n> What\n> it does seem to be doing is to block Major Version upgrades if the\n> pre-upgrade instance has >146 Million Large Objects (half that, if the LO\n> additionally requires ALTER LARGE OBJECT OWNER TO for each of those\n> objects\n> during pg_restore)\n> \n> For long-term these ideas came to mind, although am unsure which are\n> low-hanging fruits and which outright impossible - For e.g. clubbing\n> multiple objects in a transaction [2] / Force AutoVacuum post upgrade\n> (and\n> thus remove this limitation altogether) or see if \"pg_resetxlog -x\" (from\n> within pg_upgrade) could help in some way to work-around this limitation.\n> \n> Is there a short-term recommendation for this scenario?\n> \n> I can understand a high number of small-sized objects is not a great way\n> to\n> use pg_largeobject (since Large Objects was intended to be for, well,\n> 'large\n> objects') but this magic number of Large Objects is now a stalemate at\n> this\n> point (with respect to v9.5 EOL).\n> \n> \n> Reference:\n> 1) pg_resetxlog -\n> https://github.com/postgres/postgres/blob/ca3b37487be333a1d241dab1bbdd17a\n> 211\n> a88f43/src/bin/pg_resetwal/pg_resetwal.c#L444\n> 2)\n> https://www.postgresql.org/message-id/ed7d86a1-b907-4f53-9f6e-\n> 63482d2f2bac%4\n> 0manitou-mail.org\n> \n> -\n> Thanks\n> Robins Tharakan",
"msg_date": "Sun, 7 Mar 2021 08:43:28 +0000",
"msg_from": "\"Tharakan, Robins\" <tharar@amazon.com>",
"msg_from_op": true,
"msg_subject": "RE: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "> On 7 Mar 2021, at 09:43, Tharakan, Robins <tharar@amazon.com> wrote:\n\n> The patch (attached):\n> - Applies cleanly on REL9_6_STABLE -\n> c7a4fc3dd001646d5938687ad59ab84545d5d043\n\nDid you target 9.6 because that's where you want to upgrade to, or is this not\na problem on HEAD? If it's still a problem on HEAD you should probably submit\nthe patch against there. You probably also want to add it to the next commit\nfest to make sure it's not forgotten about: https://commitfest.postgresql.org/33/\n\n> I am not married to the patch (especially the argument name) but ideally I'd\n> prefer a way to get this upgrade going without a patch. For now, I am unable\n> to find any other way to upgrade a v9.5 Postgres database in this scenario,\n> facing End-of-Life.\n\nIt's obviously not my call to make in any shape or form, but this doesn't\nreally seem to fall under what is generally backported into a stable release?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n",
"msg_date": "Sun, 7 Mar 2021 23:41:42 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 07.03.21 09:43, Tharakan, Robins wrote:\n> Attached is a proof-of-concept patch that allows Postgres to perform\n> pg_upgrade if the instance has Millions of objects.\n> \n> It would be great if someone could take a look and see if this patch is in\n> the right direction. There are some pending tasks (such as documentation /\n> pg_resetxlog vs pg_resetwal related changes) but for now, the patch helps\n> remove a stalemate where if a Postgres instance has a large number\n> (accurately speaking 146+ Million) of Large Objects, pg_upgrade fails. This\n> is easily reproducible and besides deleting Large Objects before upgrade,\n> there is no other (apparent) way for pg_upgrade to complete.\n> \n> The patch (attached):\n> - Applies cleanly on REL9_6_STABLE -\n> c7a4fc3dd001646d5938687ad59ab84545d5d043\n> - 'make check' passes\n> - Allows the user to provide a constant via pg_upgrade command-line, that\n> overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the\n> (window of) Transaction IDs available for pg_upgrade to complete.\n\nCould you explain what your analysis of the problem is and why this \npatch (might) fix it?\n\nRight now, all I see here is, pass a big number via a command-line \noption and hope it works.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 11:25:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
}
] |
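The "146+ Million" ceiling discussed in this thread follows from simple arithmetic over the constants the first email references. A quick sketch, assuming (as described above) that pg_restore consumes roughly one transaction ID per restored large object; the server's real stop-limit logic is more involved, and this only reproduces the rough window size:

```python
# Back-of-the-envelope arithmetic for the XID window left to pg_upgrade.
# The 2-billion value is the hard-coded constant in pg_resetwal.c cited
# in the thread; the 1-million value is the wraparound safety margin
# mentioned in the same email.

XID_COMPARISON_HORIZON = 2**31      # 2^31 XIDs before wraparound territory
RESETWAL_CONSTANT = 2_000_000_000   # set by pg_resetxlog / pg_resetwal
WRAPAROUND_MARGIN = 1_000_000       # safety margin before forced shutdown

available_xids = XID_COMPARISON_HORIZON - RESETWAL_CONSTANT - WRAPAROUND_MARGIN
assert available_xids == 146_483_648  # i.e. ~146 million XIDs

# One transaction per large object means restores with more objects than
# the window cannot complete; with an additional ALTER LARGE OBJECT
# OWNER TO per object, only about half as many fit.
assert 200_000_000 > available_xids   # the failing case in the thread
assert 146_000_000 < available_xids   # "146+ Million" just fits
```

This also shows why the proposed `--max-limit-xid 1000000000` workaround helps: lowering the 2-billion constant by a billion widens the window by the same amount.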
[
{
"msg_contents": "The code in pg_stat_get_subscription() appears to believe that it\ncan return a set of rows, but its pg_proc entry does not have\nproretset set. It may be that this somehow accidentally fails\nto malfunction when the function is used via the system views,\nbut if you try to call it directly it falls over:\n\nregression=# select pg_stat_get_subscription(0);\nERROR: set-valued function called in context that cannot accept a set\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Mar 2021 13:29:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Why isn't pg_stat_get_subscription() marked as proretset?"
},
{
"msg_contents": "I wrote:\n> The code in pg_stat_get_subscription() appears to believe that it\n> can return a set of rows, but its pg_proc entry does not have\n> proretset set. It may be that this somehow accidentally fails\n> to malfunction when the function is used via the system views,\n> but if you try to call it directly it falls over:\n> regression=# select pg_stat_get_subscription(0);\n> ERROR: set-valued function called in context that cannot accept a set\n\nIndeed, the reason we have not noticed this mistake is that\nnodeFunctionscan.c and execSRF.c do not complain if a function-in-FROM\nreturns a tuplestore without having been marked as proretset.\nThat seems like a bad idea: it only accidentally works at all,\nI think, and we might break such cases unknowingly via future code\nrearrangement in that area. There are also bad consequences\nelsewhere, such as that the planner mistakenly expects the function\nto return just one row, which could result in poor plan choices.\n\nSo I think we should not just fix the bogus pg_proc marking, but\nalso change the executor to complain if a non-SRF tries to return\na set. I propose the attached.\n\n(I initially had it complain if a non-SRF returns returnMode ==\nSFRM_ValuePerCall and isDone == ExprEndResult, but it turns out that\nplperl sometimes does that as a way of returning NULL. I'm not\nsufficiently excited about this to go change that, so the patch lets\nthat case pass.)\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 08 Mar 2021 14:25:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Why isn't pg_stat_get_subscription() marked as proretset?"
}
] |
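The mismatch Tom describes (a function that returns a set while its pg_proc entry says it does not) can be sketched with a toy, non-PostgreSQL model. All names and behavior here are illustrative; this is not how the executor is actually structured:

```python
# Toy model: a function that yields multiple rows but is registered with
# proretset=False works in a FROM clause (which tolerated the missing
# flag) yet fails in a context that cannot accept a set.

def pg_stat_get_subscription():
    # Stand-in for the C function: one row per subscription worker.
    yield ("sub1", 10)
    yield ("sub2", 20)

def call_from_clause(func):
    # Models nodeFunctionscan.c: accepts however many rows come back,
    # which is why the bogus pg_proc entry went unnoticed in the views.
    return list(func())

def call_in_select_list(func, proretset):
    # Models a scalar expression context: a set is only acceptable if
    # the function is declared as set-returning.
    rows = list(func())
    if len(rows) > 1 and not proretset:
        raise RuntimeError(
            "set-valued function called in context that cannot accept a set")
    return rows

# The system views go through the FROM-clause path, so they work.
assert call_from_clause(pg_stat_get_subscription) == [("sub1", 10), ("sub2", 20)]

# With the incorrect proretset=False marking, direct invocation fails,
# reproducing the error from the first message in the thread.
try:
    call_in_select_list(pg_stat_get_subscription, proretset=False)
    assert False, "expected an error"
except RuntimeError:
    pass

# Fixing the marking (proretset=True) makes both contexts agree.
assert len(call_in_select_list(pg_stat_get_subscription, proretset=True)) == 2
```

Tom's proposed executor change closes the remaining gap: the FROM-clause path would also complain when an unmarked function returns a set, instead of accepting it by accident.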
[
{
"msg_contents": "Currently pgbench uses plain COPY to populate the pgbench_accounts\ntable. By adding the FREEZE option to COPY, the time to perform \"pgbench\n-i\" will be significantly reduced.\n\nCurrent master:\npgbench -i -s 100\n:\n:\ndone in 70.78 s (drop tables 0.21 s, create tables 0.02 s, client-side generate 12.42 s, vacuum 51.11 s, primary keys 7.02 s).\n\nUsing FREEZE:\ndone in 16.86 s (drop tables 0.20 s, create tables 0.01 s, client-side generate 11.86 s, vacuum 0.25 s, primary keys 4.53 s).\n\nAs you can see, the total time drops from 70.78 seconds to 16.86 seconds,\nthat is 4.1 times faster. This is mainly because vacuum takes only\n0.25 seconds after COPY FREEZE while unpatched pgbench takes 51.11\nseconds, which is 204 times slower.\n\nThanks for the COPY FREEZE patch recently committed:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7db0cd2145f2bce84cac92402e205e4d2b045bf2\n\nAttached is a one-line patch for this.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Mon, 08 Mar 2021 14:39:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Using COPY FREEZE in pgbench"
},
{
"msg_contents": "On Mon, 2021-03-08 at 14:39 +0900, Tatsuo Ishii wrote:\n> Currently pgbench uses plain COPY to populate pgbench_accounts\n> table. With adding FREEZE option to COPY, the time to perform \"pgbench\n> -i\" will be significantly reduced.\n> \n> Curent master:\n> pgbench -i -s 100\n> :\n> :\n> done in 70.78 s (drop tables 0.21 s, create tables 0.02 s, client-side generate 12.42 s, vacuum 51.11 s, primary keys 7.02 s).\n> \n> Using FREEZE:\n> done in 16.86 s (drop tables 0.20 s, create tables 0.01 s, client-side generate 11.86 s, vacuum 0.25 s, primary keys 4.53 s).\n> \n> As you can see total time drops from 70.78 seconds to 16.86 seconds,\n> that is 4.1 times faster. This is mainly because vacuum takes only\n> 0.25 seconds after COPY FREEZE while unpatched pgbench takes 51.11\n> seconds, which is 204 times slower.\n> \n> Thanks for the COPY FREEZE patch recently committed:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7db0cd2145f2bce84cac92402e205e4d2b045bf2\n> \n> Attached is one line patch for this.\n\nThat is indeed low hanging fruit and an improvement.\n\n> -\tres = PQexec(con, \"copy pgbench_accounts from stdin\");\n> +\tres = PQexec(con, \"copy pgbench_accounts from stdin freeze\");\n\nI think it would be better to use the official syntax and put the \"freeze\"\nin parentheses. Perhaps the old syntax will be desupported some day.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 08 Mar 2021 11:19:55 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\nHello Tatsuo-san,\n\n> Currently pgbench uses plain COPY to populate pgbench_accounts\n> table. With adding FREEZE option to COPY, the time to perform \"pgbench\n> -i\" will be significantly reduced.\n>\n> Curent master:\n> pgbench -i -s 100\n> done in 70.78 s (drop tables 0.21 s, create tables 0.02 s, client-side generate 12.42 s, vacuum 51.11 s, primary keys 7.02 s).\n>\n> Using FREEZE:\n> done in 16.86 s (drop tables 0.20 s, create tables 0.01 s, client-side generate 11.86 s, vacuum 0.25 s, primary keys 4.53 s).\n\nThat looks good!\n\nAs COPY FREEZE was introduced in 9.3, this means that loading data would \nbreak with previous versions. Pgbench attempts at being compatible with \nolder versions. I'm wondering whether we should not care or if we should \nattempt some compatibility layer. It seems enough to test \n\"PQserverVersion() >= 90300\"?\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 8 Mar 2021 11:32:42 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "Hi Fabien,\n\n> That looks good!\n> \n> As COPY FREEZE was introduced in 9.3, this means that loading data\n> would break with previous versions. Pgbench attempts at being\n> compatible with older versions. I'm wondering whether we should not\n> care or if we should attempt some compatibility layer. It seems enough\n> to test \"PQserverVersion() >= 90300\"?\n\nGood point.\n\nUnfortunately with pre-14 COPY FREEZE we cannot get the speed up\neffect because it requires the commit:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7db0cd2145f2bce84cac92402e205e4d2b045bf2\nwhich was there only in the master branch as of Jan 17, 2021.\n\nSo I think adding \"freeze\" to the copy statement should only happen in\nPostgreSQL 14 or later. Probably the test should be\n\"PQserverVersion() >= 140000\" I think. Attached is the patch doing\nwhat you suggest.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Mon, 08 Mar 2021 20:22:21 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": ">> -\tres = PQexec(con, \"copy pgbench_accounts from stdin\");\n>> +\tres = PQexec(con, \"copy pgbench_accounts from stdin freeze\");\n> \n> I think it would be better to use the official syntax and put the \"freeze\"\n> in parentheses. Perhaps the old syntax will be desupported some day.\n\nAgreed.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 08 Mar 2021 20:23:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> Hi Fabien,\n> \n>> That looks good!\n>> \n>> As COPY FREEZE was introduced in 9.3, this means that loading data\n>> would break with previous versions. Pgbench attempts at being\n>> compatible with older versions. I'm wondering whether we should not\n>> care or if we should attempt some compatibility layer. It seems enough\n>> to test \"PQserverVersion() >= 90300\"?\n> \n> Good point.\n> \n> Unfortunately with pre-14 COPY FREEZE we cannot get the speed up\n> effect because it requires the commit:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7db0cd2145f2bce84cac92402e205e4d2b045bf2\n> which was there only in the master branch as of Jan 17, 2021.\n> \n> So I think adding \"freeze\" to the copy statement should only happen in\n> PostgreSQL 14 or later. Probably the test should be\n> \"PQserverVersion() >= 140000\" I think. Attached is the patch doing\n> what you suggest.\n\nI have created a CommitFest entry for this.\nhttps://commitfest.postgresql.org/33/3034/\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 08 Mar 2021 20:52:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "Hello Tatsuo-san,\n\n>> So I think adding \"freeze\" to the copy statement should only happen in\n>> PostgreSQL 14 or later. Probably the test should be\n>> \"PQserverVersion() >= 140000\" I think. Attached is the patch doing\n>> what you suggest.\n>\n> I have created a CommitFest entry for this.\n> https://commitfest.postgresql.org/33/3034/\n\nMy 0.02 €\n\nAfter giving it some thought, ISTM that there could also be a performance \nimprovement with copy freeze from older version, so I'd suggest to add it \nafter 9.3 where the option was added, i.e. 90300.\n\nAlso, I do not think it is worth to fail on a 0 pqserverversion, just keep \nthe number and consider it a very old version?\n\nQuestion: should there be a word about copy using the freeze option? I'd \nsay yes, in the section describing \"g\".\n\n-- \nFabien.",
"msg_date": "Sat, 13 Mar 2021 22:16:47 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": ">> I have created a CommitFest entry for this.\n>> https://commitfest.postgresql.org/33/3034/\n> \n> My 0.02 €\n> \n> After giving it some thought, ISTM that there could also be a\n> performance improvement with copy freeze from older version, so I'd\n> suggest to add it after 9.3 where the option was added, i.e. 90300.\n\nYou misunderstand about COPY FREEZE. Pre-13 COPY FREEZE does not\ncontribute a performance improvement. See discussions for more details.\nhttps://www.postgresql.org/message-id/CAMkU%3D1w3osJJ2FneELhhNRLxfZitDgp9FPHee08NT2FQFmz_pQ@mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/CABOikdN-ptGv0mZntrK2Q8OtfUuAjqaYMGmkdU1dCKFtUxVLrg%40mail.gmail.com\n\n> Also, I do not think it is worth to fail on a 0 pqserverversion, just\n> keep the number and consider it a very old version?\n\nIf pqserverversion fails, then following PQ calls are likely fail\ntoo. What's a benefit to continue after pqserverversion returns 0?\n\n> Question: should there be a word about copy using the freeze option?\n> I'd say yes, in the section describing \"g\".\n\nCan you elaborate about \"section describing \"g\"? I am not sure what\nyou mean.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 14 Mar 2021 09:22:21 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": ">> After giving it some thought, ISTM that there could also be a\n>> performance improvement with copy freeze from older version, so I'd\n>> suggest to add it after 9.3 where the option was added, i.e. 90300.\n> \n> You misunderstand about COPY FREEZE. Pre-13 COPY FREEZE does not\n\nOops. I meant Pre-14, not pre-13.\n\n> contribute a performance improvement. See discussions for more details.\n> https://www.postgresql.org/message-id/CAMkU%3D1w3osJJ2FneELhhNRLxfZitDgp9FPHee08NT2FQFmz_pQ@mail.gmail.com\n> https://www.postgresql.org/message-id/flat/CABOikdN-ptGv0mZntrK2Q8OtfUuAjqaYMGmkdU1dCKFtUxVLrg%40mail.gmail.com\n> \n>> Also, I do not think it is worth to fail on a 0 pqserverversion, just\n>> keep the number and consider it a very old version?\n> \n> If pqserverversion fails, then following PQ calls are likely fail\n> too. What's a benefit to continue after pqserverversion returns 0?\n> \n>> Question: should there be a word about copy using the freeze option?\n>> I'd say yes, in the section describing \"g\".\n> \n> Can you elaborate about \"section describing \"g\"? I am not sure what\n> you mean.\n> \n> Best regards,\n> --\n> Tatsuo Ishii\n> SRA OSS, Inc. Japan\n> English: http://www.sraoss.co.jp/index_en.php\n> Japanese:http://www.sraoss.co.jp\n> \n> \n\n\n",
"msg_date": "Sun, 14 Mar 2021 09:28:34 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\nHello,\n\n>> After giving it some thought, ISTM that there could also be a\n>> performance improvement with copy freeze from older version, so I'd\n>> suggest to add it after 9.3 where the option was added, i.e. 90300.\n>\n> You misunderstand about COPY FREEZE. Pre-13 COPY FREEZE does not\n> contribute a performance improvement. See discussions for more details.\n> https://www.postgresql.org/message-id/CAMkU%3D1w3osJJ2FneELhhNRLxfZitDgp9FPHee08NT2FQFmz_pQ@mail.gmail.com\n> https://www.postgresql.org/message-id/flat/CABOikdN-ptGv0mZntrK2Q8OtfUuAjqaYMGmkdU1dCKFtUxVLrg%40mail.gmail.com\n\nOk. ISTM that the option should be triggered as soon as it is available, \nbut you do as you wish.\n\n>> Also, I do not think it is worth to fail on a 0 pqserverversion, just\n>> keep the number and consider it a very old version?\n>\n> If pqserverversion fails, then following PQ calls are likely fail\n> too. What's a benefit to continue after pqserverversion returns 0?\n\nI'm unsure how this could happen ever. The benefit of not caring is less \nlines of codes in pgbench and a later call will catch the issue anyway, if \nlibpq is corrupted.\n\n>> Question: should there be a word about copy using the freeze option?\n>> I'd say yes, in the section describing \"g\".\n>\n> Can you elaborate about \"section describing \"g\"? I am not sure what\n> you mean.\n\nThe \"g\" item in the section describing initialization steps (i.e. option \n-I). I'd suggest just to replace \"COPY\" with \"COPY FREEZE\" in the \nsentence.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 14 Mar 2021 09:48:22 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> Ok. ISTM that the option should be triggered as soon as it is\n> available, but you do as you wish.\n\nCan you elaborate why you think that using COPY FREEZE before 14 is\nbeneficial? Or do you want to standardize to use COPY FREEZE?\n\n> I'm unsure how this could happen ever. The benefit of not caring is\n> less lines of codes in pgbench and a later call will catch the issue\n> anyway, if libpq is corrupted.\n\nI have looked in the code of PQprotocolVersion. The only case in which\nit returns 0 is when there's no connection. Yes, you are right. Once the\nconnection has been successfully established, there's no chance it\nfails. So I agree with you.\n\n>>> Question: should there be a word about copy using the freeze option?\n>>> I'd say yes, in the section describing \"g\".\n>>\n>> Can you elaborate about \"section describing \"g\"? I am not sure what\n>> you mean.\n> \n> The \"g\" item in the section describing initialization steps\n> (i.e. option -I). I'd suggest just to replace \"COPY\" with \"COPY\n> FREEZE\" in the sentence.\n\nOk. The section needs to be modified.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 14 Mar 2021 21:14:49 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> I have looked in the code of PQprotocolVersion. The only case in which\n> it returns 0 is there's no connection. Yes, you are right. Once the\n> connection has been successfuly established, there's no chance it\n> fails. So I agree with you.\n\nAttached v3 patch addresses this.\n\n>> The \"g\" item in the section describing initialization steps\n>> (i.e. option -I). I'd suggest just to replace \"COPY\" with \"COPY\n>> FREEZE\" in the sentence.\n> \n> Ok. The section is needed to be modified.\n\nThis is also addressed in the patch.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Fri, 19 Mar 2021 13:53:34 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\nHello Tatsuo-san,\n\n>> I have looked in the code of PQprotocolVersion. The only case in which\n>> it returns 0 is there's no connection. Yes, you are right. Once the\n>> connection has been successfuly established, there's no chance it\n>> fails. So I agree with you.\n>\n> Attached v3 patch addresses this.\n>\n>>> The \"g\" item in the section describing initialization steps\n>>> (i.e. option -I). I'd suggest just to replace \"COPY\" with \"COPY\n>>> FREEZE\" in the sentence.\n>>\n>> Ok. The section is needed to be modified.\n>\n> This is also addressed in the patch.\n\nV3 works for me and looks ok. I changed it to ready in the CF app.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 20 Mar 2021 14:35:00 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "Hi Fabien,\n\n> Hello Tatsuo-san,\n> \n>>> I have looked in the code of PQprotocolVersion. The only case in which\n>>> it returns 0 is there's no connection. Yes, you are right. Once the\n>>> connection has been successfuly established, there's no chance it\n>>> fails. So I agree with you.\n>>\n>> Attached v3 patch addresses this.\n>>\n>>>> The \"g\" item in the section describing initialization steps\n>>>> (i.e. option -I). I'd suggest just to replace \"COPY\" with \"COPY\n>>>> FREEZE\" in the sentence.\n>>>\n>>> Ok. The section is needed to be modified.\n>>\n>> This is also addressed in the patch.\n> \n> V3 works for me and looks ok. I changed it to ready in the CF app.\n\nThank you for your review!\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 21 Mar 2021 08:26:48 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": ">> V3 works for me and looks ok. I changed it to ready in the CF app.\n> \n> Thank you for your review!\n\nUnfortunately it seems cfbot is not happy with the patch.\n\n# Failed test 'pgbench scale 1 initialization status (got 1 vs expected 0)'\n# at t/001_pgbench_with_server.pl line 116.\n# Failed test 'pgbench scale 1 initialization stderr /(?^:creating foreign keys)/'\n# at t/001_pgbench_with_server.pl line 116.\n# 'dropping old tables...\n# creating tables...\n# creating 2 partitions...\n# creating primary keys...\n# vacuuming...\n# generating data (client-side)...\n# 100000 of 100000 tuples (100%) done (elapsed 0.02 s, remaining 0.00 s)\n# ERROR: cannot perform COPY FREEZE on a partitioned table\n# pgbench: fatal: PQendcopy failed\n\nI think pgbench needs to check whether partitioning option is\nspecified or not. If specified, pgbench should not use COPY\nFREEZE. Attached v4 patch does this.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 21 Mar 2021 15:10:15 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\n>>> V3 works for me and looks ok. I changed it to ready in the CF app.\n>>\n>> Thank you for your review!\n>\n> Unfortunately it seems cfbot is not happy with the patch.\n\nArgh. Indeed, I did not think of testing on a partitioned table:-( ISTM \nI did \"make check\" in pgbench to trigger TAP tests, but possibly it was in \na dream:-(\n\nThe feature is a little disappointing because if someone has partition \ntables then probably they have a lot of data and probably they would like \nfreeze to work there. Maybe freeze would work on table partitions \nthemselves, but I do not think it is worth the effort to do that.\n\nAbout v4 there is a typo in the doc \"portioning\":\n\n <command>pgbench</command> uses FREEZE option with 14 or later\n version of <productname>PostgreSQL</productname> to speed up\n subsequent <command>VACUUM</command> if portioning option is not\n specified.\n\nI'd suggest:\n\n <command>pgbench</command> uses the FREEZE option with 14 or later\n version of <productname>PostgreSQL</productname> to speed up\n subsequent <command>VACUUM</command>, unless partitions are enabled.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 21 Mar 2021 09:49:43 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> The feature is a little disappointing because if someone has partition\n> tables then probably they have a lot of data and probably they would\n> like freeze to work there. Maybe freeze would work on table partitions\n> themselves, but I do not think it is worth the effort to do that.\n\nAgreed.\n\n> About v4 there is a typo in the doc \"portioning\":\n> \n> <command>pgbench</command> uses FREEZE option with 14 or later\n> version of <productname>PostgreSQL</productname> to speed up\n> subsequent <command>VACUUM</command> if portioning option is not\n> specified.\n> \n> I'd suggest:\n> \n> <command>pgbench</command> uses the FREEZE option with 14 or later\n> version of <productname>PostgreSQL</productname> to speed up\n> subsequent <command>VACUUM</command>, unless partitions are enabled.\n\nThanks for pointing it out. Also I think that in \"with 14 or later\nversion\", \"version\" should be \"versions\".\n\nAttached is the v5 patch.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 21 Mar 2021 18:30:59 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\n> Attached is the v5 patch.\n\nAbout v5: doc gen ok, global and local make check ok.\n\nI did a few tests on my laptop. It seems that copying takes a little more \ntime, say about 10%, but vacuum is indeed very significantly reduced, so \nthat the total time for copying and vacuuming is reduced by 10% \noverall.\n\nSo it is okay for me.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 21 Mar 2021 17:30:57 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> I did a few tests on my laptop. It seems that copying takes a little\n> more time, say about 10%, but vacuum is indeed very significantly\n> reduced, so that the total time for copying and vacuuming is reduced\n> by 10% overall.\n> \n> So it is okay for me.\n\nThanks for the test.\n\nI wrote:\n> Current master:\n> pgbench -i -s 100\n> :\n> :\n> done in 70.78 s (drop tables 0.21 s, create tables 0.02 s, client-side generate 12.42 s, vacuum 51.11 s, primary keys 7.02 s).\n> \n> Using FREEZE:\n> done in 16.86 s (drop tables 0.20 s, create tables 0.01 s, client-side generate 11.86 s, vacuum 0.25 s, primary keys 4.53 s).\n> \n> As you can see total time drops from 70.78 seconds to 16.86 seconds,\n> that is 4.1 times faster. This is mainly because vacuum takes only\n> 0.25 seconds after COPY FREEZE while unpatched pgbench takes 51.11\n> seconds, which is 204 times slower.\n\nI did the same test again.\n\n13.2 pgbench + master branch server:\ndone in 15.47 s (drop tables 0.19 s, create tables 0.01 s, client-side generate 9.07 s, vacuum 2.07 s, primary keys 4.13 s).\n\nWith patch on master branch:\ndone in 13.38 s (drop tables 0.19 s, create tables 0.01 s, client-side generate 9.68 s, vacuum 0.23 s, primary keys 3.27 s).\n\nThis time current pgbench performs much faster than I wrote (15.47 s\nvs. 70.78 s). I don't know why.\n\nAnyway, this time total pgbench time is reduced by 14% overall\nhere. I hope people agree that the patch is worth the gain.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 22 Mar 2021 09:22:54 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\nHello Tatsuo-san,\n\n> 13.2 pgbench + master branch server:\n> done in 15.47 s (drop tables 0.19 s, create tables 0.01 s, client-side generate 9.07 s, vacuum 2.07 s, primary keys 4.13 s).\n>\n> With patch on master branch:\n> done in 13.38 s (drop tables 0.19 s, create tables 0.01 s, client-side generate 9.68 s, vacuum 0.23 s, primary keys 3.27 s).\n\nYes, these are the kind of figures I got on my laptop.\n\n> This time current pgbench performs much faster than I wrote (15.47 s vs. \n> 70.78 s). I don't know why.\n\nDunno.\n\n> Anyway, this time total pgbench time is reduced by 14% overall\n> here. I hope people agree that the patch is worth the gain.\n\nYes, because (1) why not take +10% and (2) it exercises an option.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 22 Mar 2021 08:47:42 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "On Sun, Mar 21, 2021 at 5:23 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> Anyway, this time total pgbench time is reduced by 14% over all\n> here. I hope people agree that the patch is worth the gain.\n\nMost of the time when I run pgbench I use my own shell script, which does this:\n\nPGOPTIONS='-c vacuum_freeze_min_age=0 -c wal_compression=off' pgbench\n-i -s $SCALE\n\nHave you considered this case? In other words, have you considered the\nbenefits of this patch for users that currently deliberately force\nfreezing by VACUUM, just because it matters to their benchmark?\n\n(BTW you might be surprised how much wal_compression=off matters here.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Apr 2021 15:55:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> Most of the time when I run pgbench I use my own shell script, which does this:\n> \n> PGOPTIONS='-c vacuum_freeze_min_age=0 -c wal_compression=off' pgbench\n> -i -s $SCALE\n> \n> Have you considered this case? In other words, have you considered the\n> benefits of this patch for users that currently deliberately force\n> freezing by VACUUM, just because it matters to their benchmark?\n\nI am not sure how many people use this kind of option while running\npgbench -i but we could add yet another switch to fall back to\nnon-FREEZE COPY if you want.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 03 Apr 2021 08:58:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "On Fri, Apr 2, 2021 at 4:58 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> I am not sure how many people use this kind of options while running\n> pgbench -i but we could add yet another switch to fall back to none\n> FREEZE COPY if you want.\n\nI was unclear. What I meant was that your patch isn't just useful\nbecause it speeds up \"pgbench -i\" for everybody. It's also useful\nbecause having all of the tuples already frozen after bulk loading\nseems like a good benchmarking practice, at least most of the time.\n\nThe patch changes the initial state of the database with \"pgbench -i\",\nI think. But that's good.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 2 Apr 2021 18:03:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> I was unclear. What I meant was that your patch isn't just useful\n> because it speeds up \"pgbench -i\" for everybody. It's also useful\n> because having all of the tuples already frozen after bulk loading\n> seems like a good benchmarking practice, at least most of the time.\n> \n> The patch changes the initial state of the database with \"pgbench -i\",\n> I think. But that's good.\n\nOh, ok. Thanks for the explanation!\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 03 Apr 2021 10:34:42 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "After this commit:\nhttps://git.postgresql.org/pg/commitdiff/8e03eb92e9ad54e2f1ed8b5a73617601f6262f81\nI was worried that the benefit of the COPY FREEZE patch was somewhat\nreduced or gone. So I ran a pgbench test again.\n\nCurrent master:\n\n$ pgbench -i -s 100 test\n:\n:\ndone in 20.23 s (drop tables 0.00 s, create tables 0.02 s, client-side generate 13.54 s, vacuum 2.34 s, primary keys 4.33 s).\n\nWith v5 patch:\ndone in 16.92 s (drop tables 0.21 s, create tables 0.01 s, client-side generate 12.68 s, vacuum 0.24 s, primary keys 3.77 s).\n\nSo overall gain by the patch is around 15%, whereas the last test\nbefore the commit was 14%. It seems the patch is still beneficial\nafter the commit.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 04 Jul 2021 11:11:36 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "\nHello Tatsuo-san,\n\n> So overall gain by the patch is around 15%, whereas the last test before \n> the commit was 14%. It seems the patch is still beneficial after the \n> commit.\n\nYes, that's good!\n\nI had a quick look again, and about the comment:\n\n /*\n * If partitioning is not enabled and server version is 14.0 or later, we\n * can take account of freeze option of copy.\n */\n\nI'd suggest instead the shorter:\n\n /* use COPY with FREEZE on v14 and later without partitioning */\n\nOr maybe even to fully drop the comment, because the code is clear enough \nand the doc already says it.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 4 Jul 2021 07:22:12 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "Hi Fabien,\n\n>> So overall gain by the patch is around 15%, whereas the last test\n>> before the commit was 14%. It seems the patch is still beneficial\n>> after the commit.\n> \n> Yes, that's good!\n\nYeah!\n\n> I had a quick look again, and about the comment:\n> \n> /*\n> * If partitioning is not enabled and server version is 14.0 or later, we\n> * can take account of freeze option of copy.\n> */\n> \n> I'd suggest instead the shorter:\n> \n> /* use COPY with FREEZE on v14 and later without partitioning */\n> \n> Or maybe even to fully drop the comment, because the code is clear\n> enough and the doc already says it.\n\nI'd prefer to leave a comment. People (including me) tend to forget\nthings in the future that are obvious now:-)\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 04 Jul 2021 17:31:56 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "On Sun, 4 Jul 2021 at 09:32, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> >> So overall gain by the patch is around 15%, whereas the last test\n> >> before the commit was 14%. It seems the patch is still beneficial\n> >> after the commit.\n> >\n> > Yes, that's good!\n>\n> Yeah!\n>\n\nI tested this with -s100 and got similar results, but with -s1000 it\nwas more like 1.5x faster:\n\nmaster:\ndone in 111.33 s (drop tables 0.00 s, create tables 0.01 s,\nclient-side generate 52.45 s, vacuum 32.30 s, primary keys 26.58 s)\n\npatch:\ndone in 74.04 s (drop tables 0.46 s, create tables 0.04 s, client-side\ngenerate 51.81 s, vacuum 2.11 s, primary keys 19.63 s)\n\nNice!\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 6 Jul 2021 21:10:02 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": "> I tested this with -s100 and got similar results, but with -s1000 it\n> was more like 1.5x faster:\n> \n> master:\n> done in 111.33 s (drop tables 0.00 s, create tables 0.01 s,\n> client-side generate 52.45 s, vacuum 32.30 s, primary keys 26.58 s)\n> \n> patch:\n> done in 74.04 s (drop tables 0.46 s, create tables 0.04 s, client-side\n> generate 51.81 s, vacuum 2.11 s, primary keys 19.63 s)\n> \n> Nice!\n> \n> Regards,\n> Dean\n\nIf there's no objection, I am going to commit/push to master branch in\nearly September.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n\n",
"msg_date": "Mon, 30 Aug 2021 14:11:43 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
},
{
"msg_contents": ">> I tested this with -s100 and got similar results, but with -s1000 it\n>> was more like 1.5x faster:\n>> \n>> master:\n>> done in 111.33 s (drop tables 0.00 s, create tables 0.01 s,\n>> client-side generate 52.45 s, vacuum 32.30 s, primary keys 26.58 s)\n>> \n>> patch:\n>> done in 74.04 s (drop tables 0.46 s, create tables 0.04 s, client-side\n>> generate 51.81 s, vacuum 2.11 s, primary keys 19.63 s)\n>> \n>> Nice!\n>> \n>> Regards,\n>> Dean\n> \n> If there's no objection, I am going to commit/push to master branch in\n> early September.\n\nI have pushed the patch to the master branch.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=06ba4a63b85e5aa47b325c3235c16c05a0b58b96\n\nThank you to those who gave me the valuable reviews!\nReviewed-by: Fabien COELHO, Laurenz Albe, Peter Geoghegan, Dean Rasheed\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Thu, 02 Sep 2021 10:56:57 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Using COPY FREEZE in pgbench"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nAs part of commit 0aa8a0\n<https://github.com/postgres/postgres/commit/0aa8a01d04c8fe200b7a106878eebc3d0af9105c>,\nnew plugin methods (callbacks) were defined for enabling two_phase commits.\n5 callbacks were required:\n* begin_prepare\n* prepare\n* commit_prepared\n* rollback_prepared\n* stream_prepare\n\nand 1 callback was optional:\n* filter_prepare\n\nI don't think stream_prepare should be made a required callback for\nenabling two_phase commits. The stream_prepare callback is required when a\nlogical replication slot is configured both for streaming in-progress\ntransactions and two_phase commits. Plugins can and should be allowed to\ndisallow the combination of streaming and two_phase at the same time, in\nwhich case stream_prepare should be an optional callback.\nAttaching a patch that makes this change. Let me know if you have any\ncomments.\n\nregards,\nAjin Cherian\nFujitsu Australia.",
"msg_date": "Mon, 8 Mar 2021 20:13:07 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make stream_prepare an optional callback"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 2:43 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> As part of commit 0aa8a0 , new plugin methods (callbacks) were defined for enabling two_phase commits.\n> 5 callbacks were required:\n> * begin_prepare\n> * prepare\n> * commit_prepared\n> * rollback_prepared\n> * stream_prepare\n>\n> and 1 callback was optional:\n> * filter_prepare\n>\n> I don't think stream_prepare should be made a required callback for enabling two_phase commits. stream_prepare callback is required when a logical replication slot is configured both for streaming in-progress transactions and two_phase commits. Plugins can and should be allowed to disallow this combination of allowing both streaming and two_phase at the same time. In which case, stream_prepare should be an optional callback.\n>\n\nSounds reasonable to me. I also don't see a reason why we need to make\nthis a necessary callback. Some plugin authors might just want 2PC\nwithout streaming support.\n\nMarkus, others working on logical decoding plugins, do you have any\nopinion on this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 12:10:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
},
{
"msg_contents": "On 09.03.21 07:40, Amit Kapila wrote:\n> Sounds reasonable to me. I also don't see a reason why we need to make\n> this a necessary callback. Some plugin authors might just want 2PC\n> without streaming support.\n\nSounds okay to me. Probably means we'll have to check for this callback \nand always skip the prepare for streamed transactions, w/o even \ntriggering filter_prepare, right? (Because the extension requesting not \nto filter it, but not providing the corresponding callback does not make \nsense.)\n\nIf you're going to put together a patch, Ajin, I'm happy to review.\n\nBest Regards\n\nMarkus\n\n\n",
"msg_date": "Tue, 9 Mar 2021 09:25:43 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 1:55 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 09.03.21 07:40, Amit Kapila wrote:\n> > Sounds reasonable to me. I also don't see a reason why we need to make\n> > this a necessary callback. Some plugin authors might just want 2PC\n> > without streaming support.\n>\n> Sounds okay to me. Probably means we'll have to check for this callback\n> and always skip the prepare for streamed transactions,\n>\n\nI think so. The behavior has to be similar to other optional callbacks\nlike message_cb, truncate_cb, stream_truncate_cb. Basically, we don't\nneed to error out if those callbacks are not provided.\n\n> w/o even\n> triggering filter_prepare, right?\n>\n\nI think the filter check is before we try to send the actual message.\n\n> (Because the extension requesting not\n> to filter it, but not providing the corresponding callback does not make\n> sense.)\n>\n\nThe extension can request two_phase without streaming.\n\n> If you're going to together a patch Ajin, I'm happy to review.\n>\n\nIt is attached with the initial email.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 14:09:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
},
{
"msg_contents": "On 09.03.21 09:39, Amit Kapila wrote:\n > It is attached with the initial email.\n\nOh, sorry, I looked up the initial email, but still didn't see the patch.\n\n> I think so. The behavior has to be similar to other optional callbacks\n> like message_cb, truncate_cb, stream_truncate_cb. Basically, we don't\n> need to error out if those callbacks are not provided.\n\nRight, but the patch proposes to error out. I wonder whether that could \nbe avoided.\n\n> The extension can request two_phase without streaming.\n\nSure. I'm worried about the case both are requested, but filter_prepare \nreturns false, i.e. asking for a streamed prepare without providing the \ncorresponding callback.\n\nI wonder whether Postgres could deny the stream_prepare right away and \nnot even invoke filter_prepare. And instead just skip it because the \noutput plugin did not provide an appropriate callback.\n\nAn error is not as nice, but I'm okay with that as well.\n\nBest Regards\n\nMarkus\n\n\n",
"msg_date": "Tue, 9 Mar 2021 09:53:44 +0100",
"msg_from": "Markus Wanner <markus@bluegap.ch>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 2:23 PM Markus Wanner <markus@bluegap.ch> wrote:\n>\n> On 09.03.21 09:39, Amit Kapila wrote:\n> > It is attached with the initial email.\n>\n> Oh, sorry, I looked up the initial email, but still didn't see the patch.\n>\n> > I think so. The behavior has to be similar to other optional callbacks\n> > like message_cb, truncate_cb, stream_truncate_cb. Basically, we don't\n> > need to error out if those callbacks are not provided.\n>\n> Right, but the patch proposes to error out. I wonder whether that could\n> be avoided.\n>\n\nAFAICS, the error is removed by the patch as per below change:\n\n+ if (ctx->callbacks.stream_prepare_cb == NULL)\n+ return;\n+\n /* Push callback + info on the error context stack */\n state.ctx = ctx;\n state.callback_name = \"stream_prepare\";\n@@ -1340,12 +1343,6 @@ stream_prepare_cb_wrapper(ReorderBuffer *cache,\nReorderBufferTXN *txn,\n ctx->write_xid = txn->xid;\n ctx->write_location = txn->end_lsn;\n\n- /* in streaming mode with two-phase commits, stream_prepare_cb is required */\n- if (ctx->callbacks.stream_prepare_cb == NULL)\n- ereport(ERROR,\n- (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n- errmsg(\"logical streaming at prepare time requires a\nstream_prepare_cb callback\")));\n-\n\n> > The extension can request two_phase without streaming.\n>\n> Sure. I'm worried about the case both are requested, but filter_prepare\n> returns false, i.e. asking for a streamed prepare without providing the\n> corresponding callback.\n>\n\noh, right, in that case, it will skip the stream_prepare even though\nthat is required. I guess in FilterPrepare, we should check if\nrbtxn_is_streamed and stream_prepare_cb is not provided, then we\nreturn true.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 15:07:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
},
{
"msg_contents": "On 09.03.21 10:37, Amit Kapila wrote:\n> AFAICS, the error is removed by the patch as per below change:\n\nAh, well, that does not seem right, then. We cannot just silently \nignore the callback but not skip the prepare, IMO. That would lead to \nthe output plugin missing the PREPARE, but still seeing a COMMIT \nPREPARED for the transaction, potentially missing changes that went out \nwith the prepare, no?\n\n> oh, right, in that case, it will skip the stream_prepare even though\n> that is required. I guess in FilterPrepare, we should check if\n> rbtxn_is_streamed and stream_prepare_cb is not provided, then we\n> return true.\n\nExcept that FilterPrepare doesn't (yet) have access to a \nReorderBufferTXN struct (see the other thread I just started).\n\nMaybe we need to do a ReorderBufferTXNByXid lookup already prior to (or \nas part of) FilterPrepare, then also skip (rather than silently ignore) \nthe prepare if no stream_prepare_cb callback is given (without even \ncalling filter_prepare_cb, because the output plugin has already stated \nit cannot handle that by not providing the corresponding callback).\n\nHowever, I also wonder what's the use case for an output plugin enabling \nstreaming and two-phase commit, but not providing a stream_prepare_cb. \nMaybe the original ERROR is the simpler approach? I.e. making the \nstream_prepare_cb mandatory, if and only if both are enabled (and \nfilter_prepare doesn't skip). (As in the original comment that says: \n\"in streaming mode with two-phase commits, stream_prepare_cb is required\").\n\nI guess I don't quite understand the initial motivation for the patch. \nIt states: \"This allows plugins to not allow the enabling of streaming \nand two_phase at the same time in logical replication.\" That's beyond \nme ... \"allows [..] to not allow\"? Why not, an output plugin can still \nreasonably request both. And that's a good thing, IMO. What problem \ndoes the patch try to solve?\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Tue, 9 Mar 2021 11:11:10 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 3:41 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 09.03.21 10:37, Amit Kapila wrote:\n> I guess I don't quite understand the initial motivation for the patch.\n> It states: \"This allows plugins to not allow the enabling of streaming\n> and two_phase at the same time in logical replication.\" That's beyond\n> me ... \"allows [..] to not allow\"? Why not, an output plugin can still\n> reasonably request both. And that's a good thing, IMO. What problem\n> does the patch try to solve?\n>\n\nAFAIU, Ajin doesn't want to mandate streaming with two_pc option. But,\nmaybe you are right that it doesn't make sense for the user to provide\nboth options but doesn't provide stream_prepare callback, and giving\nan error in such a case should be fine. I think if we have to follow\nAjin's idea then we need to skip 2PC in such a case (both prepare and\ncommit prepare) and make this a regular transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 16:04:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make stream_prepare an optional callback"
}
] |
[
{
"msg_contents": "Thanks Daniel for the input / next-steps.\n\nI see that 'master' too has this same magic constant [1] and so I expect it\nto have similar restrictions, although I haven't tested this yet.\n\nI do agree that the need then is to re-submit a patch that works with\n'master'. (I am travelling the next few days but) Unless discussions go\ntangential, I expect to revert with an updated patch by the end of this week\nand create a commitfest entry while at it.\n\nReference:\n1)\nhttps://github.com/postgres/postgres/blob/master/src/bin/pg_resetwal/pg_rese\ntwal.c#L444\n\n-\nRobins Tharakan\n\n> -----Original Message-----\n> From: Daniel Gustafsson <daniel@yesql.se>\n> Sent: Monday, 8 March 2021 9:42 AM\n> To: Tharakan, Robins <tharar@amazon.com>\n> Cc: pgsql-hackers@postgresql.org\n> Subject: RE: [EXTERNAL] pg_upgrade failing for 200+ million Large Objects\n> \n> CAUTION: This email originated from outside of the organization. Do not\n> click links or open attachments unless you can confirm the sender and\n> know the content is safe.\n> \n> \n> \n> > On 7 Mar 2021, at 09:43, Tharakan, Robins <tharar@amazon.com> wrote:\n> \n> > The patch (attached):\n> > - Applies cleanly on REL9_6_STABLE -\n> > c7a4fc3dd001646d5938687ad59ab84545d5d043\n> \n> Did you target 9.6 because that's where you want to upgrade to, or is\n> this not a problem on HEAD? If it's still a problem on HEAD you should\n> probably submit the patch against there. 
You probably also want to add\n> it to the next commit fest to make sure it's not forgotten about:\n> https://commitfest.postgresql.org/33/\n> \n> > I am not married to the patch (especially the argument name) but\n> > ideally I'd prefer a way to get this upgrade going without a patch.\n> > For now, I am unable to find any other way to upgrade a v9.5 Postgres\n> > database in this scenario, facing End-of-Life.\n> \n> It's obviously not my call to make in any shape or form, but this doesn't\n> really seem to fall under what is generally backported into a stable\n> release?\n> \n> --\n> Daniel Gustafsson https://vmware.com/",
"msg_date": "Mon, 8 Mar 2021 11:00:43 +0000",
"msg_from": "\"Tharakan, Robins\" <tharar@amazon.com>",
"msg_from_op": true,
"msg_subject": "RE: pg_upgrade failing for 200+ million Large Objects"
}
] |
[
{
"msg_contents": "Thanks Peter.\n\nThe original email [1] had some more context that somehow didn't get\nassociated with this recent email. Apologies for any confusion.\n\nIn short, pg_resetxlog (and pg_resetwal) employs a magic constant [2] (for\nboth v9.6 as well as master) which seems to have been selected to force an\naggressive autovacuum as soon as the upgrade completes. Although that works\nas planned, it narrows the window of Transaction IDs available for the\nupgrade (before which XID wraparound protection kicks and aborts the\nupgrade) to 146 Million.\n\nReducing this magic constant allows a larger XID window, which is what the\npatch is trying to do. With the patch, I was able to upgrade a cluster with\n500m Large Objects successfully (which otherwise reliably fails). In the\noriginal email [1] I had also listed a few other possible workarounds, but\nwas unsure which would be a good direction to start working on.... thus this\npatch to make a start.\n\nReference:\n1) https://www.postgresql.org/message-id/12601596dbbc4c01b86b4ac4d2bd4d48%40\nEX13D05UWC001.ant.amazon.com\n2) https://github.com/postgres/postgres/blob/master/src/bin/pg_resetwal/pg_r\nesetwal.c#L444\n\n-\nrobins | tharar@ | syd12\n\n\n> -----Original Message-----\n> From: Peter Eisentraut <peter.eisentraut@enterprisedb.com>\n> Sent: Monday, 8 March 2021 9:25 PM\n> To: Tharakan, Robins <tharar@amazon.com>; pgsql-hackers@postgresql.org\n> Subject: [EXTERNAL] [UNVERIFIED SENDER] Re: pg_upgrade failing for 200+\n> million Large Objects\n> \n> CAUTION: This email originated from outside of the organization. 
Do not\n> click links or open attachments unless you can confirm the sender and\n> know the content is safe.\n> \n> \n> \n> On 07.03.21 09:43, Tharakan, Robins wrote:\n> > Attached is a proof-of-concept patch that allows Postgres to perform\n> > pg_upgrade if the instance has Millions of objects.\n> >\n> > It would be great if someone could take a look and see if this patch\n> > is in the right direction. There are some pending tasks (such as\n> > documentation / pg_resetxlog vs pg_resetwal related changes) but for\n> > now, the patch helps remove a stalemate where if a Postgres instance\n> > has a large number (accurately speaking 146+ Million) of Large\n> > Objects, pg_upgrade fails. This is easily reproducible and besides\n> > deleting Large Objects before upgrade, there is no other (apparent) way\n> for pg_upgrade to complete.\n> >\n> > The patch (attached):\n> > - Applies cleanly on REL9_6_STABLE -\n> > c7a4fc3dd001646d5938687ad59ab84545d5d043\n> > - 'make check' passes\n> > - Allows the user to provide a constant via pg_upgrade command-line,\n> > that overrides the 2 billion constant in pg_resetxlog [1] thereby\n> > increasing the (window of) Transaction IDs available for pg_upgrade to\n> complete.\n> \n> Could you explain what your analysis of the problem is and why this patch\n> (might) fix it?\n> \n> Right now, all I see here is, pass a big number via a command-line option\n> and hope it works.",
"msg_date": "Mon, 8 Mar 2021 11:02:04 +0000",
"msg_from": "\"Tharakan, Robins\" <tharar@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 12:02 PM Tharakan, Robins <tharar@amazon.com> wrote:\n>\n> Thanks Peter.\n>\n> The original email [1] had some more context that somehow didn't get\n> associated with this recent email. Apologies for any confusion.\n\nPlease take a look at your email configuration -- all your emails are\nlacking both References and In-reply-to headers, so every email starts\na new thread, both for each reader and in the archives. It seems quite\nbroken. It makes it very hard to follow.\n\n\n> In short, pg_resetxlog (and pg_resetwal) employs a magic constant [2] (for\n> both v9.6 as well as master) which seems to have been selected to force an\n> aggressive autovacuum as soon as the upgrade completes. Although that works\n> as planned, it narrows the window of Transaction IDs available for the\n> upgrade (before which XID wraparound protection kicks and aborts the\n> upgrade) to 146 Million.\n>\n> Reducing this magic constant allows a larger XID window, which is what the\n> patch is trying to do. With the patch, I was able to upgrade a cluster with\n> 500m Large Objects successfully (which otherwise reliably fails). In the\n> original email [1] I had also listed a few other possible workarounds, but\n> was unsure which would be a good direction to start working on.... thus this\n> patch to make a start.\n\nThis still seems to just fix the symptoms and not the actual problem.\n\nWhat part of the pg_upgrade process is it that actually burns through\nthat many transactions?\n\nWithout looking, I would guess it's the schema reload using\npg_dump/pg_restore and not actually pg_upgrade itself. This is a known\nissue in pg_dump/pg_restore. 
And if that is the case -- perhaps just\nrunning all of those in a single transaction would be a better choice?\nOne could argue it's still not a proper fix, because we'd still have a\nhuge memory usage etc, but it would then only burn 1 xid instead of\n500M...\n\nAFAICT at a quick check, pg_dump in binary upgrade mode emits one\nlo_create() and one ALTER ... OWNER TO for each large object - so with\n500M large objects that would be a billion statements, and thus a\nbillion xids. And without checking, I'm fairly sure it doesn't load in\na single transaction...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 8 Mar 2021 13:33:58 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Hi Magnus,\n\nOn Mon, 8 Mar 2021 at 23:34, Magnus Hagander <magnus@hagander.net> wrote:\n\n> AFAICT at a quick check, pg_dump in binary upgrade mode emits one\n\nlo_create() and one ALTER ... OWNER TO for each large object - so with\n> 500M large objects that would be a billion statements, and thus a\n> billion xids. And without checking, I'm fairly sure it doesn't load in\n> a single transaction...\n>\n\nYour assumptions are pretty much correct.\n\nThe issue isn't with pg_upgrade itself. During pg_restore, each Large\nObject (and separately each ALTER LARGE OBJECT OWNER TO) consumes an XID\neach. For background, that's the reason the v9.5 production instance I was\nreviewing, was unable to process more than 73 Million large objects since\neach object required a CREATE + ALTER. (To clarify, 73 million = (2^31 - 2\nbillion magic constant - 1 Million wraparound protection) / 2)\n\n\nWithout looking, I would guess it's the schema reload using\n> pg_dump/pg_restore and not actually pg_upgrade itself. This is a known\n> issue in pg_dump/pg_restore. And if that is the case -- perhaps just\n> running all of those in a single transaction would be a better choice?\n> One could argue it's still not a proper fix, because we'd still have a\n> huge memory usage etc, but it would then only burn 1 xid instead of\n> 500M...\n>\n(I hope I am not missing something but) When I tried to force pg_restore to\nuse a single transaction (by hacking pg_upgrade's pg_restore call to use\n--single-transaction), it too failed owing to being unable to lock so many\nobjects in a single transaction.\n\n\nThis still seems to just fix the symptoms and not the actual problem.\n>\n\nI agree that the patch doesn't address the root-cause, but it did get the\nupgrade to complete on a test-setup. 
Do you think that (instead of all\nobjects) batching multiple Large Objects in a single transaction (and\nallowing the caller to size that batch via command line) would be a good /\nacceptable idea here?\n\nPlease take a look at your email configuration -- all your emails are\n> lacking both References and In-reply-to headers.\n>\n\nThanks for highlighting the cause here. Hopefully switching mail clients\nwould help.\n-\nRobins Tharakan\n\nHi Magnus,On Mon, 8 Mar 2021 at 23:34, Magnus Hagander <magnus@hagander.net> wrote:AFAICT at a quick check, pg_dump in binary upgrade mode emits one\nlo_create() and one ALTER ... OWNER TO for each large object - so with\n500M large objects that would be a billion statements, and thus a\nbillion xids. And without checking, I'm fairly sure it doesn't load in\na single transaction...Your assumptions are pretty much correct.The issue isn't with pg_upgrade itself. During pg_restore, each Large Object (and separately each ALTER LARGE OBJECT OWNER TO) consumes an XID each. For background, that's the reason the v9.5 production instance I was reviewing, was unable to process more than 73 Million large objects since each object required a CREATE + ALTER. (To clarify, 73 million = (2^31 - 2 billion magic constant - 1 Million wraparound protection) / 2)Without looking, I would guess it's the schema reload using\npg_dump/pg_restore and not actually pg_upgrade itself. This is a known\nissue in pg_dump/pg_restore. 
And if that is the case -- perhaps just\nrunning all of those in a single transaction would be a better choice?\nOne could argue it's still not a proper fix, because we'd still have a\nhuge memory usage etc, but it would then only burn 1 xid instead of\n500M...(I hope I am not missing something but) When I tried to force pg_restore to use a single transaction (by hacking pg_upgrade's pg_restore call to use --single-transaction), it too failed owing to being unable to lock so many objects in a single transaction.This still seems to just fix the symptoms and not the actual problem.I agree that the patch doesn't address the root-cause, but it did get the upgrade to complete on a test-setup. Do you think that (instead of all objects) batching multiple Large Objects in a single transaction (and allowing the caller to size that batch via command line) would be a good / acceptable idea here?Please take a look at your email configuration -- all your emails are\nlacking both References and In-reply-to headers.Thanks for highlighting the cause here. Hopefully switching mail clients would help.-Robins Tharakan",
"msg_date": "Tue, 9 Mar 2021 01:13:02 +1100",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Robins Tharakan <tharakan@gmail.com> writes:\n> On Mon, 8 Mar 2021 at 23:34, Magnus Hagander <magnus@hagander.net> wrote:\n>> Without looking, I would guess it's the schema reload using\n>> pg_dump/pg_restore and not actually pg_upgrade itself. This is a known\n>> issue in pg_dump/pg_restore. And if that is the case -- perhaps just\n>> running all of those in a single transaction would be a better choice?\n>> One could argue it's still not a proper fix, because we'd still have a\n>> huge memory usage etc, but it would then only burn 1 xid instead of\n>> 500M...\n\n> (I hope I am not missing something but) When I tried to force pg_restore to\n> use a single transaction (by hacking pg_upgrade's pg_restore call to use\n> --single-transaction), it too failed owing to being unable to lock so many\n> objects in a single transaction.\n\nIt does seem that --single-transaction is a better idea than fiddling with\nthe transaction wraparound parameters, since the latter is just going to\nput off the onset of trouble. However, we'd have to do something about\nthe lock consumption. Would it be sane to have the backend not bother to\ntake any locks in binary-upgrade mode?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Mar 2021 11:33:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robins Tharakan <tharakan@gmail.com> writes:\n> > On Mon, 8 Mar 2021 at 23:34, Magnus Hagander <magnus@hagander.net> wrote:\n> >> Without looking, I would guess it's the schema reload using\n> >> pg_dump/pg_restore and not actually pg_upgrade itself. This is a known\n> >> issue in pg_dump/pg_restore. And if that is the case -- perhaps just\n> >> running all of those in a single transaction would be a better choice?\n> >> One could argue it's still not a proper fix, because we'd still have a\n> >> huge memory usage etc, but it would then only burn 1 xid instead of\n> >> 500M...\n>\n> > (I hope I am not missing something but) When I tried to force pg_restore to\n> > use a single transaction (by hacking pg_upgrade's pg_restore call to use\n> > --single-transaction), it too failed owing to being unable to lock so many\n> > objects in a single transaction.\n>\n> It does seem that --single-transaction is a better idea than fiddling with\n> the transaction wraparound parameters, since the latter is just going to\n> put off the onset of trouble. However, we'd have to do something about\n> the lock consumption. Would it be sane to have the backend not bother to\n> take any locks in binary-upgrade mode?\n\nI believe the problem occurs when writing them rather than when\nreading them, and I don't think we have a binary upgrade mode there.\n\nWe could invent one of course. Another option might be to exclusively\nlock pg_largeobject, and just say that if you do that, we don't have\nto lock the individual objects (ever)?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 8 Mar 2021 17:35:56 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Mon, Mar 8, 2021 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It does seem that --single-transaction is a better idea than fiddling with\n>> the transaction wraparound parameters, since the latter is just going to\n>> put off the onset of trouble. However, we'd have to do something about\n>> the lock consumption. Would it be sane to have the backend not bother to\n>> take any locks in binary-upgrade mode?\n\n> I believe the problem occurs when writing them rather than when\n> reading them, and I don't think we have a binary upgrade mode there.\n\nYou're confusing pg_dump's --binary-upgrade switch (indeed applied on\nthe dumping side) with the backend's -b switch (IsBinaryUpgrade,\napplied on the restoring side).\n\n> We could invent one of course. Another option might be to exclusively\n> lock pg_largeobject, and just say that if you do that, we don't have\n> to lock the individual objects (ever)?\n\nWhat was in the back of my mind is that we've sometimes seen complaints\nabout too many locks needed to dump or restore a database with $MANY\ntables; so the large-object case seems like just a special case.\n\nThe answer up to now has been \"raise max_locks_per_transaction enough\nso you don't see the failure\". Having now consumed a little more\ncaffeine, I remember that that works in pg_upgrade scenarios too,\nsince the user can fiddle with the target cluster's postgresql.conf\nbefore starting pg_upgrade.\n\nSo it seems like the path of least resistance is\n\n(a) make pg_upgrade use --single-transaction when calling pg_restore\n\n(b) document (better) how to get around too-many-locks failures.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Mar 2021 11:58:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Mon, Mar 8, 2021 at 5:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Mon, Mar 8, 2021 at 5:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It does seem that --single-transaction is a better idea than fiddling with\n> >> the transaction wraparound parameters, since the latter is just going to\n> >> put off the onset of trouble. However, we'd have to do something about\n> >> the lock consumption. Would it be sane to have the backend not bother to\n> >> take any locks in binary-upgrade mode?\n>\n> > I believe the problem occurs when writing them rather than when\n> > reading them, and I don't think we have a binary upgrade mode there.\n>\n> You're confusing pg_dump's --binary-upgrade switch (indeed applied on\n> the dumping side) with the backend's -b switch (IsBinaryUpgrade,\n> applied on the restoring side).\n\nAh. Yes, I am.\n\n\n> > We could invent one of course. Another option might be to exclusively\n> > lock pg_largeobject, and just say that if you do that, we don't have\n> > to lock the individual objects (ever)?\n>\n> What was in the back of my mind is that we've sometimes seen complaints\n> about too many locks needed to dump or restore a database with $MANY\n> tables; so the large-object case seems like just a special case.\n\nIt is -- but I guess it's more likely to have 100M large objects than\nto have 100M tables. (and the cutoff point comes a lot earlier than\n100M). But the fundamental onei s the same.\n\n\n> The answer up to now has been \"raise max_locks_per_transaction enough\n> so you don't see the failure\". 
Having now consumed a little more\n> caffeine, I remember that that works in pg_upgrade scenarios too,\n> since the user can fiddle with the target cluster's postgresql.conf\n> before starting pg_upgrade.\n>\n> So it seems like the path of least resistance is\n>\n> (a) make pg_upgrade use --single-transaction when calling pg_restore\n>\n> (b) document (better) how to get around too-many-locks failures.\n\nAgreed. Certainly seems like a better path forward than arbitrarily\npushing the limit on number of transactions which just postpones the\nproblem.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 8 Mar 2021 18:18:12 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/8/21 11:58 AM, Tom Lane wrote:\n> The answer up to now has been \"raise max_locks_per_transaction enough\n> so you don't see the failure\". Having now consumed a little more\n> caffeine, I remember that that works in pg_upgrade scenarios too,\n> since the user can fiddle with the target cluster's postgresql.conf\n> before starting pg_upgrade.\n> \n> So it seems like the path of least resistance is\n> \n> (a) make pg_upgrade use --single-transaction when calling pg_restore\n> \n> (b) document (better) how to get around too-many-locks failures.\n\nThat would first require to fix how pg_upgrade is creating the \ndatabases. It uses \"pg_restore --create\", which is mutually exclusive \nwith --single-transaction because we cannot create a database inside of \na transaction. On the way pg_upgrade also mangles the pg_database.datdba \n(all databases are owned by postgres after an upgrade; will submit a \nseparate patch for that as I consider that a bug by itself).\n\nAll that aside, the entire approach doesn't scale.\n\nIn a hacked up pg_upgrade that does \"createdb\" first before calling \npg_upgrade with --single-transaction. I can upgrade 1M large objects with\n max_locks_per_transaction = 5300\n max_connectinons=100\nwhich contradicts the docs. Need to find out where that math went off \nthe rails because that config should only have room for 530,000 locks, \nnot 1M. The same test fails with max_locks_per_transaction = 5200.\n\nBut this would mean that one has to modify the postgresql.conf to \nsomething like 530,000 max_locks_per_transaction at 100 max_connections \nin order to actually run a successful upgrade of 100M large objects. \nThis config requires 26GB of memory just for locks. Add to that the \nmemory pg_restore needs to load the entire TOC before even restoring a \nsingle object.\n\nNot going to work. But tests are still ongoing ...\n\n\nRegards, Jan\n\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Sat, 20 Mar 2021 00:39:10 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "\nOn 3/20/21 12:39 AM, Jan Wieck wrote:\n> On 3/8/21 11:58 AM, Tom Lane wrote:\n>> The answer up to now has been \"raise max_locks_per_transaction enough\n>> so you don't see the failure\". Having now consumed a little more\n>> caffeine, I remember that that works in pg_upgrade scenarios too,\n>> since the user can fiddle with the target cluster's postgresql.conf\n>> before starting pg_upgrade.\n>>\n>> So it seems like the path of least resistance is\n>>\n>> (a) make pg_upgrade use --single-transaction when calling pg_restore\n>>\n>> (b) document (better) how to get around too-many-locks failures.\n>\n> That would first require to fix how pg_upgrade is creating the\n> databases. It uses \"pg_restore --create\", which is mutually exclusive\n> with --single-transaction because we cannot create a database inside\n> of a transaction. On the way pg_upgrade also mangles the\n> pg_database.datdba (all databases are owned by postgres after an\n> upgrade; will submit a separate patch for that as I consider that a\n> bug by itself).\n>\n> All that aside, the entire approach doesn't scale.\n>\n> In a hacked up pg_upgrade that does \"createdb\" first before calling\n> pg_upgrade with --single-transaction. I can upgrade 1M large objects with\n> max_locks_per_transaction = 5300\n> max_connectinons=100\n> which contradicts the docs. Need to find out where that math went off\n> the rails because that config should only have room for 530,000 locks,\n> not 1M. The same test fails with max_locks_per_transaction = 5200.\n>\n> But this would mean that one has to modify the postgresql.conf to\n> something like 530,000 max_locks_per_transaction at 100\n> max_connections in order to actually run a successful upgrade of 100M\n> large objects. This config requires 26GB of memory just for locks. Add\n> to that the memory pg_restore needs to load the entire TOC before even\n> restoring a single object.\n>\n> Not going to work. 
But tests are still ongoing ...\n\n\n\nI thought Tom's suggestion upthread:\n\n\n> Would it be sane to have the backend not bother to\n> take any locks in binary-upgrade mode?\n\n\nwas interesting. Could we do that on the restore side? After all, what\nare we locking against in binary upgrade mode?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 20 Mar 2021 11:17:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> On 3/8/21 11:58 AM, Tom Lane wrote:\n>> So it seems like the path of least resistance is\n>> (a) make pg_upgrade use --single-transaction when calling pg_restore\n>> (b) document (better) how to get around too-many-locks failures.\n\n> That would first require to fix how pg_upgrade is creating the \n> databases. It uses \"pg_restore --create\", which is mutually exclusive \n> with --single-transaction because we cannot create a database inside of \n> a transaction.\n\nUgh.\n\n> All that aside, the entire approach doesn't scale.\n\nYeah, agreed. When we gave large objects individual ownership and ACL\ninfo, it was argued that pg_dump could afford to treat each one as a\nseparate TOC entry because \"you wouldn't have that many of them, if\nthey're large\". The limits of that approach were obvious even at the\ntime, and I think now we're starting to see people for whom it really\ndoesn't work.\n\nI wonder if pg_dump could improve matters cheaply by aggregating the\nlarge objects by owner and ACL contents. That is, do\n\nselect distinct lomowner, lomacl from pg_largeobject_metadata;\n\nand make just *one* BLOB TOC entry for each result. Then dump out\nall the matching blobs under that heading.\n\nA possible objection is that it'd reduce the ability to restore blobs\nselectively, so maybe we'd need to make it optional.\n\nOf course, that just reduces the memory consumption on the client\nside; it does nothing for the locks. Can we get away with releasing the\nlock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Mar 2021 11:23:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
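[Editor's illustration] The per-(owner, ACL) aggregation Tom suggests can be sketched client-side. A minimal sketch in Python (pg_dump itself is C; the tuples below are invented sample values, not real pg_largeobject_metadata rows):

```python
from collections import defaultdict

def group_blobs(blobs):
    """Build one logical BLOB TOC entry per distinct (owner, acl) pair,
    instead of one TOC entry per large object."""
    entries = defaultdict(list)
    for oid, owner, acl in blobs:
        entries[(owner, acl)].append(oid)
    return entries

# Hypothetical sample: three blobs, two distinct (owner, acl) pairs.
blobs = [
    (16385, "alice", "{alice=rw/alice}"),
    (16386, "alice", "{alice=rw/alice}"),
    (16387, "bob", "{bob=rw/bob}"),
]
entries = group_blobs(blobs)
assert len(entries) == 2  # two TOC entries instead of three
```

With millions of blobs owned by a handful of roles, the TOC shrinks from millions of entries to a few, at the cost of per-blob selective restore, as noted above.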
{
"msg_contents": "On Sat, Mar 20, 2021 at 11:23:19AM -0400, Tom Lane wrote:\n> I wonder if pg_dump could improve matters cheaply by aggregating the\n> large objects by owner and ACL contents. That is, do\n> \n> select distinct lomowner, lomacl from pg_largeobject_metadata;\n> \n> and make just *one* BLOB TOC entry for each result. Then dump out\n> all the matching blobs under that heading.\n> \n> A possible objection is that it'd reduce the ability to restore blobs\n> selectively, so maybe we'd need to make it optional.\n> \n> Of course, that just reduces the memory consumption on the client\n> side; it does nothing for the locks. Can we get away with releasing the\n> lock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?\n\nWell, in pg_upgrade mode you can, since there are no other cluster\nusers, but you might be asking for general pg_dump usage.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 20 Mar 2021 12:45:36 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> On Sat, Mar 20, 2021 at 11:23:19AM -0400, Tom Lane wrote:\n>> Of course, that just reduces the memory consumption on the client\n>> side; it does nothing for the locks. Can we get away with releasing the\n>> lock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?\n\n> Well, in pg_upgrade mode you can, since there are no other cluster\n> users, but you might be asking for general pg_dump usage.\n\nYeah, this problem doesn't only affect pg_upgrade scenarios, so it'd\nreally be better to find a way that isn't dependent on binary-upgrade\nmode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Mar 2021 12:53:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/20/21 11:23 AM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> All that aside, the entire approach doesn't scale.\n> \n> Yeah, agreed. When we gave large objects individual ownership and ACL\n> info, it was argued that pg_dump could afford to treat each one as a\n> separate TOC entry because \"you wouldn't have that many of them, if\n> they're large\". The limits of that approach were obvious even at the\n> time, and I think now we're starting to see people for whom it really\n> doesn't work.\n\nIt actually looks more like some users have millions of \"small objects\". \nI am still wondering where that is coming from and why they are abusing \nLOs in that way, but that is more out of curiosity. Fact is that they \nare out there and that they cannot upgrade from their 9.5 databases, \nwhich are now past EOL.\n\n> \n> I wonder if pg_dump could improve matters cheaply by aggregating the\n> large objects by owner and ACL contents. That is, do\n> \n> select distinct lomowner, lomacl from pg_largeobject_metadata;\n> \n> and make just *one* BLOB TOC entry for each result. Then dump out\n> all the matching blobs under that heading.\n\nWhat I am currently experimenting with is moving the BLOB TOC entries \ninto the parallel data phase of pg_restore \"when doing binary upgrade\". \nIt seems to scale nicely with the number of cores in the system. In \naddition to that I have options for pg_upgrade and pg_restore that cause \nthe restore to batch them into transactions, like 10,000 objects at a \ntime. There was a separate thread for that but I guess it is better to \nkeep it all together here now.\n\n> \n> A possible objection is that it'd reduce the ability to restore blobs\n> selectively, so maybe we'd need to make it optional.\n\nI fully intend to make all this into new \"options\". I am afraid that \nthere is no one-size-fits-all solution here.\n> \n> Of course, that just reduces the memory consumption on the client\n> side; it does nothing for the locks. Can we get away with releasing the\n> lock immediately after doing an ALTER OWNER or GRANT/REVOKE on a blob?\n\nI'm not very fond of the idea of going lockless when at the same time \ntrying to parallelize the restore phase. That can lead to really nasty \nrace conditions. For now I'm aiming at batches in transactions.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Sat, 20 Mar 2021 12:55:24 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "\nOn 3/20/21 12:55 PM, Jan Wieck wrote:\n> On 3/20/21 11:23 AM, Tom Lane wrote:\n>> Jan Wieck <jan@wi3ck.info> writes:\n>>> All that aside, the entire approach doesn't scale.\n>>\n>> Yeah, agreed. When we gave large objects individual ownership and ACL\n>> info, it was argued that pg_dump could afford to treat each one as a\n>> separate TOC entry because \"you wouldn't have that many of them, if\n>> they're large\". The limits of that approach were obvious even at the\n>> time, and I think now we're starting to see people for whom it really\n>> doesn't work.\n>\n> It actually looks more like some users have millions of \"small\n> objects\". I am still wondering where that is coming from and why they\n> are abusing LOs in that way, but that is more out of curiosity. Fact\n> is that they are out there and that they cannot upgrade from their 9.5\n> databases, which are now past EOL.\n>\n\nOne possible (probable?) source is the JDBC driver, which currently\ntreats all Blobs (and Clobs, for that matter) as LOs. I'm working on\nimproving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Mar 2021 07:47:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/20/21 12:39 AM, Jan Wieck wrote:\n> On the way pg_upgrade also mangles the pg_database.datdba\n> (all databases are owned by postgres after an upgrade; will submit a\n> separate patch for that as I consider that a bug by itself).\n\nPatch attached.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services",
"msg_date": "Sun, 21 Mar 2021 12:50:46 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Fix pg_upgrade to preserve datdba (was: Re: pg_upgrade failing for\n 200+ million Large Objects)"
},
{
"msg_contents": "On 3/21/21 7:47 AM, Andrew Dunstan wrote:\n> One possible (probable?) source is the JDBC driver, which currently\n> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on\n> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>\n\nYou mean the user is using OID columns pointing to large objects and the \nJDBC driver is mapping those for streaming operations?\n\nYeah, that would explain a lot.\n\n\nThanks, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Sun, 21 Mar 2021 12:56:02 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> On 3/20/21 12:39 AM, Jan Wieck wrote:\n>> On the way pg_upgrade also mangles the pg_database.datdba\n>> (all databases are owned by postgres after an upgrade; will submit a\n>> separate patch for that as I consider that a bug by itself).\n\n> Patch attached.\n\nHmm, doesn't this lose all *other* database-level properties?\n\nI think maybe what we have here is a bug in pg_restore, its\n--create switch ought to be trying to update the database's\nownership.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Mar 2021 12:57:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba (was: Re: pg_upgrade failing\n for 200+ million Large Objects)"
},
{
"msg_contents": "On 3/21/21 12:57 PM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> On 3/20/21 12:39 AM, Jan Wieck wrote:\n>>> On the way pg_upgrade also mangles the pg_database.datdba\n>>> (all databases are owned by postgres after an upgrade; will submit a\n>>> separate patch for that as I consider that a bug by itself).\n> \n>> Patch attached.\n> \n> Hmm, doesn't this lose all *other* database-level properties?\n> \n> I think maybe what we have here is a bug in pg_restore, its\n> --create switch ought to be trying to update the database's\n> ownership.\n\nPossibly. I didn't look into that route.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Sun, 21 Mar 2021 13:15:54 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
{
"msg_contents": "On 3/21/21 1:15 PM, Jan Wieck wrote:\n> On 3/21/21 12:57 PM, Tom Lane wrote:\n>> Jan Wieck <jan@wi3ck.info> writes:\n>>> On 3/20/21 12:39 AM, Jan Wieck wrote:\n>>>> On the way pg_upgrade also mangles the pg_database.datdba\n>>>> (all databases are owned by postgres after an upgrade; will submit a\n>>>> separate patch for that as I consider that a bug by itself).\n>> \n>>> Patch attached.\n>> \n>> Hmm, doesn't this lose all *other* database-level properties?\n>> \n>> I think maybe what we have here is a bug in pg_restore, its\n>> --create switch ought to be trying to update the database's\n>> ownership.\n> \n> Possibly. I didn't look into that route.\n\nThanks for that. I like this patch a lot better.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services",
"msg_date": "Sun, 21 Mar 2021 13:50:45 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
{
"msg_contents": "\nOn 3/21/21 12:56 PM, Jan Wieck wrote:\n> On 3/21/21 7:47 AM, Andrew Dunstan wrote:\n>> One possible (probable?) source is the JDBC driver, which currently\n>> treats all Blobs (and Clobs, for that matter) as LOs. I'm working on\n>> improving that some: <https://github.com/pgjdbc/pgjdbc/pull/2093>\n>\n> You mean the user is using OID columns pointing to large objects and\n> the JDBC driver is mapping those for streaming operations?\n>\n> Yeah, that would explain a lot.\n>\n>\n>\n\n\nProbably in most cases the database is designed by Hibernate, and the\nfront end programmers know nothing at all of Oids or LOs, they just ask\nfor and get a Blob.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 21 Mar 2021 14:18:59 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n>> On 3/21/21 12:57 PM, Tom Lane wrote:\n>>> I think maybe what we have here is a bug in pg_restore, its\n>>> --create switch ought to be trying to update the database's\n>>> ownership.\n\n> Thanks for that. I like this patch a lot better.\n\nNeeds a little more work than that --- we should allow it to respond\nto the --no-owner switch, for example. But I think likely we can do\nit where other object ownership is handled. I'll look in a bit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Mar 2021 14:23:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
{
"msg_contents": "I wrote:\n> Needs a little more work than that --- we should allow it to respond\n> to the --no-owner switch, for example. But I think likely we can do\n> it where other object ownership is handled. I'll look in a bit.\n\nActually ... said code already DOES do that, so now I'm confused.\nI tried\n\nregression=# create user joe;\nCREATE ROLE\nregression=# create database joe owner joe;\nCREATE DATABASE\nregression=# \\q\n$ pg_dump -Fc joe >joe.dump\n$ pg_restore --create -f - joe.dump | more\n\nand I see\n\n--\n-- Name: joe; Type: DATABASE; Schema: -; Owner: joe\n--\n\nCREATE DATABASE joe WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE = 'C';\n\n\nALTER DATABASE joe OWNER TO joe;\n\nso at least in this case it's doing the right thing. We need a bit\nmore detail about the context in which it's doing the wrong thing\nfor you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Mar 2021 14:34:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
{
"msg_contents": "I wrote:\n> ... so at least in this case it's doing the right thing. We need a bit\n> more detail about the context in which it's doing the wrong thing\n> for you.\n\nJust to cross-check, I tried modifying pg_upgrade's regression test\nas attached, and it still passes. (And inspection of the leftover\ndump2.sql file verifies that the database ownership was correct.)\nSo I'm not sure what's up here.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 21 Mar 2021 15:29:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
{
"msg_contents": "On 3/21/21 2:34 PM, Tom Lane wrote:\n> and I see\n> \n> --\n> -- Name: joe; Type: DATABASE; Schema: -; Owner: joe\n> --\n> \n> CREATE DATABASE joe WITH TEMPLATE = template0 ENCODING = 'SQL_ASCII' LOCALE = 'C';\n> \n> \n> ALTER DATABASE joe OWNER TO joe;\n> \n> so at least in this case it's doing the right thing. We need a bit\n> more detail about the context in which it's doing the wrong thing\n> for you.\n\nAfter moving all of this to a pristine postgresql.org based repo I see \nthe same. My best guess at this point is that the permission hoops that \nRDS and Aurora PostgreSQL are jumping through were messing with this. \nBut that has nothing to do with the actual topic.\n\nSo let's focus on the actual problem of running out of XIDs and memory \nwhile doing the upgrade involving millions of small large objects.\n\n\nRegards, Jan\n\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Sun, 21 Mar 2021 15:36:51 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> So let's focus on the actual problem of running out of XIDs and memory \n> while doing the upgrade involving millions of small large objects.\n\nRight. So as far as --single-transaction vs. --create goes, that's\nmostly a definitional problem. As long as the contents of a DB are\nrestored in one transaction, it's not gonna matter if we eat one or\ntwo more XIDs while creating the DB itself. So we could either\nrelax pg_restore's complaint, or invent a different switch that's\nnamed to acknowledge that it's not really only one transaction.\n\nThat still leaves us with the lots-o-locks problem. However, once\nwe've crossed the Rubicon of \"it's not really only one transaction\",\nyou could imagine that the switch is \"--fewer-transactions\", and the\nidea is for pg_restore to commit after every (say) 100000 operations.\nThat would both bound its lock requirements and greatly cut its XID\nconsumption.\n\nThe work you described sounded like it could fit into that paradigm,\nwith the additional ability to run some parallel restore tasks\nthat are each consuming a bounded number of locks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 21 Mar 2021 15:56:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
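[Editor's illustration] The "--fewer-transactions" idea Tom sketches, committing after every N operations, can be modeled schematically. A hedged Python sketch (execute and commit stand in for the real client calls; the actual pg_restore logic is C):

```python
def restore_batched(operations, execute, commit, batch_size=100_000):
    """Apply restore operations, committing after every batch_size of them.
    Each commit releases the locks taken so far and consumes one XID,
    so both lock-table usage and XID consumption stay bounded."""
    pending = 0
    commits = 0
    for op in operations:
        execute(op)
        pending += 1
        if pending == batch_size:
            commit()
            commits += 1
            pending = 0
    if pending:  # commit the final partial batch
        commit()
        commits += 1
    return commits

# 7 operations with batch_size=3 -> commits after ops 3, 6, and 7.
done = []
n = restore_batched(range(7), done.append, lambda: None, batch_size=3)
assert n == 3 and len(done) == 7
```

The trade-off, as the thread notes, is that the restore is no longer a single atomic transaction.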
{
"msg_contents": "On 3/21/21 3:56 PM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> So let's focus on the actual problem of running out of XIDs and memory \n>> while doing the upgrade involving millions of small large objects.\n> \n> Right. So as far as --single-transaction vs. --create goes, that's\n> mostly a definitional problem. As long as the contents of a DB are\n> restored in one transaction, it's not gonna matter if we eat one or\n> two more XIDs while creating the DB itself. So we could either\n> relax pg_restore's complaint, or invent a different switch that's\n> named to acknowledge that it's not really only one transaction.\n> \n> That still leaves us with the lots-o-locks problem. However, once\n> we've crossed the Rubicon of \"it's not really only one transaction\",\n> you could imagine that the switch is \"--fewer-transactions\", and the\n> idea is for pg_restore to commit after every (say) 100000 operations.\n> That would both bound its lock requirements and greatly cut its XID\n> consumption.\n\nIt leaves us with three things.\n\n1) tremendous amounts of locks\n2) tremendous amounts of memory needed\n3) taking forever because it is single threaded.\n\nI created a pathological case here on a VM with 24GB of RAM, 80GB of \nSWAP sitting on NVME. The database has 20 million large objects, each of \nwhich has 2 GRANTS, 1 COMMENT and 1 SECURITY LABEL (dummy). Each LO only \ncontains a string \"large object <oid>\", so the whole database in 9.5 is \nabout 15GB in size.\n\nA stock pg_upgrade to version 14devel using --link takes about 15 hours. \nThis is partly because the pg_dump and pg_restore both grow to something \nlike 50GB+ to hold the TOC. Which sounds out of touch considering that \nthe entire system catalog on disk is less than 15GB. But aside from the \nridiculous amount of swapping, the whole thing also suffers from \nconsuming about 80 million transactions and apparently having just as \nmany network round trips with a single client.\n\n> \n> The work you described sounded like it could fit into that paradigm,\n> with the additional ability to run some parallel restore tasks\n> that are each consuming a bounded number of locks.\n\nI have attached a POC patch that implements two new options for pg_upgrade.\n\n --restore-jobs=NUM --jobs parameter passed to pg_restore\n --restore-blob-batch-size=NUM number of blobs restored in one xact\n\nIt does a bit more than just that. It rearranges the way large objects \nare dumped so that most of the commands are all in one TOC entry and the \nentry is emitted into SECTION_DATA when in binary upgrade mode (which \nguarantees that there isn't any actual BLOB data in the dump). This \ngreatly reduces the number of network round trips and when using 8 \nparallel restore jobs, almost saturates the 4-core VM. Reducing the \nnumber of TOC entries also reduces the total virtual memory need of \npg_restore to 15G, so there is a lot less swapping going on.\n\nIt cuts down the pg_upgrade time from 15 hours to 1.5 hours. In that run \nI used --restore-jobs=8 and --restore-blob-batch-size=10000 (with a \nmax_locks_per_transaction=12000).\n\nAs said, this isn't a \"one size fits all\" solution. The pg_upgrade \nparameters for --jobs and --restore-jobs will really depend on the \nsituation. Hundreds of small databases want --jobs, but one database \nwith millions of large objects wants --restore-jobs.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services",
"msg_date": "Mon, 22 Mar 2021 14:07:34 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_upgrade to preserve datdba"
},
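[Editor's illustration] The numbers above (10,000-blob batches run with max_locks_per_transaction=12000) suggest a simple rule of thumb: every blob created inside the still-open batch transaction pins one lock-table entry, so the setting must exceed the batch size with some headroom. A sketch; the 20% margin is an assumption for illustration, not something the patch computes:

```python
def suggested_max_locks_per_transaction(batch_size, headroom=0.2):
    """Rough lower bound for max_locks_per_transaction when restoring
    large objects in batches of batch_size per transaction."""
    # One lock per blob held until commit, plus headroom for catalog locks.
    return int(batch_size * (1 + headroom))

# Matches the configuration reported in the thread:
assert suggested_max_locks_per_transaction(10_000) == 12_000
```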
{
"msg_contents": ">\n> Hi,\n>\nw.r.t. pg_upgrade_improvements.v2.diff.\n\n+ blobBatchCount = 0;\n+ blobInXact = false;\n\nThe count and bool flag are always reset in tandem. It seems\nvariable blobInXact is not needed.\n\nCheers",
"msg_date": "Mon, 22 Mar 2021 14:36:38 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/22/21 5:36 PM, Zhihong Yu wrote:\n> Hi,\n> \n> w.r.t. pg_upgrade_improvements.v2.diff.\n> \n> + blobBatchCount = 0;\n> + blobInXact = false;\n> \n> The count and bool flag are always reset in tandem. It seems \n> variable blobInXact is not needed.\n\nYou are right. I will fix that.\n\n\nThanks, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Mon, 22 Mar 2021 19:18:33 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/22/21 7:18 PM, Jan Wieck wrote:\n> On 3/22/21 5:36 PM, Zhihong Yu wrote:\n>> Hi,\n>> \n>> w.r.t. pg_upgrade_improvements.v2.diff.\n>> \n>> + blobBatchCount = 0;\n>> + blobInXact = false;\n>> \n>> The count and bool flag are always reset in tandem. It seems \n>> variable blobInXact is not needed.\n> \n> You are right. I will fix that.\n\nNew patch v3 attached.\n\n\nThanks, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services",
"msg_date": "Tue, 23 Mar 2021 08:51:32 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 08:51:32AM -0400, Jan Wieck wrote:\n> On 3/22/21 7:18 PM, Jan Wieck wrote:\n> > On 3/22/21 5:36 PM, Zhihong Yu wrote:\n> > > Hi,\n> > > \n> > > w.r.t. pg_upgrade_improvements.v2.diff.\n> > > \n> > > +       blobBatchCount = 0;\n> > > +       blobInXact = false;\n> > > \n> > > The count and bool flag are always reset in tandem. It seems\n> > > variable blobInXact is not needed.\n> > \n> > You are right. I will fix that.\n> \n> New patch v3 attached.\n\nWould it be better to allow pg_upgrade to pass arbitrary arguments to\npg_restore, instead of just these specific ones?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 10:56:28 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/23/21 10:56 AM, Bruce Momjian wrote:\n> On Tue, Mar 23, 2021 at 08:51:32AM -0400, Jan Wieck wrote:\n>> On 3/22/21 7:18 PM, Jan Wieck wrote:\n>> > On 3/22/21 5:36 PM, Zhihong Yu wrote:\n>> > > Hi,\n>> > > \n>> > > w.r.t. pg_upgrade_improvements.v2.diff.\n>> > > \n>> > > + blobBatchCount = 0;\n>> > > + blobInXact = false;\n>> > > \n>> > > The count and bool flag are always reset in tandem. It seems\n>> > > variable blobInXact is not needed.\n>> > \n>> > You are right. I will fix that.\n>> \n>> New patch v3 attached.\n> \n> Would it be better to allow pg_upgrade to pass arbitrary arguments to\n> pg_restore, instead of just these specific ones?\n> \n\nThat would mean arbitrary parameters to pg_dump as well as pg_restore. \nBut yes, that would probably be better in the long run.\n\nAny suggestion as to what that would actually look like? Unfortunately \npg_restore has -[dDoOr] already used, so it doesn't look like there will \nbe any naturally intelligible short options for that.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 23 Mar 2021 13:25:15 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 01:25:15PM -0400, Jan Wieck wrote:\n> On 3/23/21 10:56 AM, Bruce Momjian wrote:\n> > Would it be better to allow pg_upgrade to pass arbitrary arguments to\n> > pg_restore, instead of just these specific ones?\n> > \n> \n> That would mean arbitrary parameters to pg_dump as well as pg_restore. But\n> yes, that would probably be better in the long run.\n> \n> Any suggestion as to how that would actually look like? Unfortunately\n> pg_restore has -[dDoOr] already used, so it doesn't look like there will be\n> any naturally intelligible short options for that.\n\nWe have the postmaster which can pass arbitrary arguments to postgres\nprocesses using -o.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:06:46 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/23/21 2:06 PM, Bruce Momjian wrote:\n> We have the postmaster which can pass arbitrary arguments to postgres\n> processes using -o.\n\nRight, and -o is already taken in pg_upgrade for sending options to the \nold postmaster.\n\nWhat we are looking for are options for sending options to pg_dump and \npg_restore, which are not postmasters or children of postmaster, but \nrather clients. There is no option to send options to clients of \npostmasters.\n\nSo the question remains, how do we name this?\n\n --pg-dump-options \"<string>\"\n --pg-restore-options \"<string>\"\n\nwhere \"<string>\" could be something like \"--whatever[=NUM] [...]\" would \nbe something unambiguous.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:23:03 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 02:23:03PM -0400, Jan Wieck wrote:\n> On 3/23/21 2:06 PM, Bruce Momjian wrote:\n> > We have the postmaster which can pass arbitrary arguments to postgres\n> > processes using -o.\n> \n> Right, and -o is already taken in pg_upgrade for sending options to the old\n> postmaster.\n> \n> What we are looking for are options for sending options to pg_dump and\n> pg_restore, which are not postmasters or children of postmaster, but rather\n> clients. There is no option to send options to clients of postmasters.\n> \n> So the question remains, how do we name this?\n> \n> --pg-dump-options \"<string>\"\n> --pg-restore-options \"<string>\"\n> \n> where \"<string>\" could be something like \"--whatever[=NUM] [...]\" would be\n> something unambiguous.\n\nSure. I don't think the letter you use is a problem.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:25:01 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> So the question remains, how do we name this?\n\n> --pg-dump-options \"<string>\"\n> --pg-restore-options \"<string>\"\n\nIf you're passing multiple options, that is\n\n\t--pg-dump-options \"--foo=x --bar=y\"\n\nit seems just horribly fragile. Lose the double quotes and suddenly\n--bar is a separate option to pg_upgrade itself, not part of the argument\nfor the previous option. That's pretty easy to do when passing things\nthrough shell scripts, too. So it'd likely be safer to write\n\n\t--pg-dump-option=--foo=x --pg-dump-option=--bar=y\n\nwhich requires pg_upgrade to allow aggregating multiple options,\nbut you'd probably want it to act that way anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
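[Editor's illustration] The repeatable-option style Tom recommends maps directly onto standard option aggregation. A sketch in Python (pg_upgrade itself is C and uses getopt_long; this only models the aggregation behavior, and the option name is the one proposed in the thread):

```python
import argparse

parser = argparse.ArgumentParser(prog="pg_upgrade")
parser.add_argument("--pg-dump-option", dest="dump_options",
                    action="append", default=[],
                    help="pass one option through to pg_dump; repeatable")

# The attached '=' form keeps values that themselves start with '--'
# from being mistaken for options of pg_upgrade itself.
args = parser.parse_args(["--pg-dump-option=--foo=x",
                          "--pg-dump-option=--bar=y"])
assert args.dump_options == ["--foo=x", "--bar=y"]
```

Each repetition carries exactly one pass-through option, so no shell quoting survives into the value and the aggregation is unambiguous.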
{
"msg_contents": "On 3/23/21 2:35 PM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> So the question remains, how do we name this?\n> \n>> --pg-dump-options \"<string>\"\n>> --pg-restore-options \"<string>\"\n> \n> If you're passing multiple options, that is\n> \n> \t--pg-dump-options \"--foo=x --bar=y\"\n> \n> it seems just horribly fragile. Lose the double quotes and suddenly\n> --bar is a separate option to pg_upgrade itself, not part of the argument\n> for the previous option. That's pretty easy to do when passing things\n> through shell scripts, too. So it'd likely be safer to write\n> \n> \t--pg-dump-option=--foo=x --pg-dump-option=--bar=y\n> \n> which requires pg_upgrade to allow aggregating multiple options,\n> but you'd probably want it to act that way anyway.\n\n... which would be all really easy if pg_upgrade wouldn't be assembling \na shell script string to pass into parallel_exec_prog() by itself.\n\nBut I will see what I can do ...\n\n\nRegards, Jan\n\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:54:29 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> On 3/23/21 2:35 PM, Tom Lane wrote:\n>> If you're passing multiple options, that is\n>> --pg-dump-options \"--foo=x --bar=y\"\n>> it seems just horribly fragile. Lose the double quotes and suddenly\n>> --bar is a separate option to pg_upgrade itself, not part of the argument\n>> for the previous option. That's pretty easy to do when passing things\n>> through shell scripts, too.\n\n> ... which would be all really easy if pg_upgrade wouldn't be assembling \n> a shell script string to pass into parallel_exec_prog() by itself.\n\nNo, what I was worried about is shell script(s) that invoke pg_upgrade\nand have to pass down some of these options through multiple levels of\noption parsing.\n\nBTW, it doesn't seem like the \"pg-\" prefix has any value-add here,\nso maybe \"--dump-option\" and \"--restore-option\" would be suitable\nspellings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Mar 2021 14:59:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/23/21 2:59 PM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> On 3/23/21 2:35 PM, Tom Lane wrote:\n>>> If you're passing multiple options, that is\n>>> --pg-dump-options \"--foo=x --bar=y\"\n>>> it seems just horribly fragile. Lose the double quotes and suddenly\n>>> --bar is a separate option to pg_upgrade itself, not part of the argument\n>>> for the previous option. That's pretty easy to do when passing things\n>>> through shell scripts, too.\n> \n>> ... which would be all really easy if pg_upgrade wouldn't be assembling \n>> a shell script string to pass into parallel_exec_prog() by itself.\n> \n> No, what I was worried about is shell script(s) that invoke pg_upgrade\n> and have to pass down some of these options through multiple levels of\n> option parsing.\n\nThe problem here is that pg_upgrade itself is invoking a shell again. It \nis not assembling an array of arguments to pass into exec*(). I'd be a \nhappy camper if it did the latter. But as things are we'd have to add \nfull shell escaping for arbitrary strings.\n\n> \n> BTW, it doesn't seem like the \"pg-\" prefix has any value-add here,\n> so maybe \"--dump-option\" and \"--restore-option\" would be suitable\n> spellings.\n\nAgreed.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 23 Mar 2021 15:22:04 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> The problem here is that pg_upgrade itself is invoking a shell again. It \n> is not assembling an array of arguments to pass into exec*(). I'd be a \n> happy camper if it did the latter. But as things are we'd have to add \n> full shell escaping for arbitrary strings.\n\nSurely we need that (and have it already) anyway?\n\nI think we've stayed away from exec* because we'd have to write an\nemulation for Windows. Maybe somebody will get fed up and produce\nsuch code, but it's not likely to be the least-effort route to the\ngoal.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Mar 2021 15:35:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/23/21 3:35 PM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> The problem here is that pg_upgrade itself is invoking a shell again. It \n>> is not assembling an array of arguments to pass into exec*(). I'd be a \n>> happy camper if it did the latter. But as things are we'd have to add \n>> full shell escapeing for arbitrary strings.\n> \n> Surely we need that (and have it already) anyway?\n\nThere are functions to shell escape a single string, like\n\n appendShellString()\n\nbut that is hardly enough when a single optarg for --restore-option \ncould look like any of\n\n --jobs 8\n --jobs=8\n --jobs='8'\n --jobs '8'\n --jobs \"8\"\n --jobs=\"8\"\n --dont-bother-about-jobs\n\nWhen placed into a shell string, those things have very different \neffects on your args[].\n\nI also want to say that we are overengineering this whole thing. Yes, \nthere is the problem of shell quoting possibly going wrong as it passes \nfrom one shell to another. But for now this is all about passing a few \nnumbers down from pg_upgrade to pg_restore (and eventually pg_dump).\n\nHave we even reached a consensus yet on that doing it the way, my patch \nis proposing, is the right way to go? Like that emitting BLOB TOC \nentries into SECTION_DATA when in binary upgrade mode is a good thing? \nOr that bunching all the SQL statements for creating the blob, changing \nthe ACL and COMMENT and SECLABEL all in one multi-statement-query is.\n\nMaybe we should focus on those details before getting into all the \nparameter naming stuff.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Tue, 23 Mar 2021 15:59:48 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Jan Wieck <jan@wi3ck.info> writes:\n> Have we even reached a consensus yet on that doing it the way, my patch \n> is proposing, is the right way to go? Like that emitting BLOB TOC \n> entries into SECTION_DATA when in binary upgrade mode is a good thing? \n> Or that bunching all the SQL statements for creating the blob, changing \n> the ACL and COMMENT and SECLABEL all in one multi-statement-query is.\n\nNow you're asking for actual review effort, which is a little hard\nto come by towards the tail end of the last CF of a cycle. I'm\ninterested in this topic, but I can't justify spending much time\non it right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Mar 2021 16:55:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/23/21 4:55 PM, Tom Lane wrote:\n> Jan Wieck <jan@wi3ck.info> writes:\n>> Have we even reached a consensus yet on that doing it the way, my patch \n>> is proposing, is the right way to go? Like that emitting BLOB TOC \n>> entries into SECTION_DATA when in binary upgrade mode is a good thing? \n>> Or that bunching all the SQL statements for creating the blob, changing \n>> the ACL and COMMENT and SECLABEL all in one multi-statement-query is.\n> \n> Now you're asking for actual review effort, which is a little hard\n> to come by towards the tail end of the last CF of a cycle. I'm\n> interested in this topic, but I can't justify spending much time\n> on it right now.\n\nUnderstood.\n\nIn any case I changed the options so that they behave the same way, the \nexisting -o and -O (for old/new postmaster options) work. I don't think \nit would be wise to have option forwarding work differently between \noptions for postmaster and options for pg_dump/pg_restore.\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services\n\n\n",
"msg_date": "Wed, 24 Mar 2021 12:04:26 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 3/24/21 12:04 PM, Jan Wieck wrote:\n> In any case I changed the options so that they behave the same way, the\n> existing -o and -O (for old/new postmaster options) work. I don't think\n> it would be wise to have option forwarding work differently between\n> options for postmaster and options for pg_dump/pg_restore.\n\nAttaching the actual diff might help.\n\n-- \nJan Wieck\nPrinciple Database Engineer\nAmazon Web Services",
"msg_date": "Wed, 24 Mar 2021 12:05:27 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 12:05:27PM -0400, Jan Wieck wrote:\n> On 3/24/21 12:04 PM, Jan Wieck wrote:\n> > In any case I changed the options so that they behave the same way, the\n> > existing -o and -O (for old/new postmaster options) work. I don't think\n> > it would be wise to have option forwarding work differently between\n> > options for postmaster and options for pg_dump/pg_restore.\n> \n> Attaching the actual diff might help.\n\nI think the original issue with XIDs was fixed by 74cf7d46a.\n\nAre you still planning to progress the patches addressing huge memory use of\npg_restore?\n\nNote this other, old thread on -general, which I believe has variations on the\nsame patches.\nhttps://www.postgresql.org/message-id/flat/7bf19bf2-e6b7-01a7-1d96-f0607c728c49@wi3ck.info\n\nThere was discussion about using pg_restore --single. Note that that was used\nat some point in the past: see 12ee6ec71 and 861ad67bd.\n\nThe immediate problem is that --single conflicts with --create.\nI cleaned up a patch I'd written to work around that. It preserves DB settings\nand passes pg_upgrade's test. 
It's probably not portable as written, but if need be\ncould pass an empty file instead of /dev/null...\n\ndiff --git a/src/bin/pg_upgrade/pg_upgrade.c b/src/bin/pg_upgrade/pg_upgrade.c\nindex 3628bd74a7..9c504aff79 100644\n--- a/src/bin/pg_upgrade/pg_upgrade.c\n+++ b/src/bin/pg_upgrade/pg_upgrade.c\n@@ -364,6 +364,16 @@ create_new_objects(void)\n \t\tDbInfo\t *old_db = &old_cluster.dbarr.dbs[dbnum];\n \t\tconst char *create_opts;\n \n+\t\tPQExpBufferData connstr,\n+\t\t\t\tescaped_connstr;\n+\n+\t\tinitPQExpBuffer(&connstr);\n+\t\tinitPQExpBuffer(&escaped_connstr);\n+\t\tappendPQExpBufferStr(&connstr, \"dbname=\");\n+\t\tappendConnStrVal(&connstr, old_db->db_name);\n+\t\tappendShellString(&escaped_connstr, connstr.data);\n+\t\ttermPQExpBuffer(&connstr);\n+\n \t\t/* Skip template1 in this pass */\n \t\tif (strcmp(old_db->db_name, \"template1\") == 0)\n \t\t\tcontinue;\n@@ -378,18 +388,31 @@ create_new_objects(void)\n \t\t * propagate its database-level properties.\n \t\t */\n \t\tif (strcmp(old_db->db_name, \"postgres\") == 0)\n-\t\t\tcreate_opts = \"--clean --create\";\n+\t\t\tcreate_opts = \"--clean\";\n \t\telse\n-\t\t\tcreate_opts = \"--create\";\n+\t\t\tcreate_opts = \"\";\n \n+\t\t/* Create the DB but exclude all objects */\n \t\tparallel_exec_prog(log_file_name,\n \t\t\t\t\t\t NULL,\n \t\t\t\t\t\t \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verbose \"\n+\t\t\t\t\t\t\t\"--create -L /dev/null \"\n \t\t\t\t\t\t \"--dbname template1 \\\"%s\\\"\",\n \t\t\t\t\t\t new_cluster.bindir,\n \t\t\t\t\t\t cluster_conn_opts(&new_cluster),\n \t\t\t\t\t\t create_opts,\n \t\t\t\t\t\t sql_file_name);\n+\n+\t\tparallel_exec_prog(log_file_name,\n+\t\t\t\t\t\t NULL,\n+\t\t\t\t\t\t \"\\\"%s/pg_restore\\\" %s %s --exit-on-error --verbose --single \"\n+\t\t\t\t\t\t \"--dbname=%s \\\"%s\\\"\",\n+\t\t\t\t\t\t new_cluster.bindir,\n+\t\t\t\t\t\t cluster_conn_opts(&new_cluster),\n+\t\t\t\t\t\t create_opts,\n+\t\t\t\t\t\t\tescaped_connstr.data,\n+\t\t\t\t\t\t sql_file_name);\n+\n 
\t}\n \n \t/* reap all children */\n\n\n\n",
"msg_date": "Sat, 11 Dec 2021 16:43:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 12:05:27PM -0400, Jan Wieck wrote:\n> On 3/24/21 12:04 PM, Jan Wieck wrote:\n>> In any case I changed the options so that they behave the same way, the\n>> existing -o and -O (for old/new postmaster options) work. I don't think\n>> it would be wise to have option forwarding work differently between\n>> options for postmaster and options for pg_dump/pg_restore.\n> \n> Attaching the actual diff might help.\n\nI'd like to revive this thread, so I've created a commitfest entry [0] and\nattached a hastily rebased patch that compiles and passes the tests. I am\naiming to spend some more time on this in the near future.\n\n[0] https://commitfest.postgresql.org/39/3841/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 24 Aug 2022 17:32:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On 8/24/22 17:32, Nathan Bossart wrote:\n> I'd like to revive this thread, so I've created a commitfest entry [0] and\n> attached a hastily rebased patch that compiles and passes the tests. I am\n> aiming to spend some more time on this in the near future.\n\nJust to clarify, was Justin's statement upthread (that the XID problem\nis fixed) correct? And is this patch just trying to improve the\nremaining memory and lock usage problems?\n\nI took a quick look at the pg_upgrade diffs. I agree with Jan that the\nescaping problem is a pretty bad smell, but even putting that aside for\na bit, is it safe to expose arbitrary options to pg_dump/restore during\nupgrade? It's super flexible, but I can imagine that some of those flags\nmight really mess up the new cluster...\n\nAnd yeah, if you choose to do that then you get to keep both pieces, I\nguess, but I like that pg_upgrade tries to be (IMO) fairly bulletproof.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 7 Sep 2022 14:42:05 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 02:42:05PM -0700, Jacob Champion wrote:\n> Just to clarify, was Justin's statement upthread (that the XID problem\n> is fixed) correct? And is this patch just trying to improve the\n> remaining memory and lock usage problems?\n\nI think \"fixed\" might not be totally accurate, but that is the gist.\n\n> I took a quick look at the pg_upgrade diffs. I agree with Jan that the\n> escaping problem is a pretty bad smell, but even putting that aside for\n> a bit, is it safe to expose arbitrary options to pg_dump/restore during\n> upgrade? It's super flexible, but I can imagine that some of those flags\n> might really mess up the new cluster...\n> \n> And yeah, if you choose to do that then you get to keep both pieces, I\n> guess, but I like that pg_upgrade tries to be (IMO) fairly bulletproof.\n\nIIUC the main benefit of this approach is that it isn't dependent on\nbinary-upgrade mode, which seems to be a goal based on the discussion\nupthread [0]. I think it'd be easily possible to fix only pg_upgrade by\nsimply dumping and restoring pg_largeobject_metadata, as Andres suggested\nin 2018 [1]. In fact, it seems like it ought to be possible to just copy\npg_largeobject_metadata's files as was done before 12a53c7. AFAICT this\nwould only work for clusters upgrading from v12 and newer, and it'd break\nif any of the underlying data types change their storage format. This\nseems unlikely for OIDs, but there is ongoing discussion about changing\naclitem.\n\nI still think this is a problem worth fixing, but it's not yet clear how to\nproceed.\n\n[0] https://postgr.es/m/227228.1616259220%40sss.pgh.pa.us\n[1] https://postgr.es/m/20181122001415.ef5bncxqin2y3esb%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 16:18:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> IIUC the main benefit of this approach is that it isn't dependent on\n> binary-upgrade mode, which seems to be a goal based on the discussion\n> upthread [0].\n\nTo clarify, I agree that pg_dump should contain the core fix. What I'm\nquestioning is the addition of --dump-options to make use of that fix\nfrom pg_upgrade, since it also lets the user do \"exciting\" new things\nlike --exclude-schema and --include-foreign-data and so on. I don't\nthink we should let them do that without a good reason.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Thu, 8 Sep 2022 16:29:10 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 04:29:10PM -0700, Jacob Champion wrote:\n> On Thu, Sep 8, 2022 at 4:18 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> IIUC the main benefit of this approach is that it isn't dependent on\n>> binary-upgrade mode, which seems to be a goal based on the discussion\n>> upthread [0].\n> \n> To clarify, I agree that pg_dump should contain the core fix. What I'm\n> questioning is the addition of --dump-options to make use of that fix\n> from pg_upgrade, since it also lets the user do \"exciting\" new things\n> like --exclude-schema and --include-foreign-data and so on. I don't\n> think we should let them do that without a good reason.\n\nAh, yes, I think that is a fair point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 16:34:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 04:34:07PM -0700, Nathan Bossart wrote:\n> On Thu, Sep 08, 2022 at 04:29:10PM -0700, Jacob Champion wrote:\n>> To clarify, I agree that pg_dump should contain the core fix. What I'm\n>> questioning is the addition of --dump-options to make use of that fix\n>> from pg_upgrade, since it also lets the user do \"exciting\" new things\n>> like --exclude-schema and --include-foreign-data and so on. I don't\n>> think we should let them do that without a good reason.\n> \n> Ah, yes, I think that is a fair point.\n\nIt has been more than four weeks since the last activity of this\nthread and there has been what looks like some feedback to me, so\nmarked as RwF for the time being.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:50:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "\r\n\r\n\r\n\r\nHi Everyone , I want to continue this thread , I have rebased the patch to latest\r\nmaster and fixed an issue when pg_restore prints to file.\r\n\r\n`\r\n╰─$ pg_restore dump_small.custom --restore-blob-batch-size=2 --file=a\r\n--\r\n-- End BLOB restore batch\r\n--\r\nCOMMIT;\r\n`\r\n\r\n> On 09/11/2023, 17:05, \"Jacob Champion\" <jchampion@timescale.com <mailto:jchampion@timescale.com>> wrote:\r\n> To clarify, I agree that pg_dump should contain the core fix. What I'm\r\n> questioning is the addition of --dump-options to make use of that fix\r\n> from pg_upgrade, since it also lets the user do \"exciting\" new things\r\n> like --exclude-schema and --include-foreign-data and so on. I don't\r\n> think we should let them do that without a good reason.\r\n\r\nEarlier idea was to not expose these options to users and use [1]\r\n --restore-jobs=NUM --jobs parameter passed to pg_restore\r\n --restore-blob-batch-size=NUM number of blobs restored in one xact\r\nBut this was later expanded to use --dump-options and --restore-options [2].\r\nWith --restore-options user can use --exclude-schema , \r\nSo maybe we can go back to [1]\r\n\r\n[1] https://www.postgresql.org/message-id/a1e200e6-adde-2561-422b-a166ec084e3b%40wi3ck.info\r\n[2] https://www.postgresql.org/message-id/8d8d3961-8e8b-3dbe-f911-6f418c5fb1d3%40wi3ck.info\r\n\r\nRegards\r\nSachin\r\nAmazon Web Services: https://aws.amazon.com\r\n\r\n\r\n",
"msg_date": "Thu, 9 Nov 2023 17:35:01 +0000",
"msg_from": "\"Kumar, Sachin\" <ssetiya@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "[ Jacob's email address updated ]\n\n\"Kumar, Sachin\" <ssetiya@amazon.com> writes:\n> Hi Everyone , I want to continue this thread , I have rebased the patch to latest\n> master and fixed an issue when pg_restore prints to file.\n\nUm ... you didn't attach the patch?\n\nFWIW, I agree with Jacob's concern about it being a bad idea to let\nusers of pg_upgrade pass down arbitrary options to pg_dump/pg_restore.\nI think we'd regret going there, because it'd hugely expand the set\nof cases pg_upgrade has to deal with.\n\nAlso, pg_upgrade is often invoked indirectly via scripts, so I do\nnot especially buy the idea that we're going to get useful control\ninput from some human somewhere. I think we'd be better off to\nassume that pg_upgrade is on its own to manage the process, so that\nif we need to switch strategies based on object count or whatever,\nwe should put in a heuristic to choose the strategy automatically.\nIt might not be perfect, but that will give better results for the\npretty large fraction of users who are not going to mess with\nweird little switches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Nov 2023 13:41:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Hi, \n\nOn November 9, 2023 10:41:01 AM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Also, pg_upgrade is often invoked indirectly via scripts, so I do\n>not especially buy the idea that we're going to get useful control\n>input from some human somewhere. I think we'd be better off to\n>assume that pg_upgrade is on its own to manage the process, so that\n>if we need to switch strategies based on object count or whatever,\n>we should put in a heuristic to choose the strategy automatically.\n>It might not be perfect, but that will give better results for the\n>pretty large fraction of users who are not going to mess with\n>weird little switches.\n\n+1 - even leaving everything else aside, just about no user would know about the option. There are cases where we can't do better than giving the user control, but we are certainly adding options at a rate that doesn't seem sustainable. And here it doesn't seem that hard to do better. \n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 09 Nov 2023 15:12:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "> On 09/11/2023, 18:41, \"Tom Lane\" <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\r\n> Um ... you didn't attach the patch?\r\n\r\nSorry , patch attached\r\n\r\nRegards\r\nSachin",
"msg_date": "Mon, 13 Nov 2023 15:06:31 +0000",
"msg_from": "\"Kumar, Sachin\" <ssetiya@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "> \"Tom Lane\" <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\r\n\r\n> FWIW, I agree with Jacob's concern about it being a bad idea to let\r\n> users of pg_upgrade pass down arbitrary options to pg_dump/pg_restore.\r\n> I think we'd regret going there, because it'd hugely expand the set\r\n> of cases pg_upgrade has to deal with.\r\n\r\n> Also, pg_upgrade is often invoked indirectly via scripts, so I do\r\n> not especially buy the idea that we're going to get useful control\r\n> input from some human somewhere. I think we'd be better off to\r\n> assume that pg_upgrade is on its own to manage the process, so that\r\n> if we need to switch strategies based on object count or whatever,\r\n> we should put in a heuristic to choose the strategy automatically.\r\n> It might not be perfect, but that will give better results for the\r\n> pretty large fraction of users who are not going to mess with\r\n> weird little switches.\r\n\r\n\r\nI have updated the patch to use heuristic, During pg_upgrade we count\r\nLarge objects per database. During pg_restore execution if db large_objects\r\ncount is greater than LARGE_OBJECTS_THRESOLD (1k) we will use \r\n--restore-blob-batch-size.\r\nI also modified pg_upgrade --jobs behavior if we have large_objects (> LARGE_OBJECTS_THRESOLD)\r\n\r\n+ /* Restore all the dbs where LARGE_OBJECTS_THRESOLD is not breached */\r\n+ restore_dbs(stats, true);\r\n+ /* reap all children */\r\n+ while (reap_child(true) == true)\r\n+ ;\r\n+ /* Restore rest of the dbs one by one with pg_restore --jobs = user_opts.jobs */\r\n+ restore_dbs(stats, false);\r\n /* reap all children */\r\n while (reap_child(true) == true)\r\n ;\r\n\r\nRegards\r\nSachin",
"msg_date": "Mon, 4 Dec 2023 16:07:59 +0000",
"msg_from": "\"Kumar, Sachin\" <ssetiya@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "\r\n> I have updated the patch to use heuristic, During pg_upgrade we count\r\n> Large objects per database. During pg_restore execution if db large_objects\r\n> count is greater than LARGE_OBJECTS_THRESOLD (1k) we will use \r\n> --restore-blob-batch-size.\r\n\r\n\r\nI think both SECTION_DATA and SECTION_POST_DATA can be parallelized by pg_restore, So instead of storing \r\nlarge objects in heuristics, we can store SECTION_DATA + SECTION_POST_DATA.\r\n\r\nRegards\r\nSachin\r\n\r\n",
"msg_date": "Thu, 7 Dec 2023 14:05:13 +0000",
"msg_from": "\"Kumar, Sachin\" <ssetiya@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "I spent some time looking at the v7 patch. I can't help feeling\nthat this is going off in the wrong direction, primarily for\nthese reasons:\n\n* It focuses only on cutting the number of transactions needed\nto restore a large number of blobs (large objects). Certainly\nthat's a pain point, but it's not the only one of this sort.\nIf you have a lot of tables, restore will consume just as many\ntransactions as it would for a similar number of blobs --- probably\nmore, in fact, since we usually need more commands per table than\nper blob.\n\n* I'm not too thrilled with the (undocumented) rearrangements in\npg_dump. I really don't like the idea of emitting a fundamentally\ndifferent TOC layout in binary-upgrade mode; that seems unmaintainably\nbug-prone. Plus, the XID-consumption problem is not really confined\nto pg_upgrade.\n\nWhat I think we actually ought to do is one of the alternatives\ndiscussed upthread: teach pg_restore to be able to commit\nevery so often, without trying to provide the all-or-nothing\nguarantees of --single-transaction mode. This cuts its XID\nconsumption by whatever multiple \"every so often\" is, while\nallowing us to limit the number of locks taken during any one\ntransaction. It also seems a great deal safer than the idea\nI floated of not taking locks at all during a binary upgrade;\nplus, it has some usefulness with regular pg_restore that's not\nunder control of pg_upgrade.\n\nSo I had a go at coding that, and attached is the result.\nIt invents a --transaction-size option, and when that's active\nit will COMMIT after every N TOC items. (This seems simpler to\nimplement and less bug-prone than every-N-SQL-commands.)\n\nI had initially supposed that in a parallel restore we could\nhave child workers also commit after every N TOC items, but was\nsoon disabused of that idea. 
After a worker processes a TOC\nitem, any dependent items (such as index builds) might get\ndispatched to some other worker, which had better be able to\nsee the results of the first worker's step. So at least in\nthis implementation, we disable the multi-command-per-COMMIT\nbehavior during the parallel part of the restore. Maybe that\ncould be improved in future, but it seems like it'd add a\nlot more complexity, and it wouldn't make life any better for\npg_upgrade (which doesn't use parallel pg_restore, and seems\nunlikely to want to in future).\n\nI've not spent a lot of effort on pg_upgrade changes here:\nI just hard-wired it to select --transaction-size=1000.\nGiven the default lock table size of 64*100, that gives us\nenough headroom for each TOC to take half a dozen locks.\nWe could go higher than that by making pg_upgrade force the\ndestination postmaster to create a larger-than-default lock\ntable, but I'm not sure if it's worth any trouble. We've\nalready bought three orders of magnitude improvement as it\nstands, which seems like enough ambition for today. (Also,\nhaving pg_upgrade override the user's settings in the\ndestination cluster might not be without downsides.)\n\nAnother thing I'm wondering about is why this is only a pg_restore\noption not also a pg_dump/pg_dumpall option. I did it like that\nbecause --single-transaction is pg_restore only, but that seems more\nlike an oversight or laziness than a well-considered decision.\nMaybe we should back-fill that omission; but it could be done later.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 10 Dec 2023 20:42:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "I have spent some more effort in this area and developed a patch\nseries that I think addresses all of the performance issues that\nwe've discussed in this thread, both for pg_upgrade and more\ngeneral use of pg_dump/pg_restore. Concretely, it absorbs\nthe pg_restore --transaction-size switch that I proposed before\nto cut the number of transactions needed during restore, and\nrearranges the representation of BLOB-related TOC entries to\nreduce the client-side memory requirements, and fixes some\nancient mistakes that prevent both selective restore of BLOBs\nand parallel restore of BLOBs.\n\nAs a demonstration, I made a database containing 100K empty blobs,\nand measured the time needed to dump/restore that using -Fd\nand -j 10. HEAD doesn't get any useful parallelism on blobs,\nbut with this patch series we do:\n\n\t\tdump\trestore\nHEAD:\t\t14sec\t15sec\nafter 0002:\t7sec\t10sec\nafter 0003:\t7sec\t3sec\n\nThere are a few loose ends:\n\n* I did not invent a switch to control the batching of blobs; it's\njust hard-wired at 1000 blobs per group here. Probably we need some\nuser knob for that, but I'm unsure if we want to expose a count or\njust a boolean for one vs more than one blob per batch. The point of\nforcing one blob per batch would be to allow exact control during\nselective restore, and I'm not sure if there's any value in random\nother settings. On the other hand, selective restore of blobs has\nbeen completely broken for the last dozen years and I can't recall any\nuser complaints about that; so maybe nobody cares and we could just\nleave this as an internal choice.\n\n* Likewise, there's no user-accessible knob to control what\ntransaction size pg_upgrade uses. Do we need one? 
In any case, it's\nlikely that the default needs a bit more thought than I've given it.\nI used 1000, but if pg_upgrade is launching parallel restore jobs we\nlikely need to divide that by the number of restore jobs.\n\n* As the patch stands, we still build a separate TOC entry for each\ncomment or seclabel or ACL attached to a blob. If you have a lot of\nblobs with non-default properties then the TOC bloat problem comes\nback again. We could do something about that, but it would take a bit\nof tedious refactoring, and the most obvious way to handle it probably\nre-introduces too-many-locks problems. Is this a scenario that's\nworth spending a lot of time on?\n\nMore details appear in the commit messages below. Patch 0004\nis nearly the same as the v8 patch I posted before, although\nit adds some logic to ensure that a large blob metadata batch\ndoesn't create too many locks.\n\nComments?\n\n\t\t\tregards, tom lane\n\nPS: I don't see any active CF entry for this thread, so\nI'm going to go make one.",
"msg_date": "Wed, 20 Dec 2023 18:47:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 06:47:44PM -0500, Tom Lane wrote:\n> I have spent some more effort in this area and developed a patch\n> series that I think addresses all of the performance issues that\n> we've discussed in this thread, both for pg_upgrade and more\n> general use of pg_dump/pg_restore. Concretely, it absorbs\n> the pg_restore --transaction-size switch that I proposed before\n> to cut the number of transactions needed during restore, and\n> rearranges the representation of BLOB-related TOC entries to\n> reduce the client-side memory requirements, and fixes some\n> ancient mistakes that prevent both selective restore of BLOBs\n> and parallel restore of BLOBs.\n> \n> As a demonstration, I made a database containing 100K empty blobs,\n> and measured the time needed to dump/restore that using -Fd\n> and -j 10. HEAD doesn't get any useful parallelism on blobs,\n> but with this patch series we do:\n> \n> \t\tdump\trestore\n> HEAD:\t\t14sec\t15sec\n> after 0002:\t7sec\t10sec\n> after 0003:\t7sec\t3sec\n\nWow, thanks for putting together these patches. I intend to help review,\nbut I'm not sure I'll find much time to do so before the new year.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Dec 2023 21:16:14 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Wow, thanks for putting together these patches. I intend to help review,\n\nThanks!\n\n> but I'm not sure I'll find much time to do so before the new year.\n\nThere's no urgency, surely. If we can get these in during the\nJanuary CF, I'll be happy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 20 Dec 2023 23:03:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": ">\n> On Thu, 21 Dec 2023 at 10:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\nI have spent some more effort in this area and developed a patch\n> series that I think addresses all of the performance issues that\n> we've discussed in this thread, both for pg_upgrade and more\n> general use of pg_dump/pg_restore.\n\n\n\nThanks for picking this up!\n\nApplying all 4 patches, I also see good performance improvement.\n\nWith more Large Objects, although pg_dump improved significantly,\npg_restore is now comfortably an order of magnitude faster.\n\npg_dump times (seconds):\n NumLOs dump-patch004 dump-HEAD improvement (%)\n 1 0.09 0.09 ~\n 10 0.10 0.12 ~\n 100 0.12 0.12 ~\n 1,000 0.41 0.44 ~\n 10,000 3 5 76%\n 100,000 35 47 36%\n 1,000,000 111 251 126%\n\n\npg_restore times (seconds):\n NumLOs restore-patch0004 restore-HEAD improvement (%)\n 1 0.02 0.02 ~\n 10 0.03 0.03 ~\n 100 0.13 0.12 ~\n 1,000 0.98 0.97 ~\n 10,000 2 9 ~5x\n 100,000 6 93 13x\n 1,000,000 53 973 17x\n\n\nTest details:\n- pg_dump -Fd -j32 / pg_restore -j32\n- 32vCPU / Ubuntu 20.04 / 260GB Memory / r6id.8xlarge\n- Client & Server on same machine\n- Empty LOs / Empty ACLs\n- HEAD = 7d7ef075d2b3f3bac4db323c2a47fb15a4a9a817\n- See attached graphs\n\nIMHO the knob (for configuring batch size) is a non-blocker. The\ndefault (1k) here is already way better than what we have today.\n\nLook forward to feedback on the tests, or I'll continue testing\nwhether ACLs / non-empty LOs etc. adversely affect these numbers.\n\n-\nRobins Tharakan\nAmazon Web Services",
"msg_date": "Wed, 27 Dec 2023 23:58:01 +1030",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Robins Tharakan <tharakan@gmail.com> writes:\n> Applying all 4 patches, I also see good performance improvement.\n> With more Large Objects, although pg_dump improved significantly,\n> pg_restore is now comfortably an order of magnitude faster.\n\nYeah. The key thing here is that pg_dump can only parallelize\nthe data transfer, while (with 0004) pg_restore can parallelize\nlarge object creation and owner-setting as well as data transfer.\nI don't see any simple way to improve that on the dump side,\nbut I'm not sure we need to. Zillions of empty objects is not\nreally the use case to worry about. I suspect that a more realistic\ncase with moderate amounts of data in the blobs would make pg_dump\nlook better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Dec 2023 10:18:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Thu, 28 Dec 2023 at 01:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robins Tharakan <tharakan@gmail.com> writes:\n> > Applying all 4 patches, I also see good performance improvement.\n> > With more Large Objects, although pg_dump improved significantly,\n> > pg_restore is now comfortably an order of magnitude faster.\n>\n> Yeah. The key thing here is that pg_dump can only parallelize\n> the data transfer, while (with 0004) pg_restore can parallelize\n> large object creation and owner-setting as well as data transfer.\n> I don't see any simple way to improve that on the dump side,\n> but I'm not sure we need to. Zillions of empty objects is not\n> really the use case to worry about. I suspect that a more realistic\n> case with moderate amounts of data in the blobs would make pg_dump\n> look better.\n>\n\n\nThanks for elaborating, and yes pg_dump times do reflect that\nexpectation.\n\nThe first test involved a fixed number (32k) of\nLarge Objects (LOs) with varying sizes - I chose that number\nintentionally since this was being tested on a 32vCPU instance\nand the patch employs 1k batches.\n\n\nWe again see that pg_restore is an order of magnitude faster.\n\n LO Size (bytes) restore-HEAD restore-patched improvement (Nx)\n 1 24.182 1.4 17x\n 10 24.741 1.5 17x\n 100 24.574 1.6 15x\n 1,000 25.314 1.7 15x\n 10,000 25.644 1.7 15x\n 100,000 50.046 4.3 12x\n 1,000,000 281.549 30.0 9x\n\n\npg_dump also sees improvements. Really small sized LOs\nsee a decent ~20% improvement which grows considerably as LOs\nget bigger (beyond ~10-100kb).\n\n\n LO Size (bytes) dump-HEAD dump-patched improvement (%)\n 1 12.9 10.7 18%\n 10 12.9 10.4 19%\n 100 12.8 10.3 20%\n 1,000 13.0 10.3 21%\n 10,000 14.2 10.3 27%\n 100,000 32.8 11.5 65%\n 1,000,000 211.8 23.6 89%\n\n\nTo test pg_restore scaling, 1 Million LOs (100kb each)\nwere created and pg_restore times tested for increasing\nconcurrency (on a 192vCPU instance). 
We see major speedup\nup to -j64 and the best time was at -j96, after which\nperformance decreases slowly - see attached image.\n\nConcurrency pg_restore-patched\n 384 75.87\n 352 75.63\n 320 72.11\n 288 70.05\n 256 70.98\n 224 66.98\n 192 63.04\n 160 61.37\n 128 58.82\n 96 58.55\n 64 60.46\n 32 77.29\n 16 115.51\n 8 203.48\n 4 366.33\n\n\n\nTest details:\n- Command used to generate SQL - create 1k LOs of 1kb each\n - echo \"SELECT lo_from_bytea(0, '\\x` printf 'ff%.0s' {1..1000}`') FROM\ngenerate_series(1,1000);\" > /tmp/tempdel\n- Verify the LO size: select pg_column_size(lo_get(oid));\n- Only GUC changed: max_connections=1000 (for the last test)\n\n-\nRobins Tharakan\nAmazon Web Services",
"msg_date": "Thu, 28 Dec 2023 22:08:46 +1030",
"msg_from": "Robins Tharakan <tharakan@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
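As an aside for anyone reproducing the numbers above: the shell one-liner in the test details just emits a single SQL statement that creates the large objects server-side. Below is a minimal sketch of the same generator; the helper name is illustrative and not part of the thread's patches — only the emitted SQL mirrors the command shown above.

```python
# Sketch: build the SQL used in the benchmark above, which creates
# n_los large objects of lo_size bytes each (every byte 0xff) via
# lo_from_bytea(0, ...) -- the 0 asks the server to assign fresh OIDs.

def make_lo_sql(n_los: int, lo_size: int) -> str:
    payload = "ff" * lo_size  # hex bytea literal: one "ff" pair per byte
    return (f"SELECT lo_from_bytea(0, '\\x{payload}') "
            f"FROM generate_series(1,{n_los});")

if __name__ == "__main__":
    # 1k LOs of 1kb each, matching the test details above; pipe the
    # output to psql to populate a test database.
    print(make_lo_sql(1000, 1000))
```

As in the test details, the resulting LO size can then be checked with `select pg_column_size(lo_get(oid))`.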
{
"msg_contents": "> On 11/12/2023, 01:43, \"Tom Lane\" <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\r\n\r\n> I had initially supposed that in a parallel restore we could\r\n> have child workers also commit after every N TOC items, but was\r\n> soon disabused of that idea. After a worker processes a TOC\r\n> item, any dependent items (such as index builds) might get\r\n> dispatched to some other worker, which had better be able to\r\n> see the results of the first worker's step. So at least in\r\n> this implementation, we disable the multi-command-per-COMMIT\r\n> behavior during the parallel part of the restore. Maybe that\r\n> could be improved in future, but it seems like it'd add a\r\n> lot more complexity, and it wouldn't make life any better for\r\n> pg_upgrade (which doesn't use parallel pg_restore, and seems\r\n> unlikely to want to in future).\r\n\r\nI was not able to find an email thread which details why we are not using\r\nparallel pg_restore for pg_upgrade. IMHO most customers will have a single large\r\ndatabase, and not using parallel restore will cause a slow pg_upgrade.\r\n\r\nI am attaching a patch which enables parallel pg_restore for DATA and POST-DATA part\r\nof the dump. It will push down the --jobs value to pg_restore and will restore databases sequentially.\r\n\r\nBenchmarks\r\n\r\n{5 million LOs 1 large DB}\r\nPatched {v9}\r\n
time pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub --jobs=20\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 17.51s user 65.80s system 35% cpu 3:56.64 total\r\n\r\n\r\ntime pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub -r\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 17.51s user 65.85s system 34% cpu 3:58.39 total\r\n\r\n\r\nHEAD\r\ntime pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub -r --jobs=20\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 53.95s user 82.44s system 41% cpu 5:25.23 total\r\n\r\ntime pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub -r\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 54.94s user 81.26s system 41% cpu 5:24.86 total\r\n\r\n\r\n\r\nFix with --jobs propagation to pg_restore {on top of v9}\r\ntime pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub -r --jobs=20\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 29.12s user 69.85s system 275% cpu 35.930 total \r\n\r\n\r\nAlthough parallel restore does have a small regression in the ideal case of pg_upgrade --jobs\r\n\r\n\r\nMultiple DBs {4 DBs each having 2 million LOs}\r\n\r\nFix with --jobs scheduling\r\ntime pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub -r --jobs=4\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 31.80s user 109.52s system 120% cpu 1:57.35 total\r\n\r\n\r\n
Patched {v9}\r\ntime pg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir ~/upgrade/data/pub --new-datadir ~/data/sub -r --jobs=4\r\npg_upgrade --old-bindir ~/15/bin --new-bindir ~/install/bin --old-datadir 30.88s user 110.05s system 135% cpu 1:43.97 total\r\n\r\n\r\nRegards\r\nSachin",
"msg_date": "Tue, 2 Jan 2024 17:33:00 +0000",
"msg_from": "\"Kumar, Sachin\" <ssetiya@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "\"Kumar, Sachin\" <ssetiya@amazon.com> writes:\n>> On 11/12/2023, 01:43, \"Tom Lane\" <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n>> ... Maybe that\n>> could be improved in future, but it seems like it'd add a\n>> lot more complexity, and it wouldn't make life any better for\n>> pg_upgrade (which doesn't use parallel pg_restore, and seems\n>> unlikely to want to in future).\n\n> I was not able to find an email thread which details why we are not using\n> parallel pg_restore for pg_upgrade.\n\nWell, it's pretty obvious, isn't it? The parallelism is being applied\nat the per-database level instead.\n\n> IMHO most customers will have a single large\n> database, and not using parallel restore will cause a slow pg_upgrade.\n\nYou've offered no justification for that opinion ...\n\n> I am attaching a patch which enables parallel pg_restore for DATA and POST-DATA part\n> of the dump. It will push down the --jobs value to pg_restore and will restore\n> databases sequentially.\n\nI don't think I trust this patch one bit. It makes way too many\nassumptions about how the --section options work, or even that they\nwill work at all in a binary-upgrade situation. I've spent enough\ntime with that code to know that --section is pretty close to being\na fiction. One point in particular is that this would change the\norder of ACL restore relative to other steps, which almost certainly\nwill cause problems for somebody.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Jan 2024 13:06:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "I wrote:\n> \"Kumar, Sachin\" <ssetiya@amazon.com> writes:\n>> I was not able to find email thread which details why we are not using\n>> parallel pg_restore for pg_upgrade.\n\n> Well, it's pretty obvious isn't it? The parallelism is being applied\n> at the per-database level instead.\n\nOn further reflection, there is a very good reason why it's done like\nthat. Because pg_upgrade is doing schema-only dump and restore,\nthere's next to no opportunity for parallelism within either pg_dump\nor pg_restore. There's no data-loading steps, and there's no\nindex-building either, so the time-consuming stuff that could be\nparallelized just isn't happening in pg_upgrade's usage.\n\nNow it's true that my 0003 patch moves the needle a little bit:\nsince it makes BLOB creation (as opposed to loading) parallelizable,\nthere'd be some hope for parallel pg_restore doing something useful in\na database with very many blobs. But it makes no sense to remove the\nexisting cross-database parallelism in pursuit of that; you'd make\nmany more people unhappy than happy.\n\nConceivably something could be salvaged of your idea by having\npg_upgrade handle databases with many blobs differently from\nthose without, applying parallelism within pg_restore for the\nfirst kind and then using cross-database parallelism for the\nrest. But that seems like a lot of complexity compared to the\npossible win.\n\nIn any case I'd stay far away from using --section in pg_upgrade.\nToo many moving parts there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 05 Jan 2024 15:02:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 06:47:44PM -0500, Tom Lane wrote:\n> * I did not invent a switch to control the batching of blobs; it's\n> just hard-wired at 1000 blobs per group here. Probably we need some\n> user knob for that, but I'm unsure if we want to expose a count or\n> just a boolean for one vs more than one blob per batch. The point of\n> forcing one blob per batch would be to allow exact control during\n> selective restore, and I'm not sure if there's any value in random\n> other settings. On the other hand, selective restore of blobs has\n> been completely broken for the last dozen years and I can't recall any\n> user complaints about that; so maybe nobody cares and we could just\n> leave this as an internal choice.\n\nI think the argument for making this configurable is that folks might have\nfewer larger blobs, in which case it might make sense to lower the batch\nsize, or they might have many smaller blobs, in which case they might want\nto increase it. But I'm a bit skeptical that will make any sort of\ntremendous difference in practice, and I'm not sure how a user would decide\non the right value besides trial-and-error. In any case, at the moment I\nthink it'd be okay to keep this an internal setting and wait for feedback\nfrom the field.\n\n> * As the patch stands, we still build a separate TOC entry for each\n> comment or seclabel or ACL attached to a blob. If you have a lot of\n> blobs with non-default properties then the TOC bloat problem comes\n> back again. We could do something about that, but it would take a bit\n> of tedious refactoring, and the most obvious way to handle it probably\n> re-introduces too-many-locks problems. 
Is this a scenario that's\n> worth spending a lot of time on?\n\nI'll ask around about this.\n\n> Subject: [PATCH v9 1/4] Some small preliminaries for pg_dump changes.\n\nThis one looked good to me.\n\n> Subject: [PATCH v9 2/4] In dumps, group large objects into matching metadata\n> and data entries.\n\nI spent most of my review time reading this one. Overall, it looks pretty\ngood to me, and I think you've called out most of the interesting design\nchoices.\n\n> +\t\t\tchar\t *cmdEnd = psprintf(\" OWNER TO %s\", fmtId(te->owner));\n> +\n> +\t\t\tIssueCommandPerBlob(AH, te, \"ALTER LARGE OBJECT \", cmdEnd);\n\nThis is just a nitpick, but is there any reason not to have\nIssueCommandPerBlob() accept a format string and the corresponding\narguments?\n\n> +\t\twhile (n < 1000 && i + n < ntups)\n\nAnother nitpick: it might be worth moving these hard-wired constants to\nmacros. Without a control switch, that'd at least make it easy for anyone\ndetermined to change the value for their installation.\n\n> Subject: [PATCH v9 3/4] Move BLOBS METADATA TOC entries into SECTION_DATA.\n\nThis one looks reasonable, too.\n\n> In this patch I have just hard-wired pg_upgrade to use\n> --transaction-size 1000. Perhaps there would be value in adding\n> another pg_upgrade option to allow user control of that, but I'm\n> unsure that it's worth the trouble; I think few users would use it,\n> and any who did would see not that much benefit. However, we\n> might need to adjust the logic to make the size be 1000 divided\n> by the number of parallel restore jobs allowed.\n\nI wonder if it'd be worth making this configurable for pg_upgrade as an\nescape hatch in case the default setting is problematic.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:42:27 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Fri, Jan 05, 2024 at 03:02:34PM -0500, Tom Lane wrote:\n> On further reflection, there is a very good reason why it's done like\n> that. Because pg_upgrade is doing schema-only dump and restore,\n> there's next to no opportunity for parallelism within either pg_dump\n> or pg_restore. There's no data-loading steps, and there's no\n> index-building either, so the time-consuming stuff that could be\n> parallelized just isn't happening in pg_upgrade's usage.\n> \n> Now it's true that my 0003 patch moves the needle a little bit:\n> since it makes BLOB creation (as opposed to loading) parallelizable,\n> there'd be some hope for parallel pg_restore doing something useful in\n> a database with very many blobs. But it makes no sense to remove the\n> existing cross-database parallelism in pursuit of that; you'd make\n> many more people unhappy than happy.\n\nI assume the concern is that we'd end up multiplying the effective number\nof workers if we parallelized both in-database and cross-database? Would\nit be sufficient to make those separately configurable with a note about\nthe multiplicative effects of setting both? I think it'd be unfortunate if\npg_upgrade completely missed out on this improvement.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:48:20 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
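One way to picture the multiplicative-workers concern raised above: if pg_upgrade restores J databases concurrently and each pg_restore were also given -j K, up to J*K worker processes could run at once. A toy sketch of deriving the per-database job count from a single overall budget follows; the function name and capping policy are purely illustrative assumptions, not anything from the patch set.

```python
# Sketch: divide a total worker budget between cross-database
# parallelism (databases restored concurrently) and in-database
# parallelism (a hypothetical pg_restore -j per database).

def split_jobs(total_jobs: int, n_databases: int) -> tuple[int, int]:
    # Never run more concurrent restores than there are databases.
    cross_db = min(total_jobs, max(n_databases, 1))
    # Hand each restore an equal share of the remaining budget.
    per_db = max(total_jobs // cross_db, 1)
    return cross_db, per_db

# e.g. a 32-job budget over 4 databases -> 4 concurrent restores, -j8 each
```

With one big database this degenerates to (1, total_jobs), which is the case where pg_upgrade currently gets no parallelism at all.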
{
"msg_contents": "On Fri, Jan 12, 2024 at 04:42:27PM -0600, Nathan Bossart wrote:\n> On Wed, Dec 20, 2023 at 06:47:44PM -0500, Tom Lane wrote:\n>> +\t\t\tchar\t *cmdEnd = psprintf(\" OWNER TO %s\", fmtId(te->owner));\n>> +\n>> +\t\t\tIssueCommandPerBlob(AH, te, \"ALTER LARGE OBJECT \", cmdEnd);\n> \n> This is just a nitpick, but is there any reason not to have\n> IssueCommandPerBlob() accept a format string and the corresponding\n> arguments?\n\nEh, I guess you'd have to find some other way of specifying where the OID\nis supposed to go, which would probably be weird. Please disregard this\none.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:56:35 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Dec 20, 2023 at 06:47:44PM -0500, Tom Lane wrote:\n>> +\t\t\tchar\t *cmdEnd = psprintf(\" OWNER TO %s\", fmtId(te->owner));\n>> +\n>> +\t\t\tIssueCommandPerBlob(AH, te, \"ALTER LARGE OBJECT \", cmdEnd);\n\n> This is just a nitpick, but is there any reason not to have\n> IssueCommandPerBlob() accept a format string and the corresponding\n> arguments?\n\nThe problem is how to combine the individual per-blob OID with the\ncommand string given by the caller. If we want the caller to also be\nable to inject data values, I don't really see how to combine the OID\nwith the va_args list from the caller. If we stick with the design\nthat the caller provides separate \"front\" and \"back\" strings, but ask\nto be able to inject data values into those, then we need two va_args\nlists which C doesn't support, or else an arbitrary decision about\nwhich one gets the va_args. (Admittedly, with only one caller that\nneeds a non-constant string, we could make such a decision; but by the\nsame token we'd gain little.)\n\nIt'd be notationally simpler if we could have the caller supply one\nstring that includes %u where the OID is supposed to go; but then\nwe have problems if an owner name includes %. So on the whole I'm\nnot seeing anything much better than what I did. Maybe I missed\nan idea though.\n\n> Another nitpick: it might be worth moving these hard-wired constants to\n> macros. Without a control switch, that'd at least make it easy for anyone\n> determined to change the value for their installation.\n\nOK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Jan 2024 17:57:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Tue, 2 Jan 2024 at 23:03, Kumar, Sachin <ssetiya@amazon.com> wrote:\n>\n> > On 11/12/2023, 01:43, \"Tom Lane\" <tgl@sss.pgh.pa.us <mailto:tgl@sss.pgh.pa.us>> wrote:\n>\n> > I had initially supposed that in a parallel restore we could\n> > have child workers also commit after every N TOC items, but was\n> > soon disabused of that idea. After a worker processes a TOC\n> > item, any dependent items (such as index builds) might get\n> > dispatched to some other worker, which had better be able to\n> > see the results of the first worker's step. So at least in\n> > this implementation, we disable the multi-command-per-COMMIT\n> > behavior during the parallel part of the restore. Maybe that\n> > could be improved in future, but it seems like it'd add a\n> > lot more complexity, and it wouldn't make life any better for\n> > pg_upgrade (which doesn't use parallel pg_restore, and seems\n> > unlikely to want to in future).\n>\n> I was not able to find email thread which details why we are not using\n> parallel pg_restore for pg_upgrade. IMHO most of the customer will have single large\n> database, and not using parallel restore will cause slow pg_upgrade.\n>\n> I am attaching a patch which enables parallel pg_restore for DATA and POST-DATA part\n> of dump. It will push down --jobs value to pg_restore and will restore database sequentially.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n46a0cd4cefb4d9b462d8cc4df5e7ecdd190bea92 ===\n=== applying patch ./v9-005-parallel_pg_restore.patch\npatching file src/bin/pg_upgrade/pg_upgrade.c\nHunk #3 FAILED at 650.\n1 out of 3 hunks FAILED -- saving rejects to file\nsrc/bin/pg_upgrade/pg_upgrade.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4713.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 20:12:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> 46a0cd4cefb4d9b462d8cc4df5e7ecdd190bea92 ===\n> === applying patch ./v9-005-parallel_pg_restore.patch\n> patching file src/bin/pg_upgrade/pg_upgrade.c\n> Hunk #3 FAILED at 650.\n> 1 out of 3 hunks FAILED -- saving rejects to file\n> src/bin/pg_upgrade/pg_upgrade.c.rej\n\nThat's because v9-005 was posted by itself. But I don't think\nwe should use it anyway.\n\nHere's 0001-0004 again, updated to current HEAD (only line numbers\nchanged) and with Nathan's suggestion to define some macros for\nthe magic constants.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 26 Jan 2024 11:44:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "This patch seems to have stalled out again. In hopes of getting it\nover the finish line, I've done a bit more work to address the two\nloose ends I felt were probably essential to deal with:\n\n* Duplicative blob ACLs are now merged into a single TOC entry\n(per metadata group) with the GRANT/REVOKE commands stored only\nonce. This is to address the possibly-common case where a database\nhas a ton of blobs that have identical-but-not-default ACLs.\n\nI have not done anything about improving efficiency for blob comments\nor security labels. I think it's reasonable to assume that blobs with\ncomments are pets not cattle, and there won't be many of them.\nI suppose it could be argued that seclabels might be used like ACLs\nwith a lot of duplication, but I doubt that there's anyone out there\nat all putting seclabels on blobs in practice. So I don't care to\nexpend effort on that.\n\n* Parallel pg_upgrade cuts the --transaction-size given to concurrent\npg_restore jobs by the -j factor. This is to ensure we keep the\nshared locks table within bounds even in parallel mode.\n\nNow we could go further than that and provide some direct user\ncontrol over these hard-wired settings, but I think that could\nbe left for later, getting some field experience before we design\nan API. In short, I think this patchset is more or less committable.\n\n0001-0004 are rebased up to HEAD, but differ only in line numbers\nfrom the v10 patchset. 0005 handles ACL merging, and 0006 does\nthe other thing.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 15 Mar 2024 19:18:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
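The lock-table reasoning behind scaling --transaction-size by the -j factor can be sketched numerically; the constant mirrors the hard-wired 1000 discussed upthread, while the helper name is a made-up illustration, not code from the patches. Each restore job holds roughly one lock per TOC item in its open transaction, so dividing the batch size by the job count keeps total lock usage near the single-job bound.

```python
# Sketch: scale a pg_restore batch size by pg_upgrade's -j factor so
# that jobs * per_job_size stays near the base single-job budget.

BASE_TRANSACTION_SIZE = 1000  # assumed default batch of TOC items

def per_job_transaction_size(jobs: int) -> int:
    # With N concurrent restore jobs, each gets 1/N of the budget,
    # but never less than 1 TOC item per transaction.
    return max(BASE_TRANSACTION_SIZE // max(jobs, 1), 1)

# jobs=1 -> 1000, jobs=4 -> 250, jobs=2000 -> 1
```

The same arithmetic shows why a user-visible knob could still matter: the real bound depends on max_locks_per_transaction times the number of lock-holding backends, which this fixed budget only approximates.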
{
"msg_contents": "On Fri, 2024-03-15 at 19:18 -0400, Tom Lane wrote:\n> This patch seems to have stalled out again. In hopes of getting it\n> over the finish line, I've done a bit more work to address the two\n> loose ends I felt were probably essential to deal with:\n\nApplies and builds fine.\n\nI didn't scrutinize the code, but I gave it a spin on a database with\n15 million (small) large objects. I tried pg_upgrade --link with and\nwithout the patch on a debug build with the default configuration.\n\nWithout the patch:\n\nRuntime: 74.5 minutes\nMemory usage: ~7GB\nDisk usage: an extra 5GB dump file + log file during the dump\n\nWith the patch:\n\nRuntime: 70 minutes\nMemory usage: ~1GB\nDisk usage: an extra 0.5GB during the dump\n\nMemory usage stayed stable once it reached its peak, so no noticeable\nmemory leaks.\n\nThe reduced memory usage is great. I was surprised by the difference\nin disk usage: the lion's share is the dump file, and that got substantially\nsmaller. But also the log file shrank considerably, because not every\nindividual large object gets logged.\n\nI had a look at \"perf top\", and the profile looked pretty similar in\nboth cases.\n\nThe patch is a clear improvement.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 16 Mar 2024 22:59:45 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Fri, 2024-03-15 at 19:18 -0400, Tom Lane wrote:\n>> This patch seems to have stalled out again. In hopes of getting it\n>> over the finish line, I've done a bit more work to address the two\n>> loose ends I felt were probably essential to deal with:\n\n> Applies and builds fine.\n> I didn't scrutinize the code, but I gave it a spin on a database with\n> 15 million (small) large objects. I tried pg_upgrade --link with and\n> without the patch on a debug build with the default configuration.\n\nThanks for looking at it!\n\n> Without the patch:\n> Runtime: 74.5 minutes\n\n> With the patch:\n> Runtime: 70 minutes\n\nHm, I'd have hoped for a bit more runtime improvement. But perhaps\nnot --- most of the win we saw upthread was from parallelism, and\nI don't think you'd get any parallelism in a pg_upgrade with all\nthe data in one database. (Perhaps there is more to do there later,\nbut I'm still not clear on how this should interact with the existing\ncross-DB parallelism; so I'm content to leave that question for\nanother patch.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 16 Mar 2024 18:46:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Sat, 2024-03-16 at 18:46 -0400, Tom Lane wrote:\n> > Without the patch:\n> > Runtime: 74.5 minutes\n> \n> > With the patch:\n> > Runtime: 70 minutes\n> \n> Hm, I'd have hoped for a bit more runtime improvement.\n\nI did a second run with the patch, and that finished in 66 minutes,\nso there is some jitter there.\n\nI think the reduced memory footprint and the reduced transaction ID\nconsumption alone make this patch worthwhile.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sun, 17 Mar 2024 18:57:45 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Hi,\n\nOn Sat, Mar 16, 2024 at 06:46:15PM -0400, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Fri, 2024-03-15 at 19:18 -0400, Tom Lane wrote:\n> >> This patch seems to have stalled out again. In hopes of getting it\n> >> over the finish line, I've done a bit more work to address the two\n> >> loose ends I felt were probably essential to deal with:\n> \n> > Applies and builds fine.\n> > I didn't scrutinize the code, but I gave it a spin on a database with\n> > 15 million (small) large objects. I tried pg_upgrade --link with and\n> > without the patch on a debug build with the default configuration.\n> \n> Thanks for looking at it!\n> \n> > Without the patch:\n> > Runtime: 74.5 minutes\n> \n> > With the patch:\n> > Runtime: 70 minutes\n> \n> Hm, I'd have hoped for a bit more runtime improvement. \n\nI also think that this is quite a large runtime for pg_upgrade, but the\nmore important savings should be the memory usage.\n\n> But perhaps not --- most of the win we saw upthread was from\n> parallelism, and I don't think you'd get any parallelism in a\n> pg_upgrade with all the data in one database. (Perhaps there is more\n> to do there later, but I'm still not clear on how this should interact\n> with the existing cross-DB parallelism; so I'm content to leave that\n> question for another patch.)\n\nWhat is the status of this? In the commitfest, this patch is marked as\n\"Needs Review\" with Nathan as reviewer - Nathan, were you going to take\nanother look at this or was your mail from January 12th a full review?\n\nMy feeling is that this patch is \"Ready for Committer\" and it is Tom's\ncall to commit it during the next days or not.\n\nI am +1 that this is an important feature/bug fix to have. 
Because we\nhave customers stuck on older versions due to their pathological large\nobject usage, I did some benchmarks (just doing pg_dump, not\npg_upgrade) a while ago which were also very promising; however, I lost\nthe exact numbers/results. I am happy to do further tests if that is\nrequired for this patch to go forward.\n\nAlso, is there a chance this is going to be back-patched? I guess it\nwould be enough if the upgrade target is v17 so it is less of a concern,\nbut it would be nice if people with millions of large objects are not\nstuck until they are ready to upgrade to v17.\n\n\nMichael\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:20:31 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, 2024-03-27 at 10:20 +0100, Michael Banck wrote:\n> Also, is there a chance this is going to be back-patched? I guess it\n> would be enough if the upgrade target is v17 so it is less of a concern,\n> but it would be nice if people with millions of large objects are not\n> stuck until they are ready to upgrade to v17.\n\nIt is a quite invasive patch, and it adds new features (pg_restore in\nbigger transaction batches), so I think this is not for backpatching,\ndesirable as it may seem from the usability angle.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:53:51 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 10:53:51AM +0100, Laurenz Albe wrote:\n> On Wed, 2024-03-27 at 10:20 +0100, Michael Banck wrote:\n> > Also, is there a chance this is going to be back-patched? I guess it\n> > would be enough if the upgrade target is v17 so it is less of a concern,\n> > but it would be nice if people with millions of large objects are not\n> > stuck until they are ready to upgrade to v17.\n> \n> It is a quite invasive patch, and it adds new features (pg_restore in\n> bigger transaction batches), so I think this is not for backpatching,\n> desirable as it may seem from the usability angle.\n\nRight, I forgot about those changes, makes sense.\n\n\nMichael\n\n\n",
"msg_date": "Wed, 27 Mar 2024 11:54:54 +0100",
"msg_from": "Michael Banck <mbanck@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Michael Banck <mbanck@gmx.net> writes:\n> What is the status of this? In the commitfest, this patch is marked as\n> \"Needs Review\" with Nathan as reviewer - Nathan, were you going to take\n> another look at this or was your mail from January 12th a full review?\n\nIn my mind the ball is in Nathan's court. I feel it's about\ncommittable, but he might not agree.\n\n> Also, is there a chance this is going to be back-patched?\n\nNo chance of that I'm afraid. The patch bumps the archive version\nnumber, because it creates TOC entries that older pg_restore would\nnot know what to do with. We can't put that kind of compatibility\nbreak into stable branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:54:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 10:54:05AM -0400, Tom Lane wrote:\n> Michael Banck <mbanck@gmx.net> writes:\n>> What is the status of this? In the commitfest, this patch is marked as\n>> \"Needs Review\" with Nathan as reviewer - Nathan, were you going to take\n>> another look at this or was your mail from January 12th a full review?\n> \n> In my mind the ball is in Nathan's court. I feel it's about\n> committable, but he might not agree.\n\nI'll prioritize another round of review on this one. FWIW I don't remember\nhaving any major concerns on a previous version of the patch set I looked\nat.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 27 Mar 2024 10:08:26 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Mar 27, 2024 at 10:08:26AM -0500, Nathan Bossart wrote:\n> On Wed, Mar 27, 2024 at 10:54:05AM -0400, Tom Lane wrote:\n>> Michael Banck <mbanck@gmx.net> writes:\n>>> What is the status of this? In the commitfest, this patch is marked as\n>>> \"Needs Review\" with Nathan as reviewer - Nathan, were you going to take\n>>> another look at this or was your mail from January 12th a full review?\n>> \n>> In my mind the ball is in Nathan's court. I feel it's about\n>> committable, but he might not agree.\n> \n> I'll prioritize another round of review on this one. FWIW I don't remember\n> having any major concerns on a previous version of the patch set I looked\n> at.\n\nSorry for taking so long to get back to this one. Overall, I think the\ncode is in decent shape. Nothing stands out after a couple of passes. The\nsmall amount of runtime improvement cited upthread is indeed a bit\ndisappointing, but IIUC this at least sets the stage for additional\nparallelism in the future, and the memory/disk usage improvements are\nnothing to sneeze at, either.\n\nThe one design point that worries me a little is the non-configurability of\n--transaction-size in pg_upgrade. I think it's fine to default it to 1,000\nor something, but given how often I've had to fiddle with\nmax_locks_per_transaction, I'm wondering if we might regret hard-coding it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 14:19:30 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Sorry for taking so long to get back to this one. Overall, I think the\n> code is in decent shape.\n\nThanks for looking at it!\n\n> The one design point that worries me a little is the non-configurability of\n> --transaction-size in pg_upgrade. I think it's fine to default it to 1,000\n> or something, but given how often I've had to fiddle with\n> max_locks_per_transaction, I'm wondering if we might regret hard-coding it.\n\nWell, we could add a command-line switch to pg_upgrade, but I'm\nunconvinced that it'd be worth the trouble. I think a very large\nfraction of users invoke pg_upgrade by means of packager-supplied\nscripts that are unlikely to provide a way to pass through such\na switch. I'm inclined to say let's leave it as-is until we get\nsome actual field requests for a switch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Apr 2024 15:28:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Mon, Apr 01, 2024 at 03:28:26PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> The one design point that worries me a little is the non-configurability of\n>> --transaction-size in pg_upgrade. I think it's fine to default it to 1,000\n>> or something, but given how often I've had to fiddle with\n>> max_locks_per_transaction, I'm wondering if we might regret hard-coding it.\n> \n> Well, we could add a command-line switch to pg_upgrade, but I'm\n> unconvinced that it'd be worth the trouble. I think a very large\n> fraction of users invoke pg_upgrade by means of packager-supplied\n> scripts that are unlikely to provide a way to pass through such\n> a switch. I'm inclined to say let's leave it as-is until we get\n> some actual field requests for a switch.\n\nOkay. I'll let you know if I see anything. IIRC usually the pg_dump side\nof pg_upgrade is more prone to lock exhaustion, so you may very well be\nright that this is unnecessary.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 14:37:18 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Mon, Apr 01, 2024 at 03:28:26PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > The one design point that worries me a little is the non-configurability of\n> > --transaction-size in pg_upgrade. I think it's fine to default it to 1,000\n> > or something, but given how often I've had to fiddle with\n> > max_locks_per_transaction, I'm wondering if we might regret hard-coding it.\n> \n> Well, we could add a command-line switch to pg_upgrade, but I'm\n> unconvinced that it'd be worth the trouble. I think a very large\n> fraction of users invoke pg_upgrade by means of packager-supplied\n> scripts that are unlikely to provide a way to pass through such\n> a switch. I'm inclined to say let's leave it as-is until we get\n> some actual field requests for a switch.\n\nI've been importing our schemas and doing upgrade testing, and was\nsurprised when a postgres backend was killed for OOM during pg_upgrade:\n\nKilled process 989302 (postgres) total-vm:5495648kB, anon-rss:5153292kB, ...\n\nUpgrading from v16 => v16 doesn't use nearly as much RAM.\n\nWhile tracking down the responsible commit, I reproduced the problem\nusing a subset of tables; at 959b38d770, the backend process used\n~650 MB RAM, and at its parent commit used at most ~120 MB.\n\n959b38d770b Invent --transaction-size option for pg_restore.\n\nBy changing RESTORE_TRANSACTION_SIZE to 100, backend RAM use goes to\n180 MB during pg_upgrade, which is reasonable.\n\nWith partitioning, we have a lot of tables, some of them wide (126\npartitioned tables, 8942 childs, total 1019315 columns). I didn't track\nif certain parts of our schema contribute most to the high backend mem\nuse, just that it's now 5x (while testing a subset) to 50x higher.\n\nWe'd surely prefer that the transaction size be configurable.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Jul 2024 09:17:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Hi, Justin!\n\nThank you for sharing this.\n\nOn Wed, Jul 24, 2024 at 5:18 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Mon, Apr 01, 2024 at 03:28:26PM -0400, Tom Lane wrote:\n> > Nathan Bossart <nathandbossart@gmail.com> writes:\n> > > The one design point that worries me a little is the non-configurability of\n> > > --transaction-size in pg_upgrade. I think it's fine to default it to 1,000\n> > > or something, but given how often I've had to fiddle with\n> > > max_locks_per_transaction, I'm wondering if we might regret hard-coding it.\n> >\n> > Well, we could add a command-line switch to pg_upgrade, but I'm\n> > unconvinced that it'd be worth the trouble. I think a very large\n> > fraction of users invoke pg_upgrade by means of packager-supplied\n> > scripts that are unlikely to provide a way to pass through such\n> > a switch. I'm inclined to say let's leave it as-is until we get\n> > some actual field requests for a switch.\n>\n> I've been importing our schemas and doing upgrade testing, and was\n> surprised when a postgres backend was killed for OOM during pg_upgrade:\n>\n> Killed process 989302 (postgres) total-vm:5495648kB, anon-rss:5153292kB, ...\n>\n> Upgrading from v16 => v16 doesn't use nearly as much RAM.\n>\n> While tracking down the responsible commit, I reproduced the problem\n> using a subset of tables; at 959b38d770, the backend process used\n> ~650 MB RAM, and at its parent commit used at most ~120 MB.\n>\n> 959b38d770b Invent --transaction-size option for pg_restore.\n>\n> By changing RESTORE_TRANSACTION_SIZE to 100, backend RAM use goes to\n> 180 MB during pg_upgrade, which is reasonable.\n>\n> With partitioning, we have a lot of tables, some of them wide (126\n> partitioned tables, 8942 childs, total 1019315 columns). 
I didn't track\n> if certain parts of our schema contribute most to the high backend mem\n> use, just that it's now 5x (while testing a subset) to 50x higher.\n\nDo you think there is a way to anonymize the schema and share it?\n\n> We'd surely prefer that the transaction size be configurable.\n\nI think we can add an option to pg_upgrade. But I wonder if there is\nsomething else we can do. It seems that restoring some objects is\nmuch more expensive than restoring others. It would be nice to\nidentify such cases and check which memory contexts are growing and\nwhy. It would be helpful if you could share your data schema, so we\ncould dig into it.\n\nI can imagine we need to count some DDL commands in aspect of maximum\nrestore transaction size in a different way than others. Also, we\nprobably need to change the default restore transaction size.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Fri, 26 Jul 2024 22:53:30 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Wed, Jul 24, 2024 at 5:18 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> We'd surely prefer that the transaction size be configurable.\n\n> I think we can add an option to pg_upgrade. But I wonder if there is\n> something else we can do.\n\nYeah, I'm not enamored of adding a command-line option, if only\nbecause I think a lot of people invoke pg_upgrade through\nvendor-provided scripts that aren't going to cooperate with that.\nIf we can find some way to make it adapt without help, that\nwould be much better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 16:05:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Wed, Jul 24, 2024 at 09:17:51AM -0500, Justin Pryzby wrote:\n> With partitioning, we have a lot of tables, some of them wide (126\n> partitioned tables, 8942 childs, total 1019315 columns).\n\nOn Fri, Jul 26, 2024 at 10:53:30PM +0300, Alexander Korotkov wrote:\n> It would be nice to identify such cases and check which memory contexts are\n> growing and why.\n\nI reproduced the problem with this schema:\n\nSELECT format('CREATE TABLE p(i int, %s) PARTITION BY RANGE(i)', array_to_string(a, ', ')) FROM (SELECT array_agg(format('i%s int', i))a FROM generate_series(1,999)i);\nSELECT format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s)TO(%s)', i,i,i+1) FROM generate_series(1,999)i;\n\nThis used over 4 GB of RAM.\n3114201 pryzbyj 20 0 5924520 4.2g 32476 T 0.0 53.8 0:27.35 postgres: pryzbyj postgres [local] UPDATE\n\nThe large context is:\n2024-07-26 15:22:19.280 CDT [3114201] LOG: level: 1; CacheMemoryContext: 5211209088 total in 50067 blocks; 420688 free (14 chunks); 5210788400 used\n\nNote that there seemed to be no issue when I created 999 tables without\npartitioning:\n\nSELECT format('CREATE TABLE t%s(LIKE p)', i,i,i+1) FROM generate_series(1,999)i;\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 26 Jul 2024 15:36:20 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Fri, Jul 26, 2024 at 11:36 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Jul 24, 2024 at 09:17:51AM -0500, Justin Pryzby wrote:\n> > With partitioning, we have a lot of tables, some of them wide (126\n> > partitioned tables, 8942 childs, total 1019315 columns).\n>\n> On Fri, Jul 26, 2024 at 10:53:30PM +0300, Alexander Korotkov wrote:\n> > It would be nice to identify such cases and check which memory contexts are\n> > growing and why.\n>\n> I reproduced the problem with this schema:\n>\n> SELECT format('CREATE TABLE p(i int, %s) PARTITION BY RANGE(i)', array_to_string(a, ', ')) FROM (SELECT array_agg(format('i%s int', i))a FROM generate_series(1,999)i);\n> SELECT format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s)TO(%s)', i,i,i+1) FROM generate_series(1,999)i;\n>\n> This used over 4 GB of RAM.\n> 3114201 pryzbyj 20 0 5924520 4.2g 32476 T 0.0 53.8 0:27.35 postgres: pryzbyj postgres [local] UPDATE\n>\n> The large context is:\n> 2024-07-26 15:22:19.280 CDT [3114201] LOG: level: 1; CacheMemoryContext: 5211209088 total in 50067 blocks; 420688 free (14 chunks); 5210788400 used\n>\n> Note that there seemed to be no issue when I created 999 tables without\n> partitioning:\n>\n> SELECT format('CREATE TABLE t%s(LIKE p)', i,i,i+1) FROM generate_series(1,999)i;\n\nThank you! That was quick.\nI'm looking into this.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 27 Jul 2024 00:42:23 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Jul 26, 2024 at 10:53:30PM +0300, Alexander Korotkov wrote:\n>> It would be nice to identify such cases and check which memory contexts are\n>> growing and why.\n\n> I reproduced the problem with this schema:\n\n> SELECT format('CREATE TABLE p(i int, %s) PARTITION BY RANGE(i)', array_to_string(a, ', ')) FROM (SELECT array_agg(format('i%s int', i))a FROM generate_series(1,999)i);\n> SELECT format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s)TO(%s)', i,i,i+1) FROM generate_series(1,999)i;\n\n> This used over 4 GB of RAM.\n\nInteresting. This doesn't bloat particularly much in a regular\npg_restore, even with --transaction-size=1000; but it does in\npg_upgrade, as you say. I found that the bloat was occurring\nduring these long sequences of UPDATE commands issued by pg_upgrade:\n\n-- For binary upgrade, recreate inherited column.\nUPDATE pg_catalog.pg_attribute\nSET attislocal = false\nWHERE attname = 'i'\n AND attrelid = '\\\"public\\\".\\\"t139\\\"'::pg_catalog.regclass;\n\n-- For binary upgrade, recreate inherited column.\nUPDATE pg_catalog.pg_attribute\nSET attislocal = false\nWHERE attname = 'i1'\n AND attrelid = '\\\"public\\\".\\\"t139\\\"'::pg_catalog.regclass;\n\n-- For binary upgrade, recreate inherited column.\nUPDATE pg_catalog.pg_attribute\nSET attislocal = false\nWHERE attname = 'i2'\n AND attrelid = '\\\"public\\\".\\\"t139\\\"'::pg_catalog.regclass;\n\nI think the problem is basically that each one of these commands\ncauses a relcache inval, for which we can't reclaim space right\naway, so that we end up consuming O(N^2) cache space for an\nN-column inherited table.\n\nIt's fairly easy to fix things so that this example doesn't cause\nthat to happen: we just need to issue these updates as one command\nnot N commands per table. See attached. 
However, I fear this should\njust be considered a draft, because the other code for binary upgrade\nin the immediate vicinity is just as aggressively stupid and\nunoptimized as this bit, and can probably also be driven to O(N^2)\nbehavior with enough CHECK constraints etc. We've gone out of our way\nto make ALTER TABLE capable of handling many updates to a table's DDL\nin one command, but whoever wrote this code appears not to have read\nthat memo, or at least to have believed that performance of pg_upgrade\nisn't of concern.\n\n> Note that there seemed to be no issue when I created 999 tables without\n> partitioning:\n> SELECT format('CREATE TABLE t%s(LIKE p)', i,i,i+1) FROM generate_series(1,999)i;\n\nYeah, because then we don't need to play games with attislocal.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 26 Jul 2024 18:37:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 1:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Jul 26, 2024 at 10:53:30PM +0300, Alexander Korotkov wrote:\n> >> It would be nice to identify such cases and check which memory contexts are\n> >> growing and why.\n>\n> > I reproduced the problem with this schema:\n>\n> > SELECT format('CREATE TABLE p(i int, %s) PARTITION BY RANGE(i)', array_to_string(a, ', ')) FROM (SELECT array_agg(format('i%s int', i))a FROM generate_series(1,999)i);\n> > SELECT format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s)TO(%s)', i,i,i+1) FROM generate_series(1,999)i;\n>\n> > This used over 4 GB of RAM.\n>\n> Interesting. This doesn't bloat particularly much in a regular\n> pg_restore, even with --transaction-size=1000; but it does in\n> pg_upgrade, as you say. I found that the bloat was occurring\n> during these long sequences of UPDATE commands issued by pg_upgrade:\n>\n> -- For binary upgrade, recreate inherited column.\n> UPDATE pg_catalog.pg_attribute\n> SET attislocal = false\n> WHERE attname = 'i'\n> AND attrelid = '\\\"public\\\".\\\"t139\\\"'::pg_catalog.regclass;\n>\n> -- For binary upgrade, recreate inherited column.\n> UPDATE pg_catalog.pg_attribute\n> SET attislocal = false\n> WHERE attname = 'i1'\n> AND attrelid = '\\\"public\\\".\\\"t139\\\"'::pg_catalog.regclass;\n>\n> -- For binary upgrade, recreate inherited column.\n> UPDATE pg_catalog.pg_attribute\n> SET attislocal = false\n> WHERE attname = 'i2'\n> AND attrelid = '\\\"public\\\".\\\"t139\\\"'::pg_catalog.regclass;\n>\n> I think the problem is basically that each one of these commands\n> causes a relcache inval, for which we can't reclaim space right\n> away, so that we end up consuming O(N^2) cache space for an\n> N-column inherited table.\n\nI was about to report the same.\n\n> It's fairly easy to fix things so that this example doesn't cause\n> that to happen: we just need to issue these updates as one 
command\n> not N commands per table. See attached. However, I fear this should\n> just be considered a draft, because the other code for binary upgrade\n> in the immediate vicinity is just as aggressively stupid and\n> unoptimized as this bit, and can probably also be driven to O(N^2)\n> behavior with enough CHECK constraints etc. We've gone out of our way\n> to make ALTER TABLE capable of handling many updates to a table's DDL\n> in one command, but whoever wrote this code appears not to have read\n> that memo, or at least to have believed that performance of pg_upgrade\n> isn't of concern.\n\nI was thinking about counting actual number of queries, not TOC\nentries for transaction number as a more universal solution. But that\nwould require usage of psql_scan() or writing simpler alternative for\nthis particular purpose. That looks quite annoying. What do you\nthink?\n\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 27 Jul 2024 01:55:00 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> On Sat, Jul 27, 2024 at 1:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's fairly easy to fix things so that this example doesn't cause\n>> that to happen: we just need to issue these updates as one command\n>> not N commands per table.\n\n> I was thinking about counting actual number of queries, not TOC\n> entries for transaction number as a more universal solution. But that\n> would require usage of psql_scan() or writing simpler alternative for\n> this particular purpose. That looks quite annoying. What do you\n> think?\n\nThe assumption underlying what we're doing now is that the number\nof SQL commands per TOC entry is limited. I'd prefer to fix the\ncode so that that assumption is correct, at least in normal cases.\nI confess I'd not looked closely enough at the binary-upgrade support\ncode to realize it wasn't correct already :-(. If we go that way,\nwe can fix this while also making pg_upgrade faster rather than\nslower. I also expect that it'll be a lot simpler than putting\na full SQL parser in pg_restore.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 19:06:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Sat, Jul 27, 2024 at 2:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > On Sat, Jul 27, 2024 at 1:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It's fairly easy to fix things so that this example doesn't cause\n> >> that to happen: we just need to issue these updates as one command\n> >> not N commands per table.\n>\n> > I was thinking about counting actual number of queries, not TOC\n> > entries for transaction number as a more universal solution. But that\n> > would require usage of psql_scan() or writing simpler alternative for\n> > this particular purpose. That looks quite annoying. What do you\n> > think?\n>\n> The assumption underlying what we're doing now is that the number\n> of SQL commands per TOC entry is limited. I'd prefer to fix the\n> code so that that assumption is correct, at least in normal cases.\n> I confess I'd not looked closely enough at the binary-upgrade support\n> code to realize it wasn't correct already :-(. If we go that way,\n> we can fix this while also making pg_upgrade faster rather than\n> slower. I also expect that it'll be a lot simpler than putting\n> a full SQL parser in pg_restore.\n\nI'm good with that as soon as we're not going to meet many cases of\nhigh number SQL commands per TOC entry.\n\nJ4F, I have an idea to count number of ';' sings and use it for\ntransaction size counter, since it is as upper bound estimate of\nnumber of SQL commands :-)\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 27 Jul 2024 06:00:47 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> J4F, I have an idea to count number of ';' sings and use it for\n> transaction size counter, since it is as upper bound estimate of\n> number of SQL commands :-)\n\nHmm ... that's not a completely silly idea. Let's keep it in\nthe back pocket in case we can't easily reduce the number of\nSQL commands in some cases.\n\nIt's late here, and I've got some other commitments tomorrow,\nbut I'll try to produce a patch to merge more of the SQL\ncommands in a day or two.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Jul 2024 23:08:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "I wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n>> J4F, I have an idea to count number of ';' sings and use it for\n>> transaction size counter, since it is as upper bound estimate of\n>> number of SQL commands :-)\n\n> Hmm ... that's not a completely silly idea. Let's keep it in\n> the back pocket in case we can't easily reduce the number of\n> SQL commands in some cases.\n\nAfter poking at this for awhile, we can fix Justin's example\ncase by avoiding repeated UPDATEs on pg_attribute, so I think\nwe should do that. It seems clearly a win, with no downside\nother than a small increment of complexity in pg_dump.\n\nHowever, that's probably not sufficient to mark this issue\nas closed. It seems likely that there are other patterns\nthat would cause backend memory bloat. One case that I found\nis tables with a lot of inherited constraints (not partitions,\nbut old-style inheritance). For example, load the output of\nthis Perl script into a database:\n\n-----\nfor (my $i = 0; $i < 100; $i++)\n{\n\tprint \"CREATE TABLE test_inh_check$i (\\n\";\n\tfor (my $j = 0; $j < 1000; $j++)\n\t{\n\t\tprint \"a$j float check (a$j > 10.2),\\n\";\n\t}\n\tprint \"b float);\\n\";\n\tprint \"CREATE TABLE test_inh_check_child$i() INHERITS(test_inh_check$i);\\n\";\n}\n-----\n\npg_dump is horrendously slow on this, thanks to O(N^2) behavior in\nruleutils.c, and pg_upgrade is worse --- and leaks memory too in\nHEAD/v17. The slowness was there before, so I think the lack of\nfield complaints indicates that this isn't a real-world use case.\nStill, it's bad if pg_upgrade fails when it would not have before,\nand there may be other similar issues.\n\nSo I'm forced to the conclusion that we'd better make the transaction\nsize adaptive as per Alexander's suggestion.\n\nIn addition to the patches attached, I experimented with making\ndumpTableSchema fold all the ALTER TABLE commands for a single table\ninto one command. 
That's do-able without too much effort, but I'm now\nconvinced that we shouldn't. It would break the semicolon-counting\nhack for detecting that tables like these involve extra work.\nI'm also not very confident that the backend won't have trouble with\nALTER TABLE commands containing hundreds of subcommands. That's\nsomething we ought to work on probably, but it's not a project that\nI want to condition v17 pg_upgrade's stability on.\n\nAnyway, proposed patches attached. 0001 is some trivial cleanup\nthat I noticed while working on the failed single-ALTER-TABLE idea.\n0002 merges the catalog-UPDATE commands that dumpTableSchema issues,\nand 0003 is Alexander's suggestion.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 28 Jul 2024 17:24:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
},
{
"msg_contents": "On Mon, Jul 29, 2024 at 12:24 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I'm forced to the conclusion that we'd better make the transaction\n> size adaptive as per Alexander's suggestion.\n>\n> In addition to the patches attached, I experimented with making\n> dumpTableSchema fold all the ALTER TABLE commands for a single table\n> into one command. That's do-able without too much effort, but I'm now\n> convinced that we shouldn't. It would break the semicolon-counting\n> hack for detecting that tables like these involve extra work.\n> I'm also not very confident that the backend won't have trouble with\n> ALTER TABLE commands containing hundreds of subcommands. That's\n> something we ought to work on probably, but it's not a project that\n> I want to condition v17 pg_upgrade's stability on.\n>\n> Anyway, proposed patches attached. 0001 is some trivial cleanup\n> that I noticed while working on the failed single-ALTER-TABLE idea.\n> 0002 merges the catalog-UPDATE commands that dumpTableSchema issues,\n> and 0003 is Alexander's suggestion.\n\nNice to see you picked up my idea. I took a look over the patchset.\nLooks good to me.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Wed, 31 Jul 2024 16:39:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failing for 200+ million Large Objects"
}
]
[
{
"msg_contents": "Hi\n\nWhen I wrote an reply to questing\n\nhttps://stackoverflow.com/questions/66523737/postgresql-10-pl-pgsql-test-if-column-exits-in-a-record-variable\n\nI found an interesting idea to have some basic functions and operators for\nrecord type (similar to json, jsonb or hstore).\n\nNow we can do almost all tasks on record type by cast to jsonb type. But\nthis transformation has some overhead (and for some tasks is not\nnecessary), and it is not too intuitive too.\n\nI don't think so we need full functionality like hstore or jsonb (minimally\nbecause record type cannot be persistent and indexed), but some basic\nfunctionality can be useful.\n\n-- tests of basic helper functions for record type\ndo $$\ndeclare\n r record;\n k text; v text; t text;\nbegin\n select oid, relname, relnamespace, reltype from pg_class limit 1 into r;\n if not r ? 'xxx' then\n raise notice 'pg_class has not column xxx';\n end if;\n\n if r ? 'relname' then\n raise notice 'pg_class has column relname';\n end if;\n\n foreach k in array record_keys_array(r)\n loop\n raise notice '% => %', k, r->>k;\n end loop;\n\n raise notice '---';\n\n -- second (slower) variant\n for k in select * from record_keys(r)\n loop\n raise notice '% => %', k, r->>k;\n end loop;\n\n raise notice '---';\n\n -- complete unpacking\n for k, v, t in select * from record_each_text(r)\n loop\n raise notice '% => %(%)', k, v, t;\n end loop;\nend;\n$$;\n\nWhat do you think about this proposal?\n\nComments, notes?\n\nRegards\n\nPavel",
"msg_date": "Mon, 8 Mar 2021 22:29:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal - operators ? and ->> for type record, and functions\n record_keys and record_each_text"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I found an interesting idea to have some basic functions and operators for\n> record type (similar to json, jsonb or hstore).\n\nI think this is a pretty bad idea, because there's no way to know what\ndata type the result of -> should be. \"Smash it all to text\" is a hack,\nnot a solution --- and if you find that hack satisfactory, you might as\nwell be using json or hstore.\n\nMost of the other things you mention are predicated on the assumption\nthat the field set will vary from one value to the next, which again\nseems more like something you'd do with json or hstore than with SQL\ncomposites.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 08 Mar 2021 17:12:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal - operators ? and ->> for type record,\n and functions record_keys and record_each_text"
},
{
"msg_contents": "po 8. 3. 2021 v 23:12 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I found an interesting idea to have some basic functions and operators\n> for\n> > record type (similar to json, jsonb or hstore).\n>\n> I think this is a pretty bad idea, because there's no way to know what\n> data type the result of -> should be. \"Smash it all to text\" is a hack,\n> not a solution --- and if you find that hack satisfactory, you might as\n> well be using json or hstore.\n>\n\nI wrote (and sent) an implementation of generic type, that can hold any\ntype in binary form, and that can reduce IO casts. It can be more effective\nthan text, but an usability is the same like json or text, because you have\nto use explicit casts everywhere. I think other solutions are not possible,\nbecause you don't know the real type before an evaluation.\n\n\n>\n> Most of the other things you mention are predicated on the assumption\n> that the field set will vary from one value to the next, which again\n> seems more like something you'd do with json or hstore than with SQL\n> composites.\n>\n\nI am thinking about effectiveness in triggers. NEW and OLD variables are of\nrecord type, and sometimes you need to do operation just on tupledesc. When\nI work with a record type, I can do it, without any overhead. When I need\nto use jsonb or hstore, I have to pay, because all fields should be\ntransformated.\n\nMinimally the operator \"?\" can be useful. It allows access to statically\nspecified fields without risk of exception. So I can write universal\ntrigger with\n\nIF NEW ? 'fieldx' THEN\n RAISE NOTICE '%', NEW.fieldx ;\n\nand this operation can be fast and safe\n\n\n\n> regards, tom lane\n>",
"msg_date": "Mon, 8 Mar 2021 23:44:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - operators ? and ->> for type record, and functions\n record_keys and record_each_text"
}
] |
[
{
"msg_contents": "Hi All,\r\nOn the master branch, it is possible to install multiple versions of pg_stat_statements with CREATE EXTENSION, but all the tests in sql/ only look at the latest version available, without testing past compatibility. \r\n\r\nSince we support to install lowest version 1.4 currently, add some tests to verify compatibility, upgrade from lower versions of pg_stat_statements.",
"msg_date": "Tue, 9 Mar 2021 11:35:14 +0800",
"msg_from": "\"Erica Zhang\" <ericazhangy@qq.com>",
"msg_from_op": true,
"msg_subject": "Add some tests for pg_stat_statements compatibility verification\n under contrib"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 09, 2021 at 11:35:14AM +0800, Erica Zhang wrote:\n> Hi All,\n> On the master branch, it is possible to install multiple versions of pg_stat_statements with CREATE EXTENSION, but all the tests in sql/ on look at the latest version available, without testing past compatibility. \n> \n> Since we support to install lowest version 1.4 currently, add some tests to verify compatibility, upgrade from lower versions of pg_stat_statements.\n\nThe upgrade scripts are already tested as postgres will install 1.4 and perform\nall upgrades to reach the default version.\n\nBut an additional thing being tested here is the ABI compatibility when there's\na mismatch between the library and the SQL definition, which seems like a\nreasonable thing to test.\n\nLooking at the patch:\n\n+SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4';\n\nWhat is this supposed to test? All those tests will break every time we change\nthe default version, which will add maintenance efforts. It could be good to\nhave one test breaking when changing the version to remind us to add a test for\nthe new version, but not more.\n\n\n",
"msg_date": "Tue, 9 Mar 2021 17:09:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
},
{
"msg_contents": "Hi Erica,\n\nOn Wed, Mar 10, 2021 at 11:14:52AM +0800, Erica Zhang wrote:\n> Hi Julien,\n> Thanks a lot for the quick review. Please see my answer below in blue. Attached is the new patch.\n\nThanks!\n\n>> The upgrade scripts are already tested as postgres will install 1.4 and perform\n>> all upgrades to reach the default version.\n> Thanks for pointing that the upgrades paths are covered by upgrade scripts tests. Since I don't need to test the upgrade, I will test the installation of different versions directly, any concern?\n\nI think you should keep your previous approach. The result will be the same\nbut it will consume less resources for that which is always good.\n\n>> +SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4';\n>> \n>> \n>> What is this supposed to test? All those tests will break every time we change\n>> the default version, which will add maintenance efforts. It could be good to\n>> have one test breaking when changing the version to remind us to add a test for\n>> the new version, but not more.\n> Here I just want to verify that \"installed\" version is the expected version. But we do have the issue as you mentioned which will add maintenance efforts. \n> \n> So I prefer to keep one test as now which can remind us to add a new version. As for others, just to check the count(*) to make sure installation is success.\n> Such as SELECT count(*) FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4'; What do you think?\n\nHow about tweaking your previous query so only the last execution fails when\npg_stat_statements default version is updated? 
Something like:\n\nSELECT installed_version = default_version, installed_version\nFROM pg_available_extensions\nWHERE name = 'pg_stat_statements';\n\nThis way the same query can be reused for both older versions and current\nversion.\n\nAlso, can you register your patch for the next commitfest at\nhttps://commitfest.postgresql.org/33/, to make sure it won't be forgotten?\n\n\n",
"msg_date": "Wed, 10 Mar 2021 11:35:53 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
},
{
"msg_contents": "Hi Julien,\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Julien Rouhaud\" <rjuju123@gmail.com>;\r\nDate: Wed, Mar 10, 2021 11:35 AM\r\nTo: \"Erica Zhang\"<ericazhangy@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@postgresql.org>;\r\nSubject: Re: Add some tests for pg_stat_statements compatibility verification under contrib\r\n\r\n\r\n\r\nHi Erica,\r\n\r\nOn Wed, Mar 10, 2021 at 11:14:52AM +0800, Erica Zhang wrote:\r\n> Hi Julien,\r\n> Thanks a lot for the quick review. Please see my answer below in blue. Attached is the new patch.\r\n\r\nThanks!\r\n\r\n>> The upgrade scripts are already tested as postgres will install 1.4 and perform\r\n>> all upgrades to reach the default version.\r\n> Thanks for pointing that the upgrades paths are covered by upgrade scripts tests. Since I don't need to test the upgrade, I will test the installation of different versions directly, any concern?\r\n\r\nI think you should keep your previous approach. The result will be the same\r\nbut it will consume less resources for that which is always good.\r\nAgreed!\r\n\r\n\r\n>> +SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4';\r\n>> \r\n>> \r\n>> What is this supposed to test? All those tests will break every time we change\r\n>> the default version, which will add maintenance efforts. It could be good to\r\n>> have one test breaking when changing the version to remind us to add a test for\r\n>> the new version, but not more.\r\n> Here I just want to verify that \"installed\" version is the expected version. But we do have the issue as you mentioned which will add maintenance efforts. \r\n> \r\n> So I prefer to keep one test as now which can remind us to add a new version. As for others, just to check the count(*) to make sure installation is success.\r\n> Such as SELECT count(*) FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4'; What do you think?\r\n\r\nHow about tweaking your previous query so only the last execution fails when\r\npg_stat_statements default version is updated? Something like:\r\n\r\nSELECT installed_version = default_version, installed_version\r\nFROM pg_available_extensions\r\nWHERE name = 'pg_stat_statements';\r\n\r\nThis way the same query can be reused for both older versions and current\r\nversion.\r\nYep, it's neater to use the query as you suggested. Thanks!\r\n\r\n\r\nAlso, can you register your patch for the next commitfest at\r\nhttps://commitfest.postgresql.org/33/, to make sure it won't be forgotten?",
"msg_date": "Mon, 15 Mar 2021 15:05:24 +0800",
"msg_from": "\"Erica Zhang\" <ericazhangy@qq.com>",
"msg_from_op": true,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 03:05:24PM +0800, Erica Zhang wrote:\n> This way the same query can be reused for both older versions and current\n> version.\n> Yep, it's neater to use the query as you suggested. Thanks!\n> \n> Also, can you register your patch for the next commitfest at\n> https://commitfest.postgresql.org/33/, to make sure it won't be forgotten?\n\nI was just looking at your patch, and I think that you should move all\nthe past compatibility tests into a separate test file, in a way\nconsistent to what we do in contrib/pageinspect/ for\noldextversions.sql. I would suggest to use the same file names, while\non it.\n--\nMichael",
"msg_date": "Wed, 25 Aug 2021 16:16:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
},
{
"msg_contents": "On Wed, Aug 25, 2021 at 04:16:08PM +0900, Michael Paquier wrote:\n> I was just looking at your patch, and I think that you should move all\n> the past compatibility tests into a separate test file, in a way\n> consistent to what we do in contrib/pageinspect/ for\n> oldextversions.sql. I would suggest to use the same file names, while\n> on it.\n\nThe current commit fest is ending, and it would be a waste to do\nnothing here, so I have looked at what you proposed and reworked it.\nThe patch was blindly testing pg_stat_statements_reset() in all the\nversions bumped with the same query on pg_stat_statements done each\ntime, which does not help in checking the actual parts of the code\nthat have changed, and there are two of them:\n- pg_stat_statements_reset() execution got authorized for\npg_read_all_stats once in 1.6.\n- pg_stat_statements() has been extended in 1.8, so we could just have\none query stressing this function in the tests for <= 1.7.\n\nThere is also no need for tests on 1.9, which is the latest version.\nTests for this one should be added once we bump the code to the next\nversion. At the end I finish with the attached, counting for the\nback-and-forth game with pg_read_all_stats.\n--\nMichael",
"msg_date": "Thu, 30 Sep 2021 11:12:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 11:12:21AM +0900, Michael Paquier wrote:\n> There is also no need for tests on 1.9, which is the latest version.\n> Tests for this one should be added once we bump the code to the next\n> version. At the end I finish with the attached, counting for the\n> back-and-forth game with pg_read_all_stats.\n\nDone as of 2b0da03.\n--\nMichael",
"msg_date": "Mon, 4 Oct 2021 14:08:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
}
] |
[
{
"msg_contents": "Hi,\n\ncurrently, only the gid is passed on to the filter_prepare callback. \nWhile we probably should not pass a full ReorderBufferTXN (as we do for \nmost other output plugin callbacks), a bit more information would be \nnice, I think.\n\nAttached is a patch that adds the xid (still lacking docs changes). The \nquestion about stream_prepare being optional made me think about whether \nan output plugin needs to know if changes have been already streamed \nprior to a prepare. Maybe not? Any other information you think the \noutput plugin might find useful to decide whether or not to skip the \nprepare?\n\nIf you are okay with adding just the xid, I'll add docs changes to the \npatch provided.\n\nRegards\n\nMarkus",
"msg_date": "Tue, 9 Mar 2021 09:44:40 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 2:14 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> currently, only the gid is passed on to the filter_prepare callback.\n> While we probably should not pass a full ReorderBufferTXN (as we do for\n> most other output plugin callbacks), a bit more information would be\n> nice, I think.\n>\n\nHow the proposed 'xid' parameter can be useful? What exactly plugins\nwant to do with it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Mar 2021 15:48:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 10.03.21 11:18, Amit Kapila wrote:\n> On Tue, Mar 9, 2021 at 2:14 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n>> currently, only the gid is passed on to the filter_prepare callback.\n>> While we probably should not pass a full ReorderBufferTXN (as we do for\n>> most other output plugin callbacks), a bit more information would be\n>> nice, I think.\n> \n> How the proposed 'xid' parameter can be useful? What exactly plugins\n> want to do with it?\n\nThe xid is the very basic identifier for transactions in Postgres. Any \noutput plugin that interacts with Postgres in any way slightly more \ninteresting than \"filter by gid prefix\" is very likely to come across a \nTransactionId.\n\nIt allows for basics like checking if the transaction to decode still is \nin progress, for example. Or in a much more complex scenario, decide on \nwhether or not to filter based on properties the extension stored during \nprocessing the transaction.\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Wed, 10 Mar 2021 11:56:12 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 4:26 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 10.03.21 11:18, Amit Kapila wrote:\n> > On Tue, Mar 9, 2021 at 2:14 PM Markus Wanner\n> > <markus.wanner@enterprisedb.com> wrote:\n> >> currently, only the gid is passed on to the filter_prepare callback.\n> >> While we probably should not pass a full ReorderBufferTXN (as we do for\n> >> most other output plugin callbacks), a bit more information would be\n> >> nice, I think.\n> >\n> > How the proposed 'xid' parameter can be useful? What exactly plugins\n> > want to do with it?\n>\n> The xid is the very basic identifier for transactions in Postgres. Any\n> output plugin that interacts with Postgres in any way slightly more\n> interesting than \"filter by gid prefix\" is very likely to come across a\n> TransactionId.\n>\n> It allows for basics like checking if the transaction to decode still is\n> in progress, for example.\n>\n\nBut this happens when we are decoding prepare, so it is clear that the\ntransaction is prepared, why any additional check?\n\n> Or in a much more complex scenario, decide on\n> whether or not to filter based on properties the extension stored during\n> processing the transaction.\n>\n\nWhat in this can't be done with GID and how XID can achieve it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 09:28:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 11.03.21 04:58, Amit Kapila wrote:\n> But this happens when we are decoding prepare, so it is clear that the\n> transaction is prepared, why any additional check?\n\nAn output plugin cannot assume the transaction is still prepared and \nuncommitted at the point in time it gets to decode the prepare. \nTherefore, the transaction may or may not be still in progress. \nHowever, my point is that the xid is the more generally useful \nidentifier than the gid.\n\n> What in this can't be done with GID and how XID can achieve it?\n\nIt's a convenience. Of course, an output plugin could lookup the xid \nvia the gid. But why force it to have to do that when the xid would be \nso readily available? (Especially given that seems rather expensive. \nOr how would an extension lookup the xid by gid?)\n\nThe initial versions by Nikhil clearly did include it (actually a full \nReorderBufferTXN, which I think would be even better). I'm not clear on \nyour motivations to restrict the API. What's clear to me is that the \nmore information Postgres exposes to plugins and extensions, the easier \nit becomes to extend Postgres. (Modulo perhaps API stability \nconsiderations. A TransactionId clearly is not a concern in that area. \n Especially given we expose the entire ReorderBufferTXN struct for \nother callbacks.)\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:14:48 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 2:44 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 11.03.21 04:58, Amit Kapila wrote:\n> > But this happens when we are decoding prepare, so it is clear that the\n> > transaction is prepared, why any additional check?\n>\n> An output plugin cannot assume the transaction is still prepared and\n> uncommitted at the point in time it gets to decode the prepare.\n> Therefore, the transaction may or may not be still in progress.\n> However, my point is that the xid is the more generally useful\n> identifier than the gid.\n>\n> > What in this can't be done with GID and how XID can achieve it?\n>\n> It's a convenience. Of course, an output plugin could lookup the xid\n> via the gid. But why force it to have to do that when the xid would be\n> so readily available?\n>\n\nI am not suggesting doing any such look-up. It is just that the use of\nadditional parameter(s) for deciding whether to decode at prepare time\nor to decode later as a regular one-phase transaction is not clear to\nme. Now, it is possible that your argument is right that passing\nadditional information gives flexibility to plugin authors and we\nshould just do what you are saying or maybe go even a step further and\npass ReorderBufferTxn but I am not completely sure about this point\nbecause I didn't hear of any concrete use case.\n\nAnyone else would like to weigh in here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 13 Mar 2021 15:43:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Sat, Mar 13, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 11, 2021 at 2:44 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> >\n> > On 11.03.21 04:58, Amit Kapila wrote:\n> > > But this happens when we are decoding prepare, so it is clear that the\n> > > transaction is prepared, why any additional check?\n> >\n> > An output plugin cannot assume the transaction is still prepared and\n> > uncommitted at the point in time it gets to decode the prepare.\n> > Therefore, the transaction may or may not be still in progress.\n> > However, my point is that the xid is the more generally useful\n> > identifier than the gid.\n> >\n> > > What in this can't be done with GID and how XID can achieve it?\n> >\n> > It's a convenience. Of course, an output plugin could lookup the xid\n> > via the gid. But why force it to have to do that when the xid would be\n> > so readily available?\n> >\n>\n> I am not suggesting doing any such look-up. It is just that the use of\n> additional parameter(s) for deciding whether to decode at prepare time\n> or to decode later as a regular one-phase transaction is not clear to\n> me. Now, it is possible that your argument is right that passing\n> additional information gives flexibility to plugin authors and we\n> should just do what you are saying or maybe go even a step further and\n> pass ReorderBufferTxn but I am not completely sure about this point\n> because I didn't hear of any concrete use case.\n>\n\nDuring a discussion of GID's in the nearby thread [1], it came up that\nthe replication solutions might want to generate a different GID based\non xid for two-phase transactions, so it seems this patch has a\nuse-case.\n\nMarkus, feel free to update the docs, you might want to mention about\nuse-case of XID. 
Also, feel free to add an open item on PG-14 Open\nItems page [2].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BopiV4aFTmWWUF9h_32%3DHfPOW9vZASHarT0UA5oBrtGw%40mail.gmail.com\n[2] - https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 21 Mar 2021 16:23:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "Hello Amit,\n\nOn 21.03.21 11:53, Amit Kapila wrote:\n> During a discussion of GID's in the nearby thread [1], it came up that\n> the replication solutions might want to generate a different GID based\n> on xid for two-phase transactions, so it seems this patch has a\n> use-case.\n\nthank you for reconsidering this patch. I updated it to include the \nrequired adjustments to the documentation. Please review.\n\n> Markus, feel free to update the docs, you might want to mention about\n> use-case of XID. Also, feel free to add an open item on PG-14 Open\n> Items page [2].\n\nYes, will add.\n\nRegards\n\nMarkus",
"msg_date": "Mon, 22 Mar 2021 09:50:21 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 22.03.21 09:50, Markus Wanner wrote:\n> thank you for reconsidering this patch. I updated it to include the \n> required adjustments to the documentation. Please review.\n\nI tweaked the wording in the docs a bit, resulting in a v3 of this patch.\n\nRegards\n\nMarkus",
"msg_date": "Thu, 25 Mar 2021 09:36:58 +0100",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 2:07 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 22.03.21 09:50, Markus Wanner wrote:\n> > thank you for reconsidering this patch. I updated it to include the\n> > required adjustments to the documentation. Please review.\n>\n> I tweaked the wording in the docs a bit, resulting in a v3 of this patch.\n>\n\nOne minor comment:\n- The callback has to provide the same static answer for a given\n- <parameter>gid</parameter> every time it is called.\n+ The callback may be invoked several times per transaction to decode and\n+ must provide the same static answer for a given pair of\n\nWhy do you think that this callback can be invoked several times per\ntransaction? I think it could be called at most two times, once at\nprepare time, then at commit or rollback time. So, I think using\n'multiple' instead of 'several' times is better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Mar 2021 11:42:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 11:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 25, 2021 at 2:07 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> >\n> > On 22.03.21 09:50, Markus Wanner wrote:\n> > > thank you for reconsidering this patch. I updated it to include the\n> > > required adjustments to the documentation. Please review.\n> >\n> > I tweaked the wording in the docs a bit, resulting in a v3 of this patch.\n> >\n>\n> One minor comment:\n> - The callback has to provide the same static answer for a given\n> - <parameter>gid</parameter> every time it is called.\n> + The callback may be invoked several times per transaction to decode and\n> + must provide the same static answer for a given pair of\n>\n> Why do you think that this callback can be invoked several times per\n> transaction? I think it could be called at most two times, once at\n> prepare time, then at commit or rollback time. So, I think using\n> 'multiple' instead of 'several' times is better.\n>\n\n+ to it not being a unique identifier. Therefore, other systems combine\n+ the <parameter>xid</parameter> with a node identifier to form a\n+ globally unique transaction identifier.\n\nWhat exactly is the node identifier here? Is it a publisher or\nsubscriber node id? We might want to be a bit more explicit here?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Mar 2021 11:53:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 29.03.21 08:23, Amit Kapila wrote:\n> On Mon, Mar 29, 2021 at 11:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> Why do you think that this callback can be invoked several times per\n>> transaction? I think it could be called at most two times, once at\n>> prepare time, then at commit or rollback time. So, I think using\n>> 'multiple' instead of 'several' times is better.\n\nThank you for reviewing.\n\nThat's fine with me, I just wanted to provide an explanation for why the \ncallback needs to be stable. (I would not want to limit us in the docs \nto guarantee it is called only twice. 'multiple' sounds generic enough, \nI changed it to that word.)\n\n> What exactly is the node identifier here? Is it a publisher or\n> subscriber node id? We might want to be a bit more explicit here?\n\nGood point. I clarified this to speak of the origin node (given this is \nnot necessarily the direct provider when using chained replication).\n\nAn updated patch is attached.\n\nRegards\n\nMarkus",
"msg_date": "Mon, 29 Mar 2021 09:27:25 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "Sorry, git tricked me. Here's the patch including actual changes.\n\nRegards\n\nMarkus",
"msg_date": "Mon, 29 Mar 2021 09:33:12 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 12:57 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 08:23, Amit Kapila wrote:\n> > On Mon, Mar 29, 2021 at 11:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > What exactly is the node identifier here? Is it a publisher or\n> > subscriber node id? We might want to be a bit more explicit here?\n>\n> Good point. I clarified this to speak of the origin node (given this is\n> not necessarily the direct provider when using chained replication).\n>\n\nThis might or might not be valid for all logical replication solutions\nbut in the publisher-subscriber model, it would easily lead to\nduplicate identifiers and block the replication. For example, when\nthere are multiple subscriptions (say - 2) for multiple publications\n(again say-2), the two subscriptions are on Node-B and two\npublications are on Node-A. Say both publications are for different\ntables tab-1 and tab-2. Now, a prepared transaction involving\noperation on both tables will generate the same GID. This will block\nforever if someone has set synchronous_standby_names for both\nsubscriptions because Prepare won't finish till both the subscribers\nprepare the transaction and due to conflict one of the subscriber will\nnever finish the prepare. I thought it might be better to use\nsubscriber-id (or unique replication-origin-id for a subscription) and\nthe origin node's xid as that will minimize the chances of any such\ncollision. We can reach this situation if the user prepares the\ntransaction with the same name as we have generated but we can suggest\nuser not to do this or we can generate an internal prepared\ntransaction name starting with pg_* and disallow prepared transaction\nnames from the user starting with pg_ as we do in some other cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Mar 2021 14:43:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 29.03.21 11:13, Amit Kapila wrote:\n> This might or might not be valid for all logical replication solutions\n> but in the publisher-subscriber model, it would easily lead to\n> duplicate identifiers and block the replication. For example, when\n> there are multiple subscriptions (say - 2) for multiple publications\n> (again say-2), the two subscriptions are on Node-B and two\n> publications are on Node-A. Say both publications are for different\n> tables tab-1 and tab-2. Now, a prepared transaction involving\n> operation on both tables will generate the same GID.\n\nI think you are misunderstanding. This is about a globally unique \nidentifier for a transaction, which has nothing to do with a GID used to \nprepare a transaction. This *needs* to be the same for what logical is \nthe same transaction.\n\nWhat GID a downsteam subscriber uses when receiving messages from some \nnon-Postgres-provided output plugin clearly is out of scope for this \ndocumentation. The point is to highlight how the xid can be useful for \nfilter_prepare. And that serves transaction identification purposes.\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Mon, 29 Mar 2021 11:41:05 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 3:11 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 11:13, Amit Kapila wrote:\n> > This might or might not be valid for all logical replication solutions\n> > but in the publisher-subscriber model, it would easily lead to\n> > duplicate identifiers and block the replication. For example, when\n> > there are multiple subscriptions (say - 2) for multiple publications\n> > (again say-2), the two subscriptions are on Node-B and two\n> > publications are on Node-A. Say both publications are for different\n> > tables tab-1 and tab-2. Now, a prepared transaction involving\n> > operation on both tables will generate the same GID.\n>\n> I think you are misunderstanding. This is about a globally unique\n> identifier for a transaction, which has nothing to do with a GID used to\n> prepare a transaction. This *needs* to be the same for what logical is\n> the same transaction.\n>\n\nOkay, but just in the previous sentence (\"However, reuse of the same\n<parameter>gid</parameter> for example by a downstream node using\nmultiple subscriptions may lead to it not being a unique\nidentifier.\"), you have explained how sending a GID identifier can\nlead to a non-unique identifier for multiple subscriptions. And then\nin the next line, the way you are suggesting to generate GID by use of\nXID seems to have the same problem, so that caused confusion for me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 29 Mar 2021 15:23:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 29.03.21 11:53, Amit Kapila wrote:\n> Okay, but just in the previous sentence (\"However, reuse of the same\n> <parameter>gid</parameter> for example by a downstream node using\n> multiple subscriptions may lead to it not being a unique\n> identifier.\"), you have explained how sending a GID identifier can\n> lead to a non-unique identifier for multiple subscriptions.\n\nMaybe the example of the downstream node is a bad one. I understand \nthat can cause confusion. Let's leave away that example and focus on \nthe output plugin side. v6 attached.\n\n> And then\n> in the next line, the way you are suggesting to generate GID by use of\n> XID seems to have the same problem, so that caused confusion for me.\n\nIt was not intended as a suggestion for how to generate GIDs at all. \nHopefully leaving away that bad example will make it less likely to \nappear related to GID generation on the subscriber.\n\nRegards\n\nMarkus",
"msg_date": "Mon, 29 Mar 2021 12:00:19 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 3:30 PM Markus Wanner <\nmarkus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 11:53, Amit Kapila wrote:\n> > Okay, but just in the previous sentence (\"However, reuse of the same\n> > <parameter>gid</parameter> for example by a downstream node using\n> > multiple subscriptions may lead to it not being a unique\n> > identifier.\"), you have explained how sending a GID identifier can\n> > lead to a non-unique identifier for multiple subscriptions.\n>\n> Maybe the example of the downstream node is a bad one. I understand\n> that can cause confusion. Let's leave away that example and focus on\n> the output plugin side. v6 attached.\n>\n> > And then\n> > in the next line, the way you are suggesting to generate GID by use of\n> > XID seems to have the same problem, so that caused confusion for me.\n>\n> It was not intended as a suggestion for how to generate GIDs at all.\n> Hopefully leaving away that bad example will make it less likely to\n> appear related to GID generation on the subscriber.\n\nThanks for the updated patch. Patch applies neatly, make check and make\ncheck-world passes. The code changes look fine to me.\n\nIn documentation, I did not understand the bold contents in the below\ndocumentation:\n+ The <parameter>ctx</parameter> parameter has the same contents as\nfor\n+ the other callbacks. The parameters <parameter>xid</parameter>\n+ and <parameter>gid</parameter> provide two different ways to\nidentify\n+ the transaction. For some systems, the <parameter>gid</parameter>\nmay\n+ be sufficient. 
However, reuse of the same\n<parameter>gid</parameter>\n+ may lead to it not being a unique identifier.\n\n*Therefore, other systems+ combine the <parameter>xid</parameter>\nwith an identifier of the origin+ node to form a globally unique\ntransaction identifier.* The later\n+ <command>COMMIT PREPARED</command> or <command>ROLLBACK\n+ PREPARED</command> carries both identifiers, providing an output\nplugin\n+ the choice of what to use.\n\nI know that in publisher/subscriber decoding, the prepared transaction\ngid will be modified to either pg_xid_origin or pg_xid_subid(it is still\nbeing discussed in logical decoding of two-phase transactions thread, it\nis in not yet completely finalized) to solve the subscriber getting the\nsame gid name.\nBut in prepare_filter_cb callback, by stating \"other systems ...\" it is not\nvery clear who will change the GID. Are we referring to\npublisher/subscriber decoding?\n\nRegards,\nVignesh",
"msg_date": "Mon, 29 Mar 2021 15:48:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 29.03.21 12:18, vignesh C wrote:\n> But in prepare_filter_cb callback, by stating \"other systems ...\" it is \n> not very clear who will change the GID. Are we referring to \n> publisher/subscriber decoding?\n\nThanks for your feedback. This is not about GIDs at all, but just about \nidentifying a transaction. I'm out of ideas on how else to phrase that. \n Any suggestion?\n\nMaybe we should not try to give examples and reference other systems, \nbut just leave it at:\n\n The <parameter>ctx</parameter> parameter has the same contents as for\n the other callbacks. The parameters <parameter>xid</parameter>\n and <parameter>gid</parameter> provide two different ways to identify\n the transaction. The later <command>COMMIT PREPARED</command> or\n <command>ROLLBACK PREPARED</command> carries both identifiers,\n providing an output plugin the choice of what to use.\n\nThat is sufficient an explanation in my opinion. What do you think?\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Mon, 29 Mar 2021 12:52:01 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 4:22 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 12:18, vignesh C wrote:\n> > But in prepare_filter_cb callback, by stating \"other systems ...\" it is\n> > not very clear who will change the GID. Are we referring to\n> > publisher/subscriber decoding?\n>\n> Thanks for your feedback. This is not about GIDs at all, but just about\n> identifying a transaction. I'm out of ideas on how else to phrase that.\n> Any suggestion?\n>\n> Maybe we should not try to give examples and reference other systems,\n> but just leave it at:\n>\n> The <parameter>ctx</parameter> parameter has the same contents as for\n> the other callbacks. The parameters <parameter>xid</parameter>\n> and <parameter>gid</parameter> provide two different ways to identify\n> the transaction. The later <command>COMMIT PREPARED</command> or\n> <command>ROLLBACK PREPARED</command> carries both identifiers,\n> providing an output plugin the choice of what to use.\n>\n> That is sufficient an explanation in my opinion. What do you think?\n\nThe above content looks sufficient to me. As we already explain this\nearlier \"The optional filter_prepare_cb callback is called to\ndetermine whether data that is part of the current two-phase commit\ntransaction should be considered for decoding at this prepare stage or\nlater as a regular one-phase transaction at COMMIT PREPARED time.\"\nwhich helps in understanding what filter_prepare_cb means.\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 29 Mar 2021 16:34:23 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 29.03.21 13:04, vignesh C wrote:\n> The above content looks sufficient to me.\n\nGood, thanks. Based on that, I'm adding v7 of the patch.\n\nRegards\n\nMarkus",
"msg_date": "Mon, 29 Mar 2021 13:16:36 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 4:46 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 13:04, vignesh C wrote:\n> > The above content looks sufficient to me.\n>\n> Good, thanks. Based on that, I'm adding v7 of the patch.\n>\n\nThanks for the updated patch.\n\n@@ -440,7 +441,8 @@ pg_decode_rollback_prepared_txn(LogicalDecodingContext *ctx,\n * substring, then we filter it out.\n */\n static bool\n-pg_decode_filter_prepare(LogicalDecodingContext *ctx, const char *gid)\n+pg_decode_filter_prepare(LogicalDecodingContext *ctx, TransactionId xid,\n+ const char *gid)\n {\n if (strstr(gid, \"_nodecode\") != NULL)\n return true;\n\nCurrently there is one test to filter prepared txn with gid having\n\"_nodecode\". I'm not sure if we can have any tests based on xid. I'm\nsure you might have thought about it. Have you intentionally not\nwritten any tests, as it will be difficult to predict the xid? I just\nwanted to confirm my understanding.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 29 Mar 2021 17:30:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 29.03.21 14:00, vignesh C wrote:\n> Have you intentionally not\n> written any tests as it will be difficult to predict the xid. I just\n> wanted to confirm my understanding.\n\nYeah, that's the reason it is hard to test this with a regression \ntest. It might be possible to come up with a TAP test for this, but I \ndoubt that's worth it, as it's a pretty trivial addition.\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Mon, 29 Mar 2021 14:10:28 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 5:40 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 14:00, vignesh C wrote:\n> > Have you intentionally not\n> > written any tests as it will be difficult to predict the xid. I just\n> > wanted to confirm my understanding.\n>\n> Yeah, that's the reason this is hard to test this with a regression\n> test. It might be possible to come up with a TAP test for this, but I\n> doubt that's worth it, as it's a pretty trivial addition.\n>\n\nThanks, I don't have any more comments.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 29 Mar 2021 19:07:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On Mon, Mar 29, 2021 at 4:46 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 29.03.21 13:04, vignesh C wrote:\n> > The above content looks sufficient to me.\n>\n> Good, thanks. Based on that, I'm adding v7 of the patch.\n>\n\nPushed. In the last version, you have named the patch incorrectly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 30 Mar 2021 14:03:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
},
{
"msg_contents": "On 30.03.21 10:33, Amit Kapila wrote:\n> Pushed. In the last version, you have named the patch incorrectly.\n\nThanks a lot, Amit!\n\nRegards\n\nMarkus\n\n\n",
"msg_date": "Tue, 30 Mar 2021 11:46:37 +0200",
"msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Provide more information to filter_prepare"
}
] |
[
{
"msg_contents": "Hi,\n\nOur customer experienced a significant slowdown on queries involving\nIndex Only Scan. As it turned out, the problem was constant pin-unpin of\nthe visibility map page. IOS caches only one vm page, which corresponds\nto 8192 * 8 / 2 * 8192 bytes = 256 MB of data; if the table is larger\nand the order of access (index) doesn't match the order of data, vm page\nwill be replaced on each tuple processing. That's costly. Attached\nios.sql script emulates this worst case behaviour. In current master,\nselect takes\n\n[local]:5432 ars@postgres:21052=# explain analyse select * from test order by id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using test_idx on test (cost=0.44..1159381.24 rows=59013120 width=8) (actual time=0.015..9094.532 rows=59013120 loops=1)\n Heap Fetches: 0\n Planning Time: 0.043 ms\n Execution Time: 10508.576 ms\n\n\nAttached straightforward patch increases the cache to store 64 pages (16\nGB of data). With it, we get\n\n[local]:5432 ars@postgres:17427=# explain analyse select * from test order by id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using test_idx on test (cost=0.44..1159381.24 rows=59013120 width=8) (actual time=0.040..3469.299 rows=59013120 loops=1)\n Heap Fetches: 0\n Planning Time: 0.118 ms\n Execution Time: 4871.124 ms\n\n(I believe the whole index is cached in these tests)\n\nYou might say 16GB is also somewhat arbitrary border. Well, it is. We\ncould make it GUC-able, but I'm not sure here as the setting is rather\nlow-level, and at the same time having several dozens of additionally\npinned buffers doesn't sound too criminal, i.e. I doubt there is a real\nrisk of \"no unpinned buffers available\" or something (e.g. 
even default\n32MB shared_buffers contain 4096 pages). However, forcing IOS to be\ninefficient if the table is larger is also silly. Any thoughts?\n\n(the code is by K. Knizhnik, testing by M. Zhilin and R. Zharkov, I've\nonly polished the things up)\n\n\n\n\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 09 Mar 2021 13:56:18 +0300",
"msg_from": "Arseny Sher <a.sher@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Enlarge IOS vm cache"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhen reading the code, I found that in function CommandIsReadOnly[1], \"select for update/share\" is defined as \"not read only\".\n[1]-----------------\n if (pstmt->rowMarks != NIL)\n return false; /* SELECT FOR [KEY] UPDATE/SHARE */\n-----------------\n\nAnd from the comment [2], I think it means we need to CCI for \"select for update/share\".\nI am not very familiar with this; is there some reason that we have to do CCI for \"select for update/share\"?\nOr did I misunderstand?\n\n[2]-----------------\n* the query must be *in truth* read-only, because the caller wishes\n* not to do CommandCounterIncrement for it.\n-----------------\n\nBest regards,\nhouzj\n\n\n\n\n",
"msg_date": "Tue, 9 Mar 2021 11:54:04 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Questions about CommandIsReadOnly"
}
] |
[
{
"msg_contents": "Hi,\n\nIn passing I noticed that lwlock.c contains 3 comments about bogus\nwakeups due to sharing proc->sem with the heavyweight lock manager and\nProcWaitForSignal. Commit 6753333f55e (9.5) switched those things\nfrom proc->sem to proc->procLatch. ProcArrayGroupClearXid() and\nTransactionGroupUpdateXidStatus() also use proc->sem though, and I\nhaven't studied how those might overlap with LWLockWait(), so I'm\nnot sure what change to suggest.\n\n\n",
"msg_date": "Wed, 10 Mar 2021 01:11:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Outdated comments about proc->sem in lwlock.c"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 1:11 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> In passing I noticed that lwlock.c contains 3 comments about bogus\n> wakeups due to sharing proc->sem with the heavyweight lock manager and\n> ProcWaitForSignal. Commit 6753333f55e (9.5) switched those things\n> from proc->sem to proc->procLatch. ProcArrayGroupClearXid() and\n> TransactionGroupUpdateXidStatus() also use proc->sem though, and I\n> haven't studied how those might overlap with with LWLockWait(), so I'm\n> not sure what change to suggest.\n\nHere's a patch to remove the misleading comments.",
"msg_date": "Thu, 3 Jun 2021 14:07:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Outdated comments about proc->sem in lwlock.c"
},
{
"msg_contents": "> On 3 Jun 2021, at 04:07, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> Here's a patch to remove the misleading comments.\n\nWhile not an expert in the area; reading the referenced commit and the code\nwith the now removed comments, I think this is correct.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 7 Jul 2021 22:48:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Outdated comments about proc->sem in lwlock.c"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 8:48 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 3 Jun 2021, at 04:07, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's a patch to remove the misleading comments.\n>\n> While not an expert in the area; reading the referenced commit and the code\n> with the now removed comments, I think this is correct.\n\nThanks! I made the comments slightly more uniform and pushed.\n\n\n",
"msg_date": "Fri, 9 Jul 2021 18:15:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Outdated comments about proc->sem in lwlock.c"
}
] |
[
{
"msg_contents": "Hi,\n\nThe heap AMs' pages only grow their pd_linp array, and never shrink\nwhen trailing entries are marked unused. This means that up to 14% of\nfree space (=291 unused line pointers) on a page could be unusable for\ndata storage, which I think is a shame. With a patch in the works that\nallows the line pointer array to grow up to one third of the size of\nthe page [0], it would be quite catastrophic for the available data\nspace on old-and-often-used pages if this could not ever be reused for\ndata.\n\nThe shrinking of the line pointer array is already common practice in\nindexes (in which all LP_UNUSED items are removed), but this specific\nimplementation cannot be used for heap pages due to ItemId\ninvalidation. One available implementation, however, is that we\ntruncate the end of this array, as mentioned in [1]. There was a\nwarning at the top of PageRepairFragmentation about not removing\nunused line pointers, but I believe that was about not removing\n_intermediate_ unused line pointers (which would imply moving in-use\nline pointers); as far as I know there is nothing that relies on only\ngrowing page->pd_lower, and nothing keeping us from shrinking it\nwhilst holding a pin on the page.\n\nPlease find attached a fairly trivial patch which detects the last\nunused entry on a page, and truncates the pd_linp array to that entry,\neffectively freeing 4 bytes per line pointer truncated away (up to\n1164 bytes for pages with MaxHeapTuplesPerPage unused lp_unused\nlines).\n\nOne unexpected benefit from this patch is that the PD_HAS_FREE_LINES\nhint bit optimization can now be false more often, increasing the\nchances of not having to check the whole array to find an empty spot.\n\nNote: This does _not_ move valid ItemIds, it only removes invalid\n(unused) ItemIds from the end of the space reserved for ItemIds on a\npage, keeping valid line pointers intact.\n\n\nEnjoy,\n\nMatthias van de Meent\n\n[0] 
https://www.postgresql.org/message-id/flat/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com\n[1] https://www.postgresql.org/message-id/CAEze2Wjf42g8Ho%3DYsC_OvyNE_ziM0ZkXg6wd9u5KVc2nTbbYXw%40mail.gmail.com",
"msg_date": "Tue, 9 Mar 2021 16:13:20 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "\n\n> On Mar 9, 2021, at 7:13 AM, Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n> \n> Hi,\n> \n> The heap AMs' pages only grow their pd_linp array, and never shrink\n> when trailing entries are marked unused. This means that up to 14% of\n> free space (=291 unused line pointers) on a page could be unusable for\n> data storage, which I think is a shame. With a patch in the works that\n> allows the line pointer array to grow up to one third of the size of\n> the page [0], it would be quite catastrophic for the available data\n> space on old-and-often-used pages if this could not ever be reused for\n> data.\n> \n> The shrinking of the line pointer array is already common practice in\n> indexes (in which all LP_UNUSED items are removed), but this specific\n> implementation cannot be used for heap pages due to ItemId\n> invalidation. One available implementation, however, is that we\n> truncate the end of this array, as mentioned in [1]. There was a\n> warning at the top of PageRepairFragmentation about not removing\n> unused line pointers, but I believe that was about not removing\n> _intermediate_ unused line pointers (which would imply moving in-use\n> line pointers); as far as I know there is nothing that relies on only\n> growing page->pd_lower, and nothing keeping us from shrinking it\n> whilst holding a pin on the page.\n> \n> Please find attached a fairly trivial patch for which detects the last\n> unused entry on a page, and truncates the pd_linp array to that entry,\n> effectively freeing 4 bytes per line pointer truncated away (up to\n> 1164 bytes for pages with MaxHeapTuplesPerPage unused lp_unused\n> lines).\n> \n> One unexpected benefit from this patch is that the PD_HAS_FREE_LINES\n> hint bit optimization can now be false more often, increasing the\n> chances of not having to check the whole array to find an empty spot.\n> \n> Note: This does _not_ move valid ItemIds, it only removes invalid\n> (unused) ItemIds from the end 
of the space reserved for ItemIds on a\n> page, keeping valid linepointers intact.\n> \n> \n> Enjoy,\n> \n> Matthias van de Meent\n> \n> [0] https://www.postgresql.org/message-id/flat/CAD21AoD0SkE11fMw4jD4RENAwBMcw1wasVnwpJVw3tVqPOQgAw@mail.gmail.com\n> [1] https://www.postgresql.org/message-id/CAEze2Wjf42g8Ho%3DYsC_OvyNE_ziM0ZkXg6wd9u5KVc2nTbbYXw%40mail.gmail.com\n> <v1-0001-Truncate-a-pages-line-pointer-array-when-it-has-t.patch>\n\nFor a prior discussion on this topic:\n\nhttps://www.postgresql.org/message-id/2e78013d0709130606l56539755wb9dbe17225ffe90a%40mail.gmail.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 9 Mar 2021 08:21:44 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, 9 Mar 2021 at 17:21, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> For a prior discussion on this topic:\n>\n> https://www.postgresql.org/message-id/2e78013d0709130606l56539755wb9dbe17225ffe90a%40mail.gmail.com\n\nThanks for the reference! I note that that thread mentions the\nold-style VACUUM FULL as a reason why it would be unsafe, which\nI believe was removed quite a few versions ago (v9.0).\n\nThe only two existing mechanisms that I could find (in the access/heap\ndirectory) that possibly could fail on shrunken line pointer arrays\nare xlog recovery (I do not have enough knowledge on recovery to\ndetermine if that may touch pages that have shrunken line pointer\narrays, or if those situations won't exist due to never using dirtied\npages in recovery) and backwards table scans on non-MVCC snapshots\n(which would be fixed in the attached patch).\n\nWith regards,\n\nMatthias van de Meent",
"msg_date": "Tue, 9 Mar 2021 20:28:19 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 7:13 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> The shrinking of the line pointer array is already common practice in\n> indexes (in which all LP_UNUSED items are removed), but this specific\n> implementation cannot be used for heap pages due to ItemId\n> invalidation. One available implementation, however, is that we\n> truncate the end of this array, as mentioned in [1]. There was a\n> warning at the top of PageRepairFragmentation about not removing\n> unused line pointers, but I believe that was about not removing\n> _intermediate_ unused line pointers (which would imply moving in-use\n> line pointers); as far as I know there is nothing that relies on only\n> growing page->pd_lower, and nothing keeping us from shrinking it\n> whilst holding a pin on the page.\n\nSounds like a good idea to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 9 Mar 2021 12:22:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> The only two existing mechanisms that I could find (in the access/heap\n> directory) that possibly could fail on shrunken line pointer arrays;\n> being xlog recovery (I do not have enough knowledge on recovery to\n> determine if that may touch pages that have shrunken line pointer\n> arrays, or if those situations won't exist due to never using dirtied\n> pages in recovery) and backwards table scans on non-MVCC snapshots\n> (which would be fixed in the attached patch).\n\nI think you're not visualizing the problem properly.\n\nThe case I was concerned about back when is that there are various bits of\ncode that may visit a page with a predetermined TID in mind to look at.\nAn index lookup is an obvious example, and another one is chasing an\nupdate chain's t_ctid link. You might argue that if the tuple was dead\nenough to be removed, there should be no such in-flight references to\nworry about, but I think such an assumption is unsafe. There is not, for\nexample, any interlock that ensures that a process that has obtained a TID\nfrom an index will have completed its heap visit before a VACUUM that\nsubsequently removed that index entry also removes the heap tuple.\n\nSo, to accept a patch that shortens the line pointer array, what we need\nto do is verify that every such code path checks for an out-of-range\noffset before trying to fetch the target line pointer. I believed\nback in 2007 that there were, or once had been, code paths that omitted\nsuch a range check, assuming that they could trust the TID they had\ngotten from $wherever to point at an extant line pointer array entry.\nMaybe things are all good now, but I think you should run around and\nexamine every place that checks for tuple deadness to see if the offset\nit used is known to be within the current page bounds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 16:35:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "\n\n> On Mar 9, 2021, at 1:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> So, to accept a patch that shortens the line pointer array, what we need\n> to do is verify that every such code path checks for an out-of-range\n> offset before trying to fetch the target line pointer. I believed\n> back in 2007 that there were, or once had been, code paths that omitted\n> such a range check, assuming that they could trust the TID they had\n> gotten from $wherever to point at an extant line pointer array entry.\n> Maybe things are all good now, but I think you should run around and\n> examine every place that checks for tuple deadness to see if the offset\n> it used is known to be within the current page bounds.\n\nMuch as Pavan asked [1], I'm curious how we wouldn't already be in trouble if such code exists? In such a scenario, what stops a dead line pointer from being reused (rather than garbage collected by this patch) prior to such hypothetical code using an outdated TID?\n\nI'm not expressing a view here, just asking questions.\n\n[1] https://www.postgresql.org/message-id/2e78013d0709130832t31244e79k9488a3e4eb00d64c%40mail.gmail.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 9 Mar 2021 13:47:18 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 1:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > The only two existing mechanisms that I could find (in the access/heap\n> > directory) that possibly could fail on shrunken line pointer arrays;\n> > being xlog recovery (I do not have enough knowledge on recovery to\n> > determine if that may touch pages that have shrunken line pointer\n> > arrays, or if those situations won't exist due to never using dirtied\n> > pages in recovery) and backwards table scans on non-MVCC snapshots\n> > (which would be fixed in the attached patch).\n>\n> I think you're not visualizing the problem properly.\n>\n> The case I was concerned about back when is that there are various bits of\n> code that may visit a page with a predetermined TID in mind to look at.\n> An index lookup is an obvious example, and another one is chasing an\n> update chain's t_ctid link. You might argue that if the tuple was dead\n> enough to be removed, there should be no such in-flight references to\n> worry about, but I think such an assumption is unsafe. There is not, for\n> example, any interlock that ensures that a process that has obtained a TID\n> from an index will have completed its heap visit before a VACUUM that\n> subsequently removed that index entry also removes the heap tuple.\n>\n> So, to accept a patch that shortens the line pointer array, what we need\n> to do is verify that every such code path checks for an out-of-range\n> offset before trying to fetch the target line pointer. 
I believed\n> back in 2007 that there were, or once had been, code paths that omitted\n> such a range check, assuming that they could trust the TID they had\n> gotten from $wherever to point at an extant line pointer array entry.\n> Maybe things are all good now, but I think you should run around and\n> examine every place that checks for tuple deadness to see if the offset\n> it used is known to be within the current page bounds.\n\nIt occurs to me that we should also mark the hole in the middle of the\npage (which includes the would-be LP_UNUSED line pointers at the end\nof the original line pointer array space) as undefined to Valgrind\nwithin PageRepairFragmentation(). This is not to be confused with\nmarking them inaccessible to Valgrind, which just poisons the bytes.\nRather, it represents that the bytes in question are considered safe\nto copy around but not safe to rely on being any particular value. So\nValgrind will complain if the bytes in question influence control\nflow, directly or indirectly.\n\nObviously the code should also be audited. Even then, there may still\nbe bugs. I think that we need to bite the bullet here -- line pointer\nbloat is a significant problem.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 9 Mar 2021 13:54:17 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Mar 9, 2021, at 1:35 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So, to accept a patch that shortens the line pointer array, what we need\n>> to do is verify that every such code path checks for an out-of-range\n>> offset before trying to fetch the target line pointer.\n\n> Much as Pavan asked [1], I'm curious how we wouldn't already be in trouble if such code exists? In such a scenario, what stops a dead line pointer from being reused (rather than garbage collected by this patch) prior to such hypothetical code using an outdated TID?\n\nThe line pointer very well *could* be re-used before the in-flight\nreference gets to it. That's okay though, because whatever tuple now\noccupies the TID would have to have xmin too new to match the snapshot\nthat such a reference is scanning with.\n\n(Back when we had non-MVCC snapshots to contend with, a bunch of\nadditional arm-waving was needed to argue that such situations were\nsafe. Possibly the proposed change wouldn't have flown back then.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 17:10:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> It occurs to me that we should also mark the hole in the middle of the\n> page (which includes the would-be LP_UNUSED line pointers at the end\n> of the original line pointer array space) as undefined to Valgrind\n> within PageRepairFragmentation().\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 17:11:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Mar 9, 2021 at 1:54 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It occurs to me that we should also mark the hole in the middle of the\n> page (which includes the would-be LP_UNUSED line pointers at the end\n> of the original line pointer array space) as undefined to Valgrind\n> within PageRepairFragmentation().\n\nIt would probably also make sense to memset() the space in question to\na sequence of 0x7F bytes in CLOBBER_FREED_MEMORY builds. That isn't\nquite as good as what Valgrind will do in some ways, but it has a\nmajor advantage: it will usually visibly break code where the\nPageRepairFragmentation() calls made by VACUUM happen to take place\ninside another backend.\n\nValgrind instrumentation of PageRepairFragmentation() along the lines\nI've described won't recognize the \"hole in the middle of the page\"\narea as undefined when it was marked undefined in another backend.\nIt's as if shared memory is private to each process as far as the\nmemory poisoning/undefined-to-Valgrind stuff is concerned. In other\nwords, it deals with memory mappings, not memory.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 9 Mar 2021 14:42:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, 9 Mar 2021 at 22:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > The only two existing mechanisms that I could find (in the access/heap\n> > directory) that possibly could fail on shrunken line pointer arrays;\n> > being xlog recovery (I do not have enough knowledge on recovery to\n> > determine if that may touch pages that have shrunken line pointer\n> > arrays, or if those situations won't exist due to never using dirtied\n> > pages in recovery) and backwards table scans on non-MVCC snapshots\n> > (which would be fixed in the attached patch).\n>\n> I think you're not visualizing the problem properly.\n>\n> The case I was concerned about back when is that there are various bits of\n> code that may visit a page with a predetermined TID in mind to look at.\n> An index lookup is an obvious example, and another one is chasing an\n> update chain's t_ctid link. You might argue that if the tuple was dead\n> enough to be removed, there should be no such in-flight references to\n> worry about, but I think such an assumption is unsafe. There is not, for\n> example, any interlock that ensures that a process that has obtained a TID\n> from an index will have completed its heap visit before a VACUUM that\n> subsequently removed that index entry also removes the heap tuple.\n\nI am aware of this problem. I will admit that I did not detected all\npotentially problematic accesses, so I'll show you my work.\n\n> So, to accept a patch that shortens the line pointer array, what we need\n> to do is verify that every such code path checks for an out-of-range\n> offset before trying to fetch the target line pointer. 
I believed\n> back in 2007 that there were, or once had been, code paths that omitted\n> such a range check, assuming that they could trust the TID they had\n> gotten from $wherever to point at an extant line pointer array entry.\n> Maybe things are all good now, but I think you should run around and\n> examine every place that checks for tuple deadness to see if the offset\n> it used is known to be within the current page bounds.\n\nIn my search for problematic accesses, I make the following assumptions:\n* PageRepairFragmentation as found in bufpage is only applicable to\nheap pages; this function is not applied to other pages in core\npostgres. So, any problems that occur are with due to access with an\nOffsetNumber > PageGetMaxOffsetNumber.\n* Items [on heap pages] are only extracted after using PageGetItemId\nfor that item on the page, whilst holding a lock.\n\nUnder those assumptions, I ran \"grep PageGetItemId\" over the src\ndirectory. For all 227 results (as of 68b34b23) I checked if the page\naccessed (or item accessed thereafter) was a heap page or heap tuple.\nAfter analysis of the relevant references, I had the following results\n(attached full report, containing a line with the file & line number,\nand code line of the call, followed by a line containing the usage\ntype):\n\nCount of usage type - usage type\n4 - Not a call (comment)\n7 - Callsite guarantees bounds\n8 - Has assertion ItemIdIsNormal (asserts item is not removed; i.e.\nconcurrent vacuum should not have been able to remove this item)\n39 - Has bound checks\n6 - Not a heap page (brin)\n1 - Not a heap page (generic index)\n24 - Not a heap page (gin)\n21 - Not a heap page (gist)\n14 - Not a heap page (hash)\n60 - Not a heap page (nbtree)\n1 - Not a heap page (sequence)\n36 - Not a heap page (spgist)\n2 - OffsetNumber is generated by PageAddItem\n2 - problem case 1 (table scan)\n1 - Problem case 2 (xlog)\n1 - Problem case 3 (invalid redirect pointers)\n\nThe 3 problem cases were classified 
based on the origin of the\npotentially invalid pointer.\n\nProblem case 1: table scan; heapam.c lines 678 and 811, in heapgettup\n\nThe table scan maintains a state which contains a page-bound\nOffsetNumber, which it uses as a cursor whilst working through the\npages of the relation. In forward scans, the bounds of the page are\nre-validated at the start of the 'while (linesleft > 0)' loop at 681,\nbut for backwards scans this check is invalid, because it incorrectly\nassumes that the last OffsetNumber is guaranteed to still exist.\n\nFor MVCC snapshots, this is true (the previously returned value must\nhave been visible in its snapshot, therefore cannot have been vacuumed\nbecause the snapshot is still alive), but non-mvcc snapshots may have\nreturned a dead tuple, which is now vacuumed and truncated away.\n\nThe first occurrance of this issue is easily fixed with the changes as\nsubmitted in patch v2.\n\nThe second problem case (on line 811) is for forward scans, where the\nline pointer array could have been truncated to 0 length. As the code\nuses a hardcoded offset of FirstOffsetNumber (=1), that might point\ninto arbitrary data. The reading of this arbitrary data is saved by\nthe 'while (linesleft > 0) check', because at that point linesleft\nwill be PageGetMaxOffsetNumber, which would then equal 0.\n\nProblem case 2: xlog; heapam.c line 8796, in heap_xlog_freeze_page\n\nThis is in the replay of transaction logs, in heap_xlog_freeze_page.\nAs I am unaware whether or not pages to which these transaction logs\nare applied can contain changes from the xlog-generating instance, I\nflagged this as a potential problem. 
The application of the xlogs is\nprobably safe (it assumes the existence of a HeapTupleHeader for that\nItemId), but my limited knowledge put this on the 'potential problem'\nlist.\n\nPlease advise on this; I cannot find the right documentation\n\nProblem case 3: invalid redirect pointers; pruneheap.c line 949, in\nheap_get_root_tuples\n\nThe heap pruning mechanism currently assumes that all redirects are\nvalid. Currently, if a HOT root points to !ItemIdIsNormal, it breaks\nout of the loop, but doesn't actually fail. This code is already\npotentially problematic because it has no bounds or sanity checking\nfor the nextoffnum, but with shrinking pd_linp it would also add the\nfailure mode of HOT tuples pointing into what is now arbitrary data.\n\nThis failure mode is now accompanied by an Assert, which fails when\nthe redirect is to an invalid OffsetNumber. This is not enough to not\nexhibit arbitrary behaviour when accessing corrupted data with\nnon-assert builds, but I think that that is fine; we already do not\nhave a guaranteed behaviour for this failure mode.\n\n\nI have also searched the contrib modules using the same method; and\nall 18 usages seem to be validated correctly.\n\n\nWith regards,\n\nMatthias van de Meent.",
"msg_date": "Wed, 10 Mar 2021 15:01:05 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 6:01 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> > The case I was concerned about back when is that there are various bits of\n> > code that may visit a page with a predetermined TID in mind to look at.\n> > An index lookup is an obvious example, and another one is chasing an\n> > update chain's t_ctid link. You might argue that if the tuple was dead\n> > enough to be removed, there should be no such in-flight references to\n> > worry about, but I think such an assumption is unsafe. There is not, for\n> > example, any interlock that ensures that a process that has obtained a TID\n> > from an index will have completed its heap visit before a VACUUM that\n> > subsequently removed that index entry also removes the heap tuple.\n>\n> I am aware of this problem. I will admit that I did not detected all\n> potentially problematic accesses, so I'll show you my work.\n\nAttached is a trivial rebase of your v3, which I've called v4. I am\ninterested in getting this patch into Postgres 14.\n\n> In my search for problematic accesses, I make the following assumptions:\n> * PageRepairFragmentation as found in bufpage is only applicable to\n> heap pages; this function is not applied to other pages in core\n> postgres. So, any problems that occur are with due to access with an\n> OffsetNumber > PageGetMaxOffsetNumber.\n> * Items [on heap pages] are only extracted after using PageGetItemId\n> for that item on the page, whilst holding a lock.\n\n> The 3 problem cases were classified based on the origin of the\n> potentially invalid pointer.\n>\n> Problem case 1: table scan; heapam.c lines 678 and 811, in heapgettup\n\nI think that it boils down to this: 100% of the cases where this could\nbe a problem all either involve old TIDs, or old line pointer -- in\nprinciple these could be invalidated in some way, like your backwards\nscan example. But that's it. 
Bear in mind that we always call\nPageRepairFragmentation() with a super-exclusive lock.\n\n> This is in the replay of transaction logs, in heap_xlog_freeze_page.\n> As I am unaware whether or not pages to which these transaction logs\n> are applied can contain changes from the xlog-generating instance, I\n> flagged this as a potential problem. The application of the xlogs is\n> probably safe (it assumes the existence of a HeapTupleHeader for that\n> ItemId), but my limited knowledge put this on the 'potential problem'\n> list.\n>\n> Please advise on this; I cannot find the right documentation\n\nAre you aware of wal_consistency_checking?\n\nI think that this should be fine. There are differences that are\npossible between a replica and primary, but they're very minor and\ndon't seem relevant.\n\n> Problem case 3: invalid redirect pointers; pruneheap.c line 949, in\n> heap_get_root_tuples\n>\n> The heap pruning mechanism currently assumes that all redirects are\n> valid. Currently, if a HOT root points to !ItemIdIsNormal, it breaks\n> out of the loop, but doesn't actually fail. This code is already\n> potentially problematic because it has no bounds or sanity checking\n> for the nextoffnum, but with shrinking pd_linp it would also add the\n> failure mode of HOT tuples pointing into what is now arbitrary data.\n\nheap_prune_chain() is less trusting than heap_get_root_tuples(),\nthough -- it doesn't trust redirects (because there is a generic\noffnum sanity check at the start of its loop). I think that the\ninconsistency between these two functions probably isn't hugely\nsignificant.\n\nIdeally it would be 100% clear which of the defenses in code like this\nis merely extra hardening. The assumptions should be formalized. There\nis nothing wrong with hardening, but we should know it when we see it.\n\n> This failure mode is now accompanied by an Assert, which fails when\n> the redirect is to an invalid OffsetNumber. 
This is not enough to not\n> exhibit arbitrary behaviour when accessing corrupted data with\n> non-assert builds, but I think that that is fine; we already do not\n> have a guaranteed behaviour for this failure mode.\n\nWhat about my \"set would-be LP_UNUSED space to all-0x7F bytes in\nCLOBBER_FREED_MEMORY builds\" idea? Did you think about that?\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 30 Mar 2021 20:35:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 31 Mar 2021 at 05:35, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Mar 10, 2021 at 6:01 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > > The case I was concerned about back when is that there are various bits of\n> > > code that may visit a page with a predetermined TID in mind to look at.\n> > > An index lookup is an obvious example, and another one is chasing an\n> > > update chain's t_ctid link. You might argue that if the tuple was dead\n> > > enough to be removed, there should be no such in-flight references to\n> > > worry about, but I think such an assumption is unsafe. There is not, for\n> > > example, any interlock that ensures that a process that has obtained a TID\n> > > from an index will have completed its heap visit before a VACUUM that\n> > > subsequently removed that index entry also removes the heap tuple.\n> >\n> > I am aware of this problem. I will admit that I did not detected all\n> > potentially problematic accesses, so I'll show you my work.\n>\n> Attached is a trivial rebase of your v3, which I've called v4. I am\n> interested in getting this patch into Postgres 14.\n\nThanks, and that'd be great! PFA v5, which fixes 1 issue later\nmentioned, and adds some comments on existing checks that are now in a\ncritical path.\n\n> > In my search for problematic accesses, I make the following assumptions:\n> > * PageRepairFragmentation as found in bufpage is only applicable to\n> > heap pages; this function is not applied to other pages in core\n> > postgres. 
So, any problems that occur are with due to access with an\n> > OffsetNumber > PageGetMaxOffsetNumber.\n> > * Items [on heap pages] are only extracted after using PageGetItemId\n> > for that item on the page, whilst holding a lock.\n>\n> > The 3 problem cases were classified based on the origin of the\n> > potentially invalid pointer.\n> >\n> > Problem case 1: table scan; heapam.c lines 678 and 811, in heapgettup\n>\n> I think that it boils down to this: 100% of the cases where this could\n> be a problem all either involve old TIDs, or old line pointer -- in\n> principle these could be invalidated in some way, like your backwards\n> scan example. But that's it. Bear in mind that we always call\n> PageRepairFragmentation() with a super-exclusive lock.\n\nYeah, that's the gist of what I found out. All accesses using old line\npointers need revalidation, and there were some cases in which this\nwas not yet done correctly.\n\n> > This is in the replay of transaction logs, in heap_xlog_freeze_page.\n> > As I am unaware whether or not pages to which these transaction logs\n> > are applied can contain changes from the xlog-generating instance, I\n> > flagged this as a potential problem. The application of the xlogs is\n> > probably safe (it assumes the existence of a HeapTupleHeader for that\n> > ItemId), but my limited knowledge put this on the 'potential problem'\n> > list.\n> >\n> > Please advise on this; I cannot find the right documentation\n>\n> Are you aware of wal_consistency_checking?\n\nI was vaguely aware that an option with that name exists, but that was\nabout the extent. Thanks for pointing me in that direction.\n\n> I think that this should be fine. 
There are differences that are\n> possible between a replica and primary, but they're very minor and\n> don't seem relevant.\n\nOK, then I'll assume that WAL replay shouldn't cause problems here.\n\n> > Problem case 3: invalid redirect pointers; pruneheap.c line 949, in\n> > heap_get_root_tuples\n> >\n> > The heap pruning mechanism currently assumes that all redirects are\n> > valid. Currently, if a HOT root points to !ItemIdIsNormal, it breaks\n> > out of the loop, but doesn't actually fail. This code is already\n> > potentially problematic because it has no bounds or sanity checking\n> > for the nextoffnum, but with shrinking pd_linp it would also add the\n> > failure mode of HOT tuples pointing into what is now arbitrary data.\n>\n> heap_prune_chain() is less trusting than heap_get_root_tuples(),\n> though -- it doesn't trust redirects (because there is a generic\n> offnum sanity check at the start of its loop). I think that the\n> inconsistency between these two functions probably isn't hugely\n> significant.\n>\n> Ideally it would be 100% clear which of the defenses in code like this\n> is merely extra hardening. The assumptions should be formalized. There\n> is nothing wrong with hardening, but we should know it when we see it.\n\nI realized one of my Assert()s was incorrectly asserting an actually\nvalid page state, so I've updated and documented that case.\n\n> > This failure mode is now accompanied by an Assert, which fails when\n> > the redirect is to an invalid OffsetNumber. This is not enough to not\n> > exhibit arbitrary behaviour when accessing corrupted data with\n> > non-assert builds, but I think that that is fine; we already do not\n> > have a guaranteed behaviour for this failure mode.\n>\n> What about my \"set would-be LP_UNUSED space to all-0x7F bytes in\n> CLOBBER_FREED_MEMORY builds\" idea? 
Did you think about that?\n\nI had implemented it locally, but was waiting for some more feedback\nbefore posting that and got busy with other stuff since, it's now\nattached.\n\nI've also played around with marking the free space on the page as\nundefined for valgrind, but later realized that that would make the\ntest for defined memory in PageAddItemExtended fail. This is\ndocumented in the commit message of the attached patch 0002.\n\n\nWith regards,\n\nMatthias van de Meent",
"msg_date": "Wed, 31 Mar 2021 11:49:08 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, Mar 31, 2021 at 2:49 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I had implemented it locally, but was waiting for some more feedback\n> before posting that and got busy with other stuff since, it's now\n> attached.\n>\n> I've also played around with marking the free space on the page as\n> undefined for valgrind, but later realized that that would make the\n> test for defined memory in PageAddItemExtended fail. This is\n> documented in the commit message of the attached patch 0002.\n\nI would like to deal with this work within the scope of the project\nwe're discussing over on the \"New IndexAM API controlling index vacuum\nstrategies\" thread. The latest revision of that patch series includes\na modified version of your patch:\n\nhttps://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopJMhFQj_g1rAj4GWr3ZSzw@mail.gmail.com\n\nPlease take discussion around this project over to that other thread.\nThere are a variety of issues that can only really be discussed in\nthat context.\n\nNote that I've revised the patch so that it runs during VACUUM's\nsecond heap pass only -- not during pruning/defragmentation. This\nmeans that the line pointer array truncation mechanism will only ever\nkick-in during a VACUUM operation.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 3 Apr 2021 19:07:32 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Sat, Apr 3, 2021 at 10:07 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would like to deal with this work within the scope of the project\n> we're discussing over on the \"New IndexAM API controlling index vacuum\n> strategies\" thread. The latest revision of that patch series includes\n> a modified version of your patch:\n>\n>\nhttps://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopJMhFQj_g1rAj4GWr3ZSzw@mail.gmail.com\n>\n> Please take discussion around this project over to that other thread.\n> There are a variety of issues that can only really be discussed in\n> that context.\n\nSince that work has been committed as of 3c3b8a4b2689, I've marked this CF\nentry as committed.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sat, Apr 3, 2021 at 10:07 PM Peter Geoghegan <pg@bowt.ie> wrote:> I would like to deal with this work within the scope of the project> we're discussing over on the \"New IndexAM API controlling index vacuum> strategies\" thread. The latest revision of that patch series includes> a modified version of your patch:>> https://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopJMhFQj_g1rAj4GWr3ZSzw@mail.gmail.com>> Please take discussion around this project over to that other thread.> There are a variety of issues that can only really be discussed in> that context.Since that work has been committed as of 3c3b8a4b2689, I've marked this CF entry as committed.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 3 May 2021 10:26:07 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Mon, 3 May 2021 at 16:26, John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n> On Sat, Apr 3, 2021 at 10:07 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I would like to deal with this work within the scope of the project\n> > we're discussing over on the \"New IndexAM API controlling index vacuum\n> > strategies\" thread. The latest revision of that patch series includes\n> > a modified version of your patch:\n> >\n> > https://postgr.es/m/CAH2-Wzn6a64PJM1Ggzm=uvx2otsopJMhFQj_g1rAj4GWr3ZSzw@mail.gmail.com\n> >\n> > Please take discussion around this project over to that other thread.\n> > There are a variety of issues that can only really be discussed in\n> > that context.\n>\n> Since that work has been committed as of 3c3b8a4b2689, I've marked this CF entry as committed.\n\nI disagree that this work has been fully committed. A derivative was\ncommitted that would solve part of the problem, but it doesn't cover\nall problem cases. I believe that I voiced such concern in the other\nthread as well. As such, I am planning on fixing this patch sometime\nbefore the next commit fest so that we can truncate the LP array\nduring hot pruning as well, instead of only doing so in the 2nd VACUUM\npass. This is especially relevant on pages where hot is highly\neffective, but vacuum can't keep up and many unused (former HOT) line\npointers now exist on the page.\n\nWith regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 3 May 2021 16:39:02 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Mon, 3 May 2021 at 16:39, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I am planning on fixing this patch sometime\n> before the next commit fest so that we can truncate the LP array\n> during hot pruning as well, instead of only doing so in the 2nd VACUUM\n> pass.\n\nPFA the updated version of this patch. Apart from adding line pointer\ntruncation in PageRepairFragmentation (as in the earlier patches), I\nalso altered PageTruncateLinePointerArray to clean up all trailing\nline pointers, even if it was the last item on the page.\n\nThis means that for 32-bit systems, pages that have once had tuples\n(but have been cleared since) can now be used again for\nMaxHeapTupleSize insertions. Without this patch, an emptied page would\nalways have at least one line pointer left, which equates to\nMaxHeapTupleSize actual free space, but PageGetFreeSpace always\nsubtracts sizeof(ItemIdData), leaving the perceived free space as\nreported to the FSM less than MaxHeapTupleSize if the page has any\nline pointers.\n\nFor 64-bit systems, this is not as much of a problem, because\nMaxHeapTupleSize is 4 bytes smaller on those systems, which leaves us\nwith 1 line pointer as margin for the FSM to recognise the page as\nfree enough for one MaxHeapTupleSize item.\n\nWith regards,\n\nMatthias van de Meent",
"msg_date": "Tue, 18 May 2021 21:29:00 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, May 18, 2021 at 12:29 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> PFA the updated version of this patch. Apart from adding line pointer\n> truncation in PageRepairFragmentation (as in the earlier patches), I\n> also altered PageTruncateLinePointerArray to clean up all trailing\n> line pointers, even if it was the last item on the page.\n\nCan you show a practical benefit to this patch, such as an improvement\nin throughout or in efficiency for a given workload?\n\nIt was easy to see that having something was better than having\nnothing at all. But things are of course different now that we have\nPageTruncateLinePointerArray().\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 18 May 2021 12:33:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, 18 May 2021 at 20:33, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, May 18, 2021 at 12:29 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > PFA the updated version of this patch. Apart from adding line pointer\n> > truncation in PageRepairFragmentation (as in the earlier patches), I\n> > also altered PageTruncateLinePointerArray to clean up all trailing\n> > line pointers, even if it was the last item on the page.\n>\n> Can you show a practical benefit to this patch, such as an improvement\n> in throughout or in efficiency for a given workload?\n>\n> It was easy to see that having something was better than having\n> nothing at all. But things are of course different now that we have\n> PageTruncateLinePointerArray().\n\nThere does seem to be utility in Matthias' patch, which currently does\ntwo things:\n1. Allow same thing as PageTruncateLinePointerArray() during HOT cleanup\nThat is going to have a clear benefit for HOT workloads, which by\ntheir nature will use a lot of line pointers.\nMany applications are updated much more frequently than they are vacuumed.\nPeter - what is your concern about doing this more frequently? Why\nwould we *not* do this?\n\n2. Reduce number of line pointers to 0 in some cases.\nMatthias - I don't think you've made a full case for doing this, nor\nlooked at the implications.\nThe comment clearly says \"it seems like a good idea to avoid leaving a\nPageIsEmpty()\" page behind.\nSo I would be inclined to remove that from the patch and consider that\nas a separate issue, or close this.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 3 Aug 2021 07:57:37 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, 3 Aug 2021 at 08:57, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 18 May 2021 at 20:33, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Tue, May 18, 2021 at 12:29 PM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > PFA the updated version of this patch. Apart from adding line pointer\n> > > truncation in PageRepairFragmentation (as in the earlier patches), I\n> > > also altered PageTruncateLinePointerArray to clean up all trailing\n> > > line pointers, even if it was the last item on the page.\n> >\n> > Can you show a practical benefit to this patch, such as an improvement\n> > in throughout or in efficiency for a given workload?\n> >\n> > It was easy to see that having something was better than having\n> > nothing at all. But things are of course different now that we have\n> > PageTruncateLinePointerArray().\n>\n> There does seem to be utility in Matthias' patch, which currently does\n> two things:\n> 1. Allow same thing as PageTruncateLinePointerArray() during HOT cleanup\n> That is going to have a clear benefit for HOT workloads, which by\n> their nature will use a lot of line pointers.\n> Many applications are updated much more frequently than they are vacuumed.\n> Peter - what is your concern about doing this more frequently? Why\n> would we *not* do this?\n\nOne clear reason as to why we _do_ want this, is that the current\nshrinking only happens in the second phase of vacuum. Shrinking the\nLP-array in heap_page_prune decreases the chance that tuples that\ncould fit on the page due to removed HOT chain items don't currently\nfit on the page due to lack of vacuum, whilst adding only little\noverhead. Additionally, heap_page_prune is also executed if more empty\nspace on the page is required for a new tuple that currently doesn't\nfit, and in such cases I think clearing as much space as possible is\nuseful.\n\n> 2. 
Reduce number of line pointers to 0 in some cases.\n> Matthias - I don't think you've made a full case for doing this, nor\n> looked at the implications.\n\nI have looked at the implications (see upthread), and I haven't found\nany implications other than those mentioned below.\n\n> The comment clearly says \"it seems like a good idea to avoid leaving a\n> PageIsEmpty()\" page behind.\n\nDo note that that comment is based on (to the best of my knowledge)\nunmeasured, but somewhat informed, guesswork ('it seems like a good\nidea'), which I also commented on in the thread discussing the patch\nthat resulted in that commit [0].\n\nIf I recall correctly, the decision to keep at least 1 line pointer on\nthe page was because this feature was to be committed late in the\ndevelopment cycle of pg14, and as such there would be little time to\ncheck the impact of fully clearing pages. To go forward with the\nfeature in pg14 at that point, it was safer to not completely empty\npages, so that we'd not be changing the paths we were hitting during\ne.g. vacuum too significantly, reducing the chances on significant\nbugs that would require the patch to be reverted [1].\n\n\nI agreed at that point that that was a safer bet, but right now it's\nearly in the pg15 development cycle, and I've had the time to get more\nexperience around the vacuum and line pointer machinery. 
That being\nthe case, I consider this a re-visit of the topic 'is it OK to\ntruncate the LP-array to 0', where previously the answer was 'we don't\nknow, and it's late in the release cycle', and after looking through\nthe code base now I argue that the answer is Yes.\n\nOne more point for going to 0 is that for 32-bit systems, a single\nline pointer is enough to block a page from being 'empty' enough to\nfit a MaxHeapTupleSize-sized tuple (when requesting pages through the\nFSM).\n\nAdditionally, there are some other optimizations we can only apply to\nempty pages:\n\n- vacuum (with disable_page_skipping = on) will process these empty\npages faster, as it won't need to do any pruning on that page. With\npage skipping enabled this won't matter because empty pages are\nall_visible and therefore vacuum won't access that page.\n- the pgstattuple contrib extension processes empty pages (slightly)\nfaster in pgstattuple_approx\n- various loops won't need to check that the remaining\nitem is unused, saving some cycles in those loops when the page is accessed.\n\nand further future optimizations might include\n\n- Full-page WAL logging of empty pages produced in the checkpointer\ncould potentially be optimized to only log 'it's an empty page'\ninstead of writing out the full 8kb page, which would help in reducing\nWAL volume. Previously this optimization would never be hit on\nheapam-pages because pages could not become empty again, but right now\nthis has real potential for applying an optimization.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2Wh-nXjkp0bLN_vQwgHttC8CRH%3D1ewcrWk%2B7RX5B93YQPQ%40mail.gmail.com\n[1] https://www.postgresql.org/message-id/CAH2-WznCxtWL4B995y2KJWj-%2BjrjahH4n6gD2R74SyQJo6Y63w%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 3 Aug 2021 18:14:55 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, 3 Aug 2021 at 17:15, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n\n> and further future optimizations might include\n>\n> - Full-page WAL logging of empty pages produced in the checkpointer\n> could potentially be optimized to only log 'it's an empty page'\n> instead of writing out the full 8kb page, which would help in reducing\n> WAL volume. Previously this optimization would never be hit on\n> heapam-pages because pages could not become empty again, but right now\n> this has real potential for applying an optimization.\n\nSo what you are saying is your small change will cause multiple\nadditional FPIs in WAL. I don't call that a future optimization, I\ncall that essential additional work.\n\n+1 on committing the first part of the patch, -1 on the second. I\nsuggest you split the patch and investigate the second part further.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 3 Aug 2021 19:37:13 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, 3 Aug 2021 at 20:37, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Tue, 3 Aug 2021 at 17:15, Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>\n> > and further future optimizations might include\n> >\n> > - Full-page WAL logging of empty pages produced in the checkpointer\n> > could potentially be optimized to only log 'it's an empty page'\n> > instead of writing out the full 8kb page, which would help in reducing\n> > WAL volume. Previously this optimization would never be hit on\n> > heapam-pages because pages could not become empty again, but right now\n> > this has real potential for applying an optimization.\n>\n> So what you are saying is your small change will cause multiple\n> additional FPIs in WAL. I don't call that a future optimization, I\n> call that essential additional work.\n\nI think you misunderstood my statement. The patch does not change more\npages than before. The patch only ensures that empty heapam pages are\ntruly empty according to the relevant PageIsEmpty()-macro; which\nhypothetically allows for optimizations in the checkpointer process\nthat currently (as in, since its inception) writes all changed pages\nas full page writes (unless turned off).\n\nThis change makes it easier and more worthwhile to implement a further\noptimization for the checkpointer and/or buffer manager to determine\nthat 1.) this page is now empty, and that 2.) we can therefore write a\nspecialized WAL record specifically tuned for empty pages instead of\nFPI records. No additional pages are changed, because each time the\nline pointer array is shrunk, we've already either marked dead tuples\nas unused (2nd phase vacuum) or removed HOT line pointers / truncated\ndead tuples to lp_dead line pointers (heap_page_prune).\n\nIf, instead, you are suggesting that this checkpointer FPI\noptimization should be part of the patch just because the potential is\nthere, then I disagree. 
Please pardon me if this was not your intended\nmessage, but \"you suggested this might be possible after your patch,\nthus you must implement this\" seems like a great way to burn\ncontributor goodwill.\n\nThe scope of the checkpointer is effectively PostgreSQL's WAL, plus\nthe page formats of all access methods that use the Page-based storage\nmanager (not just table access methods, but also those of indexes).\nI'm not yet comfortable with hacking in those (sub)systems, nor do I\nexpect to have the time/effort capacity soon to go through all the\nlogic of these access methods to validate the hypothesis that such an\noptimization can be both correctly implemented and worthwhile.\n\n\nPatch 2, as I see it, just clears up some leftover stuff from the end\nof the pg14 release cycle with new insights and research I didn't have\nat that point in time. As it is a behaviour change for wal-logged\nactions, it cannot realistically be backported; therefore it is\nincluded as an improvement for pg15.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 3 Aug 2021 21:26:50 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 11:57 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> 1. Allow same thing as PageTruncateLinePointerArray() during HOT cleanup\n> That is going to have a clear benefit for HOT workloads, which by\n> their nature will use a lot of line pointers.\n\nWhy do you say that?\n\n> Many applications are updated much more frequently than they are vacuumed.\n> Peter - what is your concern about doing this more frequently? Why\n> would we *not* do this?\n\nWhat I meant before was that this argument worked back when we limited\nthe technique to VACUUM's second heap pass. Doing line pointer array\ntruncation at that point alone meant that it only ever happened\noutside of VACUUM proper. Prior to that point we literally did nothing\nabout LP_UNUSED items at the end of each line pointer array, so we\nwere going from doing nothing to doing something.\n\nThis time it's quite different: we're truncating the line pointer\narray during pruning. Pruning often occurs opportunistically, during\nregular query processing. In fact I'd say that it's far more common\nthan pruning by VACUUM. So the chances of line pointer array\ntruncation hurting rather than helping seems higher. Plus now we might\nbreak things like DDL, that would naturally not have been affected\nbefore because we were only doing line pointer truncation during\nVACUUM proper.\n\nOf course it might well be true that there is a significant benefit to\nthis patch. I don't think that we should assume that that's the case,\nthough. We have yet to see a test case showing any benefit. Maybe\nthat's an easy thing to produce, and maybe Matthias has assumed that I\nmust already know what to look at. But I don't. It's not obvious to me\nhow to assess this patch now.\n\nI don't claim to have any insight about what we should or should not\ndo. At least not right now. 
When I committed the original (commit\n3c3b8a4b), I did so because I thought that it was very likely to\nimprove certain cases and very unlikely to hurt any other cases.\nNothing more.\n\n> 2. Reduce number of line pointers to 0 in some cases.\n> Matthias - I don't think you've made a full case for doing this, nor\n> looked at the implications.\n> The comment clearly says \"it seems like a good idea to avoid leaving a\n> PageIsEmpty()\" page behind.\n> So I would be inclined to remove that from the patch and consider that\n> as a separate issue, or close this.\n\nThis was part of that earlier commit because of sheer paranoia;\nnothing more. I actually think that it's highly unlikely to protect us\nfrom bugs in practice. Though I am, in a certain sense, likely to be\nwrong about \"PageIsEmpty() defensiveness\", it does not bother me in\nthe slightest. It seems like the right approach in the absence of new\ninformation about a significant downside. If my paranoia really did\nturn out to be justified, then I would expect that there'd be a\nsubtle, nasty bug. That possibility is what I was really thinking of.\nAnd so it almost doesn't matter to me how unlikely we might think such\na bug is now, unless and until somebody can demonstrate a real\npractical downside to my defensive approach.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Aug 2021 17:43:36 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 12:27 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> This change makes it easier and more worthwile to implement a further\n> optimization for the checkpointer and/or buffer manager to determine\n> that 1.) this page is now empty, and that 2.) we can therefore write a\n> specialized WAL record specifically tuned for empty pages instead of\n> FPI records. No additional pages are changed, because each time the\n> line pointer array is shrunk, we've already either marked dead tuples\n> as unused (2nd phase vacuum) or removed HOT line pointers / truncated\n> dead tuples to lp_dead line pointers (heap_page_prune).\n\nWe generate an FPI the first time a page is modified after a\ncheckpoint. The FPI consists of the page *after* it has been modified.\nPresumably this optimization would need the heap page to be 100%\nempty, so we're left with what seems to me to be a very narrow target\nfor optimization; something that is naturally rare.\n\nA fully-empty page seems quite unlikely in the case of xl_heap_vacuum\nrecords, and impossible in the case of xl_heap_prune records. Even\nwith all the patches, working together. Have I missed something?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Aug 2021 18:51:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 4 Aug 2021 at 03:51, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> We generate an FPI the first time a page is modified after a\n> checkpoint. The FPI consists of the page *after* it has been modified.\n\nIn that case, I misremembered when FPIs were written with relation to\ncheckpoints. Thanks for reminding me.\n\nThe point of non-FPIs as a replacement could still be valid, except\nfor the point below making this yet more unlikely.\n\n> Presumably this optimization would need the heap page to be 100%\n> empty, so we're left with what seems to me to be a very narrow target\n> for optimization; something that is naturally rare.\n\nYes.\n\n> A fully-empty page seems quite unlikely in the case of xl_heap_vacuum\n> records, and impossible in the case of xl_heap_prune records. Even\n> with all the patches, working together. Have I missed something?\n\nNo, you're correct. xl_heap_prune cannot ever empty pages, as it\nleaves at least 1 dead, or 1 redirect + 1 normal, line pointer on the\npage.\n\nFurthermore, it is indeed quite unlikely that the 2nd pass of vacuum\nwill be the first page modification after a checkpoint; it is quite a\nbit more likely that the page was first modified by the 1st vacuum\npass. The chance that the first modification since a checkpoint is\nmade by the second pass does increase with indexless tables (however\nunlikely, they exist in practice) and with the autovacuum index\ncleanup delay mechanisms, which allow pages with only dead item\npointers to remain dead for more than just this one vacuum run, but\nthe chances of fully clearing the page are indeed very, very slim.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 4 Aug 2021 12:30:30 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 4 Aug 2021 at 02:43, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Aug 2, 2021 at 11:57 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > 2. Reduce number of line pointers to 0 in some cases.\n> > Matthias - I don't think you've made a full case for doing this, nor\n> > looked at the implications.\n> > The comment clearly says \"it seems like a good idea to avoid leaving a\n> > PageIsEmpty()\" page behind.\n> > So I would be inclined to remove that from the patch and consider that\n> > as a separate issue, or close this.\n>\n> This was part of that earlier commit because of sheer paranoia;\n> nothing more. I actually think that it's highly unlikely to protect us\n> from bugs in practice. Though I am, in a certain sense, likely to be\n> wrong about \"PageIsEmpty() defensiveness\", it does not bother me in\n> the slightest. It seems like the right approach in the absence of new\n> information about a significant downside. If my paranoia really did\n> turn out to be justified, then I would expect that there'd be a\n> subtle, nasty bug. That possibility is what I was really thinking of.\n> And so it almost doesn't matter to me how unlikely we might think such\n> a bug is now, unless and until somebody can demonstrate a real\n> practical downside to my defensive approach.\n\nAs I believe I have mentioned before, there is one significant\ndownside: 32-bit systems cannot reuse pages that contain only a\nsingular unused line pointer for max-sized FSM-requests. A fresh page\nhas 8168 bytes free (8kB - 24B page header), which then becomes 8164\nwhen returned from PageGetFreeSpace (it accounts for space used by the\nline pointer when inserting items onto the page).\n\nOn 64-bit systems, MaxHeapTupleSize is 8160, and for 32-bit\nsystems the MaxHeapTupleSize is 8164. 
When we leave one more unused\nline pointer on the page, this means the page will have a\nPageGetFreeSpace of 8160, 4 bytes less than the MaxHeapTupleSize on\n32-bit systems. As such, there will never be FSM entries of the\nlargest category for pages that have had data on those systems, and as\nsuch, those systems will need to add pages for each request of the\nlargest category, meaning that all tuples larger than 8128 bytes\n(largest request that would request the 254-category) will always be\nput on a new page, regardless of the actual availability of free space\nin the table.\n\nYou might argue that this is a problem in the FSM subsystem, but in\nthis case it actively hinders us from reusing pages in the largest\ncategory of FSM-requests. If you would argue 'PageGetHeapFreeSpace\nshould keep free line pointers in mind when calculating free space',\nthen I would argue 'yes, but isn't it better then to also actually\nfully mark that space as unused'.\n\nAll in all, I'd just rather remove the distinction between once-used\npages and fresh pages completely by truncating the LP-array to 0 than\nto leave this bloating behaviour in the system.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Wed, 4 Aug 2021 12:55:07 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 8:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This time it's quite different: we're truncating the line pointer\n> array during pruning. Pruning often occurs opportunistically, during\n> regular query processing. In fact I'd say that it's far more common\n> than pruning by VACUUM. So the chances of line pointer array\n> truncation hurting rather than helping seems higher.\n\nHow would it hurt?\n\nIt's easy to see the harm caused by not shortening the line pointer\narray when it is possible to do so: we're using up space in the page\nthat could have been made free. It's not so obvious to me what the\ndownside of shortening it might be. I suppose there is a risk that we\nshorten it and get no benefit, or even shorten it and have to lengthen\nit again almost immediately. But neither of those things really\nmatters unless shortening is expensive. If we can piggy-back it on an\nexisting WAL record, I don't really see what the problem is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Aug 2021 10:39:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 4 Aug 2021 at 01:43, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Aug 2, 2021 at 11:57 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > 1. Allow same thing as PageTruncateLinePointerArray() during HOT cleanup\n> > That is going to have a clear benefit for HOT workloads, which by\n> > their nature will use a lot of line pointers.\n>\n> Why do you say that?\n\nTruncating line pointers can make extra space on the page, so it could\nbe the difference between a HOT and a non-HOT update. My understanding\nis that these just-in-time actions have a beneficial effect in other\ncircumstances, so we can do that here also.\n\nIf we truncate line pointers more frequently then the root pointers\nwill tend to be lower in the array, which will make truncation even\nmore effective.\n\n> > Many applications are updated much more frequently than they are vacuumed.\n> > Peter - what is your concern about doing this more frequently? Why\n> > would we *not* do this?\n>\n> What I meant before was that this argument worked back when we limited\n> the technique to VACUUM's second heap pass. Doing line pointer array\n> truncation at that point alone meant that it only ever happened\n> outside of VACUUM proper. Prior to that point we literally did nothing\n> about LP_UNUSED items at the end of each line pointer array, so we\n> were going from doing nothing to doing something.\n>\n> This time it's quite different: we're truncating the line pointer\n> array during pruning. Pruning often occurs opportunistically, during\n> regular query processing. In fact I'd say that it's far more common\n> than pruning by VACUUM. So the chances of line pointer array\n> truncation hurting rather than helping seems higher.\n\nIf the only thing we do to a page is truncate line pointers then it\nmay not be worth it. But dirtying a page to reclaim space is also the\nprecise time when reclaiming line pointers makes sense also. 
So if the\npage is dirtied by cleaning, then that is the time to reclaim line\npointers also.\n\nAgain, why would we reclaim space to avoid bloat but then ignore any\nline pointer bloat?\n\nIt's not clear why truncating unused line pointers would break DDL.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 4 Aug 2021 20:09:30 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 12:09 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> Truncating line pointers can make extra space on the page, so it could\n> be the difference between a HOT and a non-HOT update. My understanding\n> is that these just-in-time actions have a beneficial effect in other\n> circumstances, so we can do that here also.\n\nI would prefer if the arguments in favor were a little more concrete.\nMaybe in general they don't have to be. But that would be my\npreference, and what I would look if I was to commit such a patch.\n\n> If the only thing we do to a page is truncate line pointers then it\n> may not be worth it. But dirtying a page to reclaim space is also the\n> precise time when reclaiming line pointers makes sense also. So if the\n> page is dirtied by cleaning, then that is the time to reclaim line\n> pointers also.\n>\n> Again, why would we reclaim space to avoid bloat but then ignore any\n> line pointer bloat?\n\nI don't think that my mental model is significantly different to yours\nhere. Like everybody else, I can easily imagine how this *might* have\nvalue. All I'm really saying is that a burden of proof exists for this\npatch (or any performance orientated patch). In my opinion that burden\nhas yet to be met by this patch. My guess is that Matthias can show a\nclear, measurable benefit with a little more work. If that happens\nthen all of my marginal concerns about the risk of bugs become\nrelatively unimportant.\n\nIt might even be okay to commit the patch on the basis of \"what could\nthe harm be?\" if there was some effort to demonstrate empirically that\nthe performance downside really was zero.\n\n> It's not clear why truncating unused line pointers would break DDL.\n\nI'm just saying that it's obviously not possible now, with the\nVACUUM-only PageTruncateLinePointerArray()/lazy_vacuum_heap_page()\ncode path added to Postgres 14 -- because VACUUM's relation-level lock\nmakes sure of that. 
That property has some value. Certainly not enough\nvalue to block progress on a feature that is clearly useful, but\nenough to give me pause.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 4 Aug 2021 15:24:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 7:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> How would it hurt?\n>\n> It's easy to see the harm caused by not shortening the line pointer\n> array when it is possible to do so: we're using up space in the page\n> that could have been made free. It's not so obvious to me what the\n> downside of shortening it might be.\n\nI think that that's probably true. That in itself doesn't seem like a\ngood enough reason to commit the patch.\n\nMaybe this really won't be difficult for Matthias. I just want to see\nsome concrete testing, maybe with pgbench, or with an artificial test\ncase. Maybe something synthetic that shows a benefit measurable in\non-disk table size. Or at least the absence of any regressions.\nSomething.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 4 Aug 2021 15:29:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 4 Aug 2021 at 15:39, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 8:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > This time it's quite different: we're truncating the line pointer\n> > array during pruning. Pruning often occurs opportunistically, during\n> > regular query processing. In fact I'd say that it's far more common\n> > than pruning by VACUUM. So the chances of line pointer array\n> > truncation hurting rather than helping seems higher.\n>\n> How would it hurt?\n>\n> It's easy to see the harm caused by not shortening the line pointer\n> array when it is possible to do so: we're using up space in the page\n> that could have been made free. It's not so obvious to me what the\n> downside of shortening it might be. I suppose there is a risk that we\n> shorten it and get no benefit, or even shorten it and have to lengthen\n> it again almost immediately. But neither of those things really\n> matters unless shortening is expensive. If we can piggy-back it on an\n> existing WAL record, I don't really see what the problem is.\n\nHmm, there is no information in WAL to describe the line pointers\nbeing truncated by PageTruncateLinePointerArray(). We just truncate\nevery time we see a XLOG_HEAP2_VACUUM record and presume it does the\nsame thing as the original change.\n\nIf that is safe, then we don't need to put the truncation on a WAL\nrecord at all, we just truncate after every XLOG_HEAP2_PRUNE record.\n\nIf that is not safe... then we have a PG14 bug.\n\nIf we do want to see this in WAL, both xl_heap_vacuum and\nxl_heap_prune lend themselves to just adding one more OffsetNumber\nonto the existing array, to represent the highest offset after\ntruncation. So we don't need to change the WAL format.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 5 Aug 2021 14:28:37 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Thu, Aug 5, 2021 at 6:28 AM Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> Hmm, there is no information in WAL to describe the line pointers\n> being truncated by PageTruncateLinePointerArray(). We just truncate\n> every time we see a XLOG_HEAP2_VACUUM record and presume it does the\n> same thing as the original change.\n>\n> If that is safe, then we don't need to put the truncation on a WAL\n> record at all, we just truncate after every XLOG_HEAP2_PRUNE record.\n\nI agree that that's how we'd do it. This approach is no different to\nassuming that PageRepairFragmentation() reliably produces a final\ndefragmented page image deterministically when called after we prune.\n\nThese days we automatically verify assumptions like this via\nwal_consistency_checking. It would absolutely be able to catch any\nbugs in PageTruncateLinePointerArray(), since the truncate code path\nhas plenty of coverage within the regression tests.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 5 Aug 2021 09:14:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "This thread has stalled, and the request for benchmark/test has gone unanswered\nso I'm marking this patch Returned with Feedback. Please feel free to resubmit\nthis patch if it is picked up again.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 2 Dec 2021 11:17:43 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Thu, 2 Dec 2021 at 11:17, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> This thread has stalled, and the request for benchmark/test has gone unanswered\n> so I'm marking this patch Returned with Feedback. Please feel free to resubmit\n> this patch if it is picked up again.\n\nWell then, here we go. It took me some time to find the time and\nexamples, but here we are. Attached is v7 of the patchset, which is a\nrebase on top of the latest release, with some updated comments.\n\nPeter Geoghegan asked for good arguments for the two changes\nimplemented. Below are my arguments detailed, with adversarial loads\nthat show the problematic behaviour of the line pointer array that is\nfixed with the patch.\n\nKind regards,\n\nMatthias van de Meent\n\n\nTruncating lp_array to 0 line pointers\n===========================\n\nOn 32-bit pre-patch systems the heap grows without limit; post-patch\nthe relation doesn't grow beyond 16kiB (assuming no other sessions in\nthe database):\n\n> -- setup\n> CREATE TABLE tst (filler text);\n> ALTER TABLE tst SET (autovacuum_enabled = off, fillfactor = 10); -- disable autovacuum, and trigger pruning more often\n> ALTER TABLE tst ALTER COLUMN filler SET STORAGE PLAIN;\n> INSERT INTO tst VALUES ('');\n> -- processing load\n> VACUUM tst; UPDATE tst SET filler = repeat('1', 8134); -- ~ max size heaptuple in MAXIMUM_ALIGNOF = 4 systems\n\nOn 64-bit systems this load is not an issue due to slightly larger\ntolerances in the FSM.\n\n# Truncating lp_array during pruning\n===========================\n\nThe following adversarial load grows the heap relation, but with the\npatch the relation keeps its size. 
The point being that HOT updates\ncan temporarily inflate the LP array significantly, and this patch can\nactively combat that issue while we're waiting for the 2nd pass of\nvacuum to arrive.\n\n> -- Initialize experiment\n> TRUNCATE tst;\n> INSERT INTO tst SELECT null;\n> UPDATE tst SET filler = null;\n> VACUUM tst;\n>\n> -- start workload by filling all line pointer slots with minsized heap tuples\n> do language plpgsql $$\n> begin\n> for i in 1..289 loop\n> update tst set filler = null;\n> end loop;\n> end;\n> $$;\n> -- all line pointers on page 0 are now filled with hot updates of 1st line pointer\n>\n> -- hot-update hits the page; pruning is first applied\n> -- All but first and last LP are now empty. new tuple is inserted at\n> -- offset=2\n> UPDATE tst SET filler = null;\n>\n> -- Insert large tuple, filling most of the now free space between the end of\n> -- the max size line pointer array\n> UPDATE tst SET filler = repeat('1', 7918);\n>\n> -- insert slightly smaller tuple, that is ~ the size of the unused space in\n> -- the LP array\n> UPDATE tst SET filler = repeat('1', 1144);\n>\n> -- reset experiment to initialized state\n> UPDATE tst SET filler = null;",
"msg_date": "Tue, 15 Feb 2022 19:47:56 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Feb 15, 2022 at 10:48 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Peter Geoghegan asked for good arguments for the two changes\n> implemented. Below are my arguments detailed, with adversarial loads\n> that show the problematic behaviour of the line pointer array that is\n> fixed with the patch.\n\nWhy is it okay that lazy_scan_prune() still calls\nPageGetMaxOffsetNumber() once for the page, before it ever calls\nheap_page_prune()? Won't lazy_scan_prune() need to reestablish maxoff\nnow, if only so that its scan-page-items loop doesn't get confused\nwhen it goes on to read \"former line pointers\"? This is certainly\npossible with the CLOBBER_FREED_MEMORY stuff in place (which will\nmemset the truncated line pointer space with a 0x7F7F7F7F pattern).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 16 Feb 2022 11:54:14 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 16 Feb 2022 at 20:54, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Feb 15, 2022 at 10:48 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Peter Geoghegan asked for good arguments for the two changes\n> > implemented. Below are my arguments detailed, with adversarial loads\n> > that show the problematic behaviour of the line pointer array that is\n> > fixed with the patch.\n>\n> Why is it okay that lazy_scan_prune() still calls\n> PageGetMaxOffsetNumber() once for the page, before it ever calls\n> heap_page_prune()? Won't lazy_scan_prune() need to reestablish maxoff\n> now, if only so that its scan-page-items loop doesn't get confused\n> when it goes on to read \"former line pointers\"? This is certainly\n> possible with the CLOBBER_FREED_MEMORY stuff in place (which will\n> memset the truncated line pointer space with a 0x7F7F7F7F pattern).\n\nGood catch, it is not. Attached a version that re-establishes maxoff\nafter each prune operation.\n\n-Matthias",
"msg_date": "Wed, 16 Feb 2022 21:14:16 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Wed, 16 Feb 2022 at 21:14, Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 16 Feb 2022 at 20:54, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Tue, Feb 15, 2022 at 10:48 AM Matthias van de Meent\n> > <boekewurm+postgres@gmail.com> wrote:\n> > > Peter Geoghegan asked for good arguments for the two changes\n> > > implemented. Below are my arguments detailed, with adversarial loads\n> > > that show the problematic behaviour of the line pointer array that is\n> > > fixed with the patch.\n> >\n> > Why is it okay that lazy_scan_prune() still calls\n> > PageGetMaxOffsetNumber() once for the page, before it ever calls\n> > heap_page_prune()? Won't lazy_scan_prune() need to reestablish maxoff\n> > now, if only so that its scan-page-items loop doesn't get confused\n> > when it goes on to read \"former line pointers\"? This is certainly\n> > possible with the CLOBBER_FREED_MEMORY stuff in place (which will\n> > memset the truncated line pointer space with a 0x7F7F7F7F pattern).\n>\n> Good catch, it is not. Attached a version that re-establishes maxoff\n> after each prune operation.\n\nI double-checked the changes, and to me it seems like that was the\nonly place in the code where PageGetMaxOffsetNumber was not handled\ncorrectly. This was fixed in the latest patch (v8).\n\nPeter, would you have time to further review this patch and/or commit it?\n\n- Matthias\n\n\n",
"msg_date": "Thu, 10 Mar 2022 14:49:13 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 5:49 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> I double-checked the changes, and to me it seems like that was the\n> only place in the code where PageGetMaxOffsetNumber was not handled\n> correctly. This was fixed in the latest patch (v8).\n>\n> Peter, would you have time to further review this patch and/or commit it?\n\nI'll definitely review it some more before too long.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Mar 2022 11:33:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Tue, Feb 15, 2022 at 10:48 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> # Truncating lp_array during pruning\n> ===========================\n>\n> The following adversarial load grows the heap relation, but with the\n> patch the relation keeps its size. The point being that HOT updates\n> can temporarily inflate the LP array significantly, and this patch can\n> actively combat that issue while we're waiting for the 2nd pass of\n> vacuum to arrive.\n\nI am sympathetic to the idea that giving the system a more accurate\npicture of how much free space is available on each heap page is an\nintrinsic good. This might help us in a few different areas. For\nexample, the FSM cares about relatively small differences in available\nfree space *among* heap pages that are \"in competition\" in\nRelationGetBufferForTuple(). Plus we have a heuristic based on\nPageGetHeapFreeSpace() in heap_page_prune_opt() to consider.\n\nWe should definitely increase MaxHeapTuplesPerPage before too long,\nfor a variety of reasons that I have talked about in the past. Its\ncurrent value is 291 on all mainstream platforms, a value that's\nderived from accidental historic details -- which predate HOT.\nObviously an increase in MaxHeapTuplesPerPage is likely to make the\nproblem that the patch proposes to solve worse. I lean towards\ncommitting the patch now as work in that direction, in fact.\n\nIt helps that this patch now seems relatively low risk.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Apr 2022 19:24:22 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Mon, Apr 4, 2022 at 7:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am sympathetic to the idea that giving the system a more accurate\n> picture of how much free space is available on each heap page is an\n> intrinsic good. This might help us in a few different areas. For\n> example, the FSM cares about relatively small differences in available\n> free space *among* heap pages that are \"in competition\" in\n> RelationGetBufferForTuple(). Plus we have a heuristic based on\n> PageGetHeapFreeSpace() in heap_page_prune_opt() to consider.\n\nPushed a slightly revised version of this just now. Differences:\n\n* Rewrote the comments, and adjusted related comments in vacuumlazy.c.\nMostly just made them shorter.\n\n* I eventually decided that it was fine to just accept the issue with\nmaxoff in lazy_scan_prune (the pruning path used by VACUUM).\n\nThere seemed to be no need to reestablish a maxoff for the page here\nfollowing further reflection. I changed my mind.\n\nSetting reclaimed line pointer array space to a pattern of 0x7F bytes\nwasn't adding much here. Pruning either runs in VACUUM, or\nopportunistically. When it runs in VACUUM things are highly\nconstrained already. Opportunistic pruning for heap_page_prune_opt()\ncallers doesn't even require that the caller start out with a buffer\nlock. Pruning only goes ahead when we successfully acquire a cleanup\nlock -- so callers can't be relying on maxoff not changing.\n\n* Didn't keep the changes to PageTruncateLinePointerArray().\n\nThere is at least no reason to tie this question about VACUUM to how\npruning behaves. I still see some small value in avoiding creating a\nnew path that allows PageIsEmpty() pages to come into existence in a\nnew way, which is no less true with the patch I pushed.\n\nThanks\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 7 Apr 2022 15:43:05 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-04 19:24:22 -0700, Peter Geoghegan wrote:\n> We should definitely increase MaxHeapTuplesPerPage before too long,\n> for a variety of reasons that I have talked about in the past. Its\n> current value is 291 on all mainstream platforms, a value that's\n> derived from accidental historic details -- which predate HOT.\n\nI'm on-board with that - but I think we should rewrite a bunch of places that\nuse MaxHeapTuplesPerPage sized-arrays on the stack first. It's not great using\nseveral KB of stack at the current value already (*), but if it grows\nfurther...\n\nGreetings,\n\nAndres Freund\n\n\n* lazy_scan_prune() has\n\tOffsetNumber deadoffsets[MaxHeapTuplesPerPage];\n\txl_heap_freeze_tuple frozen[MaxHeapTuplesPerPage];\nwhich already works out to 4074 bytes.\n\n\n",
"msg_date": "Thu, 7 Apr 2022 16:01:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 4:01 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm on-board with that - but I think we should rewrite a bunch of places that\n> use MaxHeapTuplesPerPage sized-arrays on the stack first. It's not great using\n> several KB of stack at the current value already (*), but if it grows\n> further...\n\nNo arguments here. There are probably quite a few places that won't\nneed to be fixed, because it just doesn't matter, but\nlazy_scan_prune() will.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 7 Apr 2022 16:03:41 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, 8 Apr 2022 at 01:01, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-04-04 19:24:22 -0700, Peter Geoghegan wrote:\n> > We should definitely increase MaxHeapTuplesPerPage before too long,\n> > for a variety of reasons that I have talked about in the past. Its\n> > current value is 291 on all mainstream platforms, a value that's\n> > derived from accidental historic details -- which predate HOT.\n>\n> I'm on-board with that - but I think we should rewrite a bunch of places that\n> use MaxHeapTuplesPerPage sized-arrays on the stack first. It's not great using\n> several KB of stack at the current value already (*), but if it grows\n> further...\n\nYeah, I think we should definitely support more line pointers on a\nheap page, but abusing MaxHeapTuplesPerPage for that is misleading:\nthe current value is the physical limit for heap tuples, as we have at\nmost 1 heap tuple per line pointer and thus the MaxHeapTuplesPerPage\nwon't change. A macro MaxHeapLinePointersPerPage would probably be\nmore useful, which could be as follows (assuming we don't want to\nallow filling a page with effectively only dead line pointers):\n\n#define MaxHeapLinePointersPerPage \\\n ((int) (((BLCKSZ - SizeOfPageHeaderData) / \\\n (MAXALIGN(SizeofHeapTupleHeader) + 2 * sizeof(ItemIdData))) * 2))\n\nThis accounts for the worst case of one redirect + one min-sized live\nheap tuple, and fills the page with it. Although impossible to put a\npage in such a state, that would be the worst case of live line\npointers on a page.\nFor the default BLCKSZ of 8kB, that results in 510 line pointers\nused-but-not-dead, an increase of ~ 70% over what's currently\navailable.\n\n-Matthias\n\n\n",
"msg_date": "Fri, 8 Apr 2022 13:38:22 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 7:01 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-04-04 19:24:22 -0700, Peter Geoghegan wrote:\n> > We should definitely increase MaxHeapTuplesPerPage before too long,\n> > for a variety of reasons that I have talked about in the past. Its\n> > current value is 291 on all mainstream platforms, a value that's\n> > derived from accidental historic details -- which predate HOT.\n>\n> I'm on-board with that - but I think we should rewrite a bunch of places that\n> use MaxHeapTuplesPerPage sized-arrays on the stack first. It's not great using\n> several KB of stack at the current the current value already (*), but if it grows\n> further...\n\nI agree that the value of 291 is pretty much accidental, but it also\nseems fairly generous to me. The bigger you make it, the more space\nyou can waste. I must have missed (or failed to understand) previous\ndiscussions about why raising it would be a good idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 09:17:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 4:38 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Yeah, I think we should definitely support more line pointers on a\n> heap page, but abusing MaxHeapTuplesPerPage for that is misleading:\n> the current value is the physical limit for heap tuples, as we have at\n> most 1 heap tuple per line pointer and thus the MaxHeapTuplesPerPage\n> won't change. A macro MaxHeapLinePointersPerPage would probably be\n> more useful, which could be as follows (assuming we don't want to\n> allow filling a page with effectively only dead line pointers):\n\nThat's a good point. Sounds like it might be the right approach.\n\nI suppose that it will depend on how much use of MaxHeapTuplesPerPage\nremains once it is split in two like this.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 09:44:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 6:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I agree that the value of 291 is pretty much accidental, but it also\n> seems fairly generous to me. The bigger you make it, the more space\n> you can waste. I must have missed (or failed to understand) previous\n> discussions about why raising it would be a good idea.\n\nWhat do you mean about wasting space? Wasting space on the stack? I\ncan't imagine you meant wasting space on the page, since being able to\naccommodate more items on each heap page seems like it would be\nstrictly better, barring any unintended weird FSM issues.\n\nAs far as I know the only real downside to increasing it is the impact\non tidbitmap.c. Increasing the number of valid distinct TID values\nmight have a negative impact on performance during bitmap scans, which\nwill need to be managed. However, I don't think that increased stack\nspace usage will be a problem, with a little work. It either won't\nmatter at all (e.g. an array of offset numbers on the stack still\nwon't be very big), or it can be fixed locally where it turns out to\nmatter (like in lazy_scan_prune).\n\nWe used to routinely use MaxOffsetNumber for arrays of item offset\nnumbers. I cut down on that in the B-Tree code, reducing it to\nMaxIndexTuplesPerPage (which is typically 407) in a few places. So\nanything close to our current MaxIndexTuplesPerPage ought to be fine\nfor most individual arrays stored on the stack.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 09:56:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 9:44 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Apr 8, 2022 at 4:38 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> > Yeah, I think we should definitely support more line pointers on a\n> > heap page, but abusing MaxHeapTuplesPerPage for that is misleading:\n> > the current value is the physical limit for heap tuples, as we have at\n> > most 1 heap tuple per line pointer and thus the MaxHeapTuplesPerPage\n> > won't change. A macro MaxHeapLinePointersPerPage would probably be\n> > more useful, which could be as follows (assuming we don't want to\n> > allow filling a page with effectively only dead line pointers):\n>\n> That's a good point. Sounds like it might be the right approach.\n>\n> I suppose that it will depend on how much use of MaxHeapTuplesPerPage\n> remains once it is split in two like this.\n\nThinking about this some more, I wonder if it would make sense to\nsplit MaxHeapTuplesPerPage into two new constants (a true\nMaxHeapTuplesPerPage, plus MaxHeapLinePointersPerPage), for the\nreasons discussed, but also as a way of getting a *smaller* effective\nMaxHeapTuplesPerPage than 291 in some contexts only.\n\nThere are some ways in which the current MaxHeapTuplesPerPage isn't\nenough, but also some ways in which it is excessive. It might be\nuseful if PageGetHeapFreeSpace() usually considered a heap page to\nhave no free space if the number of tuples with storage (or some cheap\nproxy thereof) was about 227, which is the largest number of distinct\nheap tuples that can *plausibly* ever be stored on an 8KiB page (it\nignores zero column tables). Most current PageGetHeapFreeSpace()\ncallers (including VACUUM) would continue to call that function in the\nsame way as today, and get this lower limit.\n\nA few of the existing PageGetHeapFreeSpace() callers could store more\nline pointers than that (MaxHeapLinePointersPerPage, which might be\n510 in practice) -- but only those involved in updates. 
The overall\nidea is to recognize that free space is not interchangeable -- updates\nshould have some kind of advantage over plain inserts when it comes to\nthe space on the page of the tuple that they're updating.\n\nWe might even want to make our newly defined, lower\nMaxHeapTuplesPerPage into a tunable storage param.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 11:57:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 12:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> What do you mean about wasting space? Wasting space on the stack? I\n> can't imagine you meant wasting space on the page, since being able to\n> accommodate more items on each heap page seems like it would be\n> strictly better, barring any unintended weird FSM issues.\n\nI meant wasting space in the page. I think that's a real concern.\nImagine you allowed 1000 line pointers per page. Each one consumes 2\nbytes. So now you could have ~25% of each page in the table storing\ndead line pointers. That sounds awful, and just running VACUUM won't\nfix it once it's happened, because the still-live line pointers are\nlikely to be at the end of the line pointer array and thus truncating\nit won't necessarily be possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 15:04:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 12:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I meant wasting space in the page. I think that's a real concern.\n> Imagine you allowed 1000 line pointers per page. Each one consumes 2\n> bytes. So now you could have ~25% of each page in the table storing\n> dead line pointers. That sounds awful, and just running VACUUM won't\n> fix it once it's happened, because the still-live line pointers are\n> likely to be at the end of the line pointer array and thus truncating\n> it won't necessarily be possible.\n\nI see. That's a legitimate concern, though one that I believe can be\naddressed. I have learned to dread any kind of bloat that's\nirreversible, no matter how minor it might seem when seen as an\nisolated event, so I'm certainly sympathetic to these concerns. You\ncan make a similar argument in favor of a higher\nMaxHeapLinePointersPerPage limit, though -- and that's why I believe\nan increase of some kind makes sense. The argument goes like this:\n\nWhat if we miss the opportunity to systematically keep successor\nversions of a given logical row on the same heap page over time, due\nonly to the current low MaxHeapLinePointersPerPage limit of 291? If we\nhad only been able to \"absorb\" just a few extra versions in the short\nterm, we would have had stability (in the sense of being able to\npreserve locality among related logical rows) in the long term. We\ncould have kept everything together, if only we didn't overreact to\nwhat were actually short term, rare perturbations.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 12:30:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 3:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> What if we miss the opportunity to systematically keep successor\n> versions of a given logical row on the same heap page over time, due\n> only to the current low MaxHeapLinePointersPerPage limit of 291? If we\n> had only been able to \"absorb\" just a few extra versions in the short\n> term, we would have had stability (in the sense of being able to\n> preserve locality among related logical rows) in the long term. We\n> could have kept everything together, if only we didn't overreact to\n> what were actually short term, rare perturbations.\n\nHmm. I wonder if we could teach the system to figure out which of\nthose things is happening. In the case that I'm worried about, when\nwe're considering growing the line pointer array, either the line\npointers will be dead or the line pointers will be used but the tuples\nto which they point will be dead. In the case you describe here, there\nshould be very few dead tuples or line pointers in the page. Maybe\nwhen the number of line pointers starts to get big, we refuse to add\nmore without checking the number of dead tuples and dead line pointers\nand verifying that those numbers are still small. Or, uh, something.\n\nOne fly in the ointment is that if we refuse to expand the line\npointer array, we might extend the relation instead, which is another\nkind of bloat and thus not great. But if the line pointer array is\nsomehow filling up with tons of dead tuples, we're going to have to\nextend the relation anyway. I suspect that in some circumstances it's\nbetter to just accept that outcome and hope that it leads to some\npages becoming empty, thus allowing their line pointer arrays to be\nreset.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 8 Apr 2022 16:28:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-08 09:17:40 -0400, Robert Haas wrote:\n> I agree that the value of 291 is pretty much accidental, but it also\n> seems fairly generous to me. The bigger you make it, the more space\n> you can waste. I must have missed (or failed to understand) previous\n> discussions about why raising it would be a good idea.\n\nIt's not hard to hit scenarios where pages are effectively unusable, because\nthey have close to 291 dead items, without autovacuum triggering (or\nautovacuum just taking a while). You basically just need updates / deletes to\nconcentrate in a certain range of the table and have indexing that prevents\nHOT updates. Because the overall percentage of dead tuples is low, no\nautovacuum is triggered, yet a range of the table contains little but dead\nitems. At which point you basically waste 7k bytes (1164 bytes for dead items\nIIRC) until a vacuum finally kicks in - way more than what you'd waste if\nthe number of line items were limited at e.g. 2 x MaxHeapTuplesPerPage.\n\nThis has become a bit more pronounced with vacuum skipping index cleanup when\nthere's \"just a few\" dead items - if all your updates concentrate in a small\nregion, 2% of the whole relation size isn't actually that small.\n\n\nI wonder if we could reduce the real-world space wastage of the line pointer\narray, if we changed the logic about which OffsetNumbers to use during\ninserts / updates and made a few tweaks to pruning.\n\n1) It's kind of OK for heap-only tuples to get a high OffsetNumber - we can\n reclaim them during pruning once they're dead. They don't leave behind a\n dead item that's unreclaimable until the next vacuum with an index cleanup\n pass.\n\n2) Arguably the OffsetNumber of a redirect target can be changed. It might\n break careless uses of WHERE ctid = ... 
though (which likely are already\n broken, just harder to hit).\n\nThese lead me to a few potential improvements:\n\na) heap_page_prune_opt() should take the number of used items into account\n when deciding whether to prune. Right now we trigger hot pruning based on\n the number of items only if PageGetMaxOffsetNumber(page) >=\n MaxHeapTuplesPerPage. But because it requires a vacuum to reclaim an ItemId\n used for a root tuple, we should trigger HOT pruning when it might lower\n which OffsetNumber get used.\n\nb) heap_page_prune_opt() should be triggered in more paths. E.g. when\n inserting / updating, we should prune if it allows us to avoid using a high\n OffsetNumber.\n\nc) What if we left some percentage of ItemIds unused, when looking for the\n OffsetNumber of a new HOT row version? That'd make it more likely for\n non-HOT updates and inserts to fit onto the page, without permanently\n increasing the size of the line pointer array.\n\nd) If we think 2) is acceptable, we could move the targets of redirects to\n make space for new root tuples, without increasing the permanent size of\n the line pointer array.\n\nCrazy?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 8 Apr 2022 14:06:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-08 15:04:37 -0400, Robert Haas wrote:\n> I meant wasting space in the page. I think that's a real concern.\n> Imagine you allowed 1000 line pointers per page. Each one consumes 2\n> bytes.\n\nIt's 4 bytes per line pointer, right?\n\nstruct ItemIdData {\n unsigned int lp_off:15; /* 0: 0 4 */\n unsigned int lp_flags:2; /* 0:15 4 */\n unsigned int lp_len:15; /* 0:17 4 */\n\n /* size: 4, cachelines: 1, members: 3 */\n /* last cacheline: 4 bytes */\n};\n\nOr am I confusing myself somehow?\n\n\nI do wish the length of the tuple weren't in ItemIdData, but part of the\ntuple, so we'd not waste this space for dead items (I think it'd also simplify\nmore code than it'd complicate). But ...\n\n- Andres\n\n\n",
"msg_date": "Fri, 8 Apr 2022 14:18:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 2:18 PM Andres Freund <andres@anarazel.de> wrote:\n> It's 4 bytes per line pointer, right?\n\nYeah, it's 4 bytes in Postgres. Most other DB systems only need 2\nbytes, which is implemented in exactly the way that you're imagining.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 14:19:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 2:06 PM Andres Freund <andres@anarazel.de> wrote:\n> It's not hard to hit scenarios where pages are effectively unusable, because\n> they have close to 291 dead items, without autovacuum triggering (or\n> autovacuum just taking a while).\n\nI think that this is mostly a problem with HOT-updates, and regular\nupdates to a lesser degree. Deletes seem less troublesome.\n\nI find that it's useful to think in terms of the high watermark number\nof versions required for a given logical row over time. It's probably\nquite rare for most individual logical rows to truly require more than\n2 or 3 versions per row at the same time, to serve queries. Even in\nupdate-heavy tables. And without doing anything fancy with the\ndefinition of HeapTupleSatisfiesVacuum(). There are important\nexceptions, certainly, but overall I think that we're still not doing\ngood enough with these easier cases.\n\nThe high watermark number of versions is probably going to be\nsignificantly greater than the typical number of versions for the same\nrow. So maybe we give up on keeping a row on its original heap block\ntoday, all because of a once-off (or very rare) event where we needed\nslightly more extra space for only a fraction of a second.\n\nThe tell-tale sign of these kinds of problems can sometimes be seen\nwith synthetic, rate-limited benchmarks. If it takes a very long time\nfor the problem to grow, but nothing about the workload really ever\nchanges, then that suggests problems that have this quality. The\nprobability of any given logical row being moved to another heap block\nis very low. 
And yet it is inevitable that many (even all) will, given\nenough time, given enough opportunities to get unlucky.\n\n> This has become a bit more pronounced with vacuum skipping index cleanup when\n> there's \"just a few\" dead items - if all your updates concentrate in a small\n> region, 2% of the whole relation size isn't actually that small.\n\nThe 2% threshold was chosen based on the observation that it was below\nthe effective threshold where autovacuum just won't ever launch\nanything on a moderate sized table (unless you set\nautovacuum_vacuum_scale_factor to something absurdly low). The real\nproblem is that, IMV. That's why I think that we need to drive it based\nprimarily on page-level characteristics. While effectively ignoring\npages that are all-visible when deciding if enough bloat is present to\nnecessitate vacuuming.\n\n> 1) It's kind of OK for heap-only tuples to get a high OffsetNumber - we can\n> reclaim them during pruning once they're dead. They don't leave behind a\n> dead item that's unreclaimable until the next vacuum with an index cleanup\n> pass.\n\nI like the general direction here, but this particular idea doesn't\nseem like a winner.\n\n> 2) Arguably the OffsetNumber of a redirect target can be changed. It might\n> break careless uses of WHERE ctid = ... though (which likely are already\n> broken, just harder to hit).\n\nThat makes perfect sense to me, though.\n\n> a) heap_page_prune_opt() should take the number of used items into account\n> when deciding whether to prune. Right now we trigger hot pruning based on\n> the number of items only if PageGetMaxOffsetNumber(page) >=\n> MaxHeapTuplesPerPage. But because it requires a vacuum to reclaim an ItemId\n> used for a root tuple, we should trigger HOT pruning when it might lower\n> which OffsetNumber get used.\n\nUnsure about this.\n\n> b) heap_page_prune_opt() should be triggered in more paths. E.g. 
when\n> inserting / updating, we should prune if it allows us to avoid using a high\n> OffsetNumber.\n\nUnsure about this too.\n\nI prototyped a design that gives individual backends soft ownership of\nheap blocks that were recently allocated, and later prunes the heap\npage when it fills [1]. Useful for aborted transactions, where it\npreserves locality -- leaving aborted tuples behind makes their space\nultimately reused for unrelated inserts, which is bad. But eager\npruning allows the inserter to leave behind more or less pristine heap\npages, which don't need to be pruned later on.\n\n> c) What if we left some percentage of ItemIds unused, when looking for the\n> OffsetNumber of a new HOT row version? That'd make it more likely for\n> non-HOT updates and inserts to fit onto the page, without permanently\n> increasing the size of the line pointer array.\n\nThat sounds promising.\n\n[1] https://postgr.es/m/CAH2-Wzm-VhVeQYTH8hLyYho2wdG8Ecrm0uPQJWjap6BOVfe9Og@mail.gmail.com\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 14:43:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 1:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hmm. I wonder if we could teach the system to figure out which of\n> those things is happening. In the case that I'm worried about, when\n> we're considering growing the line pointer array, either the line\n> pointers will be dead or the line pointers will be used but the tuples\n> to which they point will be dead. In the case you describe here, there\n> should be very few dead tuples or line pointers in the page. Maybe\n> when the number of line pointers starts to get big, we refuse to add\n> more without checking the number of dead tuples and dead line pointers\n> and verifying that those numbers are still small. Or, uh, something.\n\nIt seems like the central idea is that we think in terms of \"versions\nper logical row\", even in low level code that traditionally hasn't\nmade those kinds of distinctions.\n\nIdeally we could structure pruning a little bit more like a B-Tree\npage split, where there is an explicit \"incoming tuple\" that won't fit\n(without splitting the page, or maybe doing some kind of deletion). If\nthe so-called incoming tuple that we'd rather like to fit on the page\nis an insert of an unrelated row, don't allow it (don't prune, give\nup). But if it's an update (especially a hot update), be much more\npermissive about allowing it, and/or going ahead with pruning in order\nto make sure it happens.\n\nI like Andres' idea of altering LP_REDIRECTs just to be able to use up\nlower line pointers first. Or preserving a few extra LP_UNUSED items\non insert. Those seem promising to me.\n\n> One fly in the ointment is that if we refuse to expand the line\n> pointer array, we might extend the relation instead, which is another\n> kind of bloat and thus not great. But if the line pointer array is\n> somehow filling up with tons of dead tuples, we're going to have to\n> extend the relation anyway. 
I suspect that in some circumstances it's\n> better to just accept that outcome and hope that it leads to some\n> pages becoming empty, thus allowing their line pointer arrays to be\n> reset.\n\nI agree. Sometimes the problem is that we don't cut our losses when we\nshould -- sometimes just accepting a limited downside is the right\nthing to do. Like with the FSM; we diligently use every last scrap of\nfree space, without concern for the bigger picture. It's penny-wise,\npound-foolish.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Apr 2022 14:56:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Lowering the ever-growing heap->pd_lower"
}
] |
[
{
"msg_contents": "The security team received a report from Theodor-Arsenij\nLarionov-Trichkin of PostgresPro that it's possible to crash the\nbackend with an assertion or null-pointer dereference by trying to\ncall a window function via the \"fast path function call\" protocol\nmessage. fastpath.c doesn't set up any WindowObject function context,\nof course, but most of the built-in window functions just assume there\nwill be one. We concluded that there's no possibility of anything\nmore interesting than an immediate core dump, so per our usual rules\nthis isn't a CVE-grade bug. However, we poked around to see if there\nwere any related problems, and soon found that fastpath.c will happily\nattempt to call procedures as well as functions. That seems to work,\naccidentally IMO, for simple procedures --- but if the procedure tries\nto COMMIT or ROLLBACK then you get \"ERROR: invalid transaction\ntermination\". (There might be other edge-case problems; I've not\ntried subtransactions or OUT parameters for example.)\n\nSo the question on the table is what to do about this. As far as\nwindow functions go, it seems clear that fastpath.c should just reject\nany attempt to call a window function that way (or an aggregate for\nthat matter; aggregates fail already, but with relatively obscure\nerror messages). Perhaps there's also an argument that window\nfunctions should have run-time tests, not just assertions, that\nthey're called in a valid way.\n\nAs for procedures, I'm of the opinion that we should just reject those\ntoo, but some other security team members were not happy with that\nidea. Conceivably we could attempt to make the case actually work,\nbut is it worth the trouble? 
Given the lack of field complaints\nabout the \"invalid transaction termination\" failure, it seems unlikely\nthat it's occurred to anyone to try to call procedures this way.\nWe'd need special infrastructure to test the case, too, since psql\noffers no access to fastpath calls.\n\nA compromise suggestion was to prohibit calling procedures via\nfastpath as of HEAD, but leave existing releases as they are,\nin case anyone is using a case that happens to work.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 14:15:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Procedures versus the \"fastpath\" API"
},
{
"msg_contents": "On 3/9/21 2:15 PM, Tom Lane wrote:\n> So the question on the table is what to do about this. As far as\n> window functions go, it seems clear that fastpath.c should just reject\n> any attempt to call a window function that way (or an aggregate for\n> that matter; aggregates fail already, but with relatively obscure\n> error messages). Perhaps there's also an argument that window\n> functions should have run-time tests, not just assertions, that\n> they're called in a valid way.\n> \n> As for procedures, I'm of the opinion that we should just reject those\n> too, but some other security team members were not happy with that\n> idea. Conceivably we could attempt to make the case actually work,\n> but is it worth the trouble? Given the lack of field complaints\n> about the \"invalid transaction termination\" failure, it seems unlikely\n> that it's occurred to anyone to try to call procedures this way.\n> We'd need special infrastructure to test the case, too, since psql\n> offers no access to fastpath calls.\n\n+1\n\n> A compromise suggestion was to prohibit calling procedures via\n> fastpath as of HEAD, but leave existing releases as they are,\n> in case anyone is using a case that happens to work.\n> \n> Thoughts?\n\nMy vote would be reject using fastpath for procedures in all relevant branches.\nIf someday someone cares enough to make it work, it is a new feature for a new\nmajor release.\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Tue, 9 Mar 2021 14:33:47 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedures versus the \"fastpath\" API"
},
{
"msg_contents": "On Tue, 2021-03-09 at 14:15 -0500, Tom Lane wrote:\n> The security team received a report from Theodor-Arsenij\n> Larionov-Trichkin of PostgresPro that it's possible to crash the\n> backend with an assertion or null-pointer dereference by trying to\n> call a window function via the \"fast path function call\" protocol\n> message.\n> \n> So the questthemion on the table is what to do about this.\n> \n> As for procedures, I'm of the opinion that we should just reject those\n> too, but some other security team members were not happy with that\n> idea. Conceivably we could attempt to make the case actually work,\n> but is it worth the trouble? Given the lack of field complaints\n> about the \"invalid transaction termination\" failure, it seems unlikely\n> that it's occurred to anyone to try to call procedures this way.\n> We'd need special infrastructure to test the case, too, since psql\n> offers no access to fastpath calls.\n> \n> A compromise suggestion was to prohibit calling procedures via\n> fastpath as of HEAD, but leave existing releases as they are,\n> in case anyone is using a case that happens to work.\n\nThe \"invalid transaction termination\" failure alone doesn't\nworry or surprise me - transaction handling in procedures only works\nunder rather narrow conditions anyway (no SELECT on the call stack,\nno explicit transaction was started outside the procedure).\n\nIf that is the only problem, I'd just document it. The hard work is\nof course that there is no other problem with calling procedures that\nway. If anybody wants to do that work, and transaction handling is\nthe only thing that doesn't work with the fastpath API, we can call\nit supported and document the exception.\n\nIn case of doubt, I would agree with you and forbid it in HEAD\nas a corner case with little real-world use.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 10 Mar 2021 10:03:24 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Procedures versus the \"fastpath\" API"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 10:03:24AM +0100, Laurenz Albe wrote:\n> On Tue, 2021-03-09 at 14:15 -0500, Tom Lane wrote:\n> > As for procedures, I'm of the opinion that we should just reject those\n> > too, but some other security team members were not happy with that\n> > idea. Conceivably we could attempt to make the case actually work,\n> > but is it worth the trouble? Given the lack of field complaints\n> > about the \"invalid transaction termination\" failure, it seems unlikely\n> > that it's occurred to anyone to try to call procedures this way.\n> > We'd need special infrastructure to test the case, too, since psql\n> > offers no access to fastpath calls.\n> > \n> > A compromise suggestion was to prohibit calling procedures via\n> > fastpath as of HEAD, but leave existing releases as they are,\n> > in case anyone is using a case that happens to work.\n\n(That was my suggestion.)\n\n> The \"invalid transaction termination\" failure alone doesn't\n> worry or surprise me - transaction handling in procedures only works\n> under rather narrow conditions anyway (no SELECT on the call stack,\n> no explicit transaction was started outside the procedure).\n> \n> If that is the only problem, I'd just document it. The hard work is\n> of course that there is no other problem with calling procedures that\n> way. If anybody wants to do that work, and transaction handling is\n> the only thing that doesn't work with the fastpath API, we can call\n> it supported and document the exception.\n\nI'd be fine with that, too.\n\n> In case of doubt, I would agree with you and forbid it in HEAD\n> as a corner case with little real-world use.\n\nThe PQfn(some-procedure) feature has no known bugs and no known users, so I\nthink the decision carries little weight. Removing the feature would look\nwise if we later discover some hard-to-fix bug therein. Removal also obeys\nthe typical pattern that a given parse context accepts either procedures or\nfunctions, not both. 
Keeping the feature, at least in back branches, would\nlook wise if it avoids urgent s/PQfn/PQexecParams/ work for some user trying\nto update to 13.3.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 17:52:28 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedures versus the \"fastpath\" API"
},
{
"msg_contents": "On Tue, Mar 09, 2021 at 02:33:47PM -0500, Joe Conway wrote:\n> My vote would be reject using fastpath for procedures in all relevant branches.\n> If someday someone cares enough to make it work, it is a new feature for a new\n> major release.\n\nFWIW, my vote would go for issuing an error if attempting to use a\nprocedure in the fast path for all the branches. The lack of\ncomplaint about the error you are mentioning sounds like a pretty good\nargument to fail properly on existing branches, and work on this as a\nnew feature in the future if there is anybody willing to make a case\nfor it.\n--\nMichael",
"msg_date": "Mon, 15 Mar 2021 14:07:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Procedures versus the \"fastpath\" API"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 09, 2021 at 02:33:47PM -0500, Joe Conway wrote:\n>> My vote would be reject using fastpath for procedures in all relevant branches.\n>> If someday someone cares enough to make it work, it is a new feature for a new\n>> major release.\n\n> FWIW, my vote would go for issuing an error if attempting to use a\n> procedure in the fast path for all the branches. The lack of\n> complaint about the error you are mentioning sounds like a pretty good\n> argument to fail properly on existing branches, and work on this as a\n> new feature in the future if there is anybody willing to make a case\n> for it.\n\nI let this thread grow cold because I was hoping for some more votes,\nbut with the quarterly releases approaching, it's time to close out\nthe issue one way or the other.\n\nBy my count, we have three votes for forbidding procedure calls via\nfastpath in all branches (me, Joe, Michael), and two for doing\nsomething laxer (Noah, Laurenz). The former is surely the safer\nchoice, so I'm going to go do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Apr 2021 12:57:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Procedures versus the \"fastpath\" API"
},
{
"msg_contents": "On Fri, Apr 30, 2021 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> By my count, we have three votes for forbidding procedure calls via\n> fastpath in all branches (me, Joe, Michael), and two for doing\n> something laxer (Noah, Laurenz). The former is surely the safer\n> choice, so I'm going to go do that.\n\nFWIW, I'm also for the stricter approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 30 Apr 2021 14:04:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Procedures versus the \"fastpath\" API"
}
] |
[
{
"msg_contents": "Enable parallel SELECT for \"INSERT INTO ... SELECT ...\".\n\nParallel SELECT can't be utilized for INSERT in the following cases:\n- INSERT statement uses the ON CONFLICT DO UPDATE clause\n- Target table has a parallel-unsafe: trigger, index expression or\n predicate, column default expression or check constraint\n- Target table has a parallel-unsafe domain constraint on any column\n- Target table is a partitioned table with a parallel-unsafe partition key\n expression or support function\n\nThe planner is updated to perform additional parallel-safety checks for\nthe cases listed above, for determining whether it is safe to run INSERT\nin parallel-mode with an underlying parallel SELECT. The planner will\nconsider using parallel SELECT for \"INSERT INTO ... SELECT ...\", provided\nnothing unsafe is found from the additional parallel-safety checks, or\nfrom the existing parallel-safety checks for SELECT.\n\nWhile checking parallel-safety, we need to check it for all the partitions\non the table which can be costly especially when we decide not to use a\nparallel plan. So, in a separate patch, we will introduce a GUC and or a\nreloption to enable/disable parallelism for Insert statements.\n\nPrior to entering parallel-mode for the execution of INSERT with parallel\nSELECT, a TransactionId is acquired and assigned to the current\ntransaction state. 
This is necessary to prevent the INSERT from attempting\nto assign the TransactionId whilst in parallel-mode, which is not allowed.\nThis approach has a disadvantage in that if the underlying SELECT does not\nreturn any rows, then the TransactionId is not used, however that\nshouldn't happen in practice in many cases.\n\nAuthor: Greg Nancarrow, Amit Langote, Amit Kapila\nReviewed-by: Amit Langote, Hou Zhijie, Takayuki Tsunakawa, Antonin Houska, Bharath Rupireddy, Dilip Kumar, Vignesh C, Zhihong Yu, Amit Kapila\nTested-by: Tang, Haiying\nDiscussion: https://postgr.es/m/CAJcOf-cXnB5cnMKqWEp2E2z7Mvcd04iLVmV=qpFJrR3AcrTS3g@mail.gmail.com\nDiscussion: https://postgr.es/m/CAJcOf-fAdj=nDKMsRhQzndm-O13NY4dL6xGcEvdX5Xvbbi0V7g@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/05c8482f7f69a954fd65fce85f896e848fc48197\n\nModified Files\n--------------\ndoc/src/sgml/parallel.sgml | 4 +-\nsrc/backend/access/transam/xact.c | 26 ++\nsrc/backend/executor/execMain.c | 3 +\nsrc/backend/nodes/copyfuncs.c | 1 +\nsrc/backend/nodes/outfuncs.c | 2 +\nsrc/backend/nodes/readfuncs.c | 1 +\nsrc/backend/optimizer/plan/planner.c | 37 +-\nsrc/backend/optimizer/util/clauses.c | 550 +++++++++++++++++++++++++-\nsrc/backend/utils/cache/plancache.c | 33 +-\nsrc/include/access/xact.h | 15 +\nsrc/include/nodes/pathnodes.h | 2 +\nsrc/include/nodes/plannodes.h | 2 +\nsrc/include/optimizer/clauses.h | 3 +-\nsrc/test/regress/expected/insert_parallel.out | 536 +++++++++++++++++++++++++\nsrc/test/regress/parallel_schedule | 1 +\nsrc/test/regress/serial_schedule | 1 +\nsrc/test/regress/sql/insert_parallel.sql | 335 ++++++++++++++++\n17 files changed, 1531 insertions(+), 21 deletions(-)",
"msg_date": "Wed, 10 Mar 2021 02:17:54 +0000",
"msg_from": "Amit Kapila <akapila@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Enable parallel SELECT for \"INSERT INTO ... SELECT ...\"."
},
{
"msg_contents": "Amit Kapila <akapila@postgresql.org> writes:\n> Enable parallel SELECT for \"INSERT INTO ... SELECT ...\".\n\nskink (valgrind) is unhappy:\n\ncreating configuration files ... ok\nrunning bootstrap script ... ok\nperforming post-bootstrap initialization ... ==4085668== VALGRINDERROR-BEGIN\n==4085668== Conditional jump or move depends on uninitialised value(s)\n==4085668== at 0x4AEB77: max_parallel_hazard_walker (clauses.c:700)\n==4085668== by 0x445287: expression_tree_walker (nodeFuncs.c:2188)\n==4085668== by 0x4AEBB8: max_parallel_hazard_walker (clauses.c:860)\n==4085668== by 0x4B045E: is_parallel_safe (clauses.c:637)\n==4085668== by 0x4985D0: grouping_planner (planner.c:2070)\n==4085668== by 0x49AE4F: subquery_planner (planner.c:1024)\n==4085668== by 0x49B4F5: standard_planner (planner.c:404)\n==4085668== by 0x49BAD2: planner (planner.c:273)\n==4085668== by 0x5818BE: pg_plan_query (postgres.c:809)\n==4085668== by 0x581977: pg_plan_queries (postgres.c:900)\n==4085668== by 0x581E70: exec_simple_query (postgres.c:1092)\n==4085668== by 0x583F7A: PostgresMain (postgres.c:4327)\n==4085668== Uninitialised value was created by a stack allocation\n==4085668== at 0x4B0363: is_parallel_safe (clauses.c:599)\n==4085668== \n==4085668== VALGRINDERROR-END\n\nThere are a few other commits that skink hasn't seen before, but given\nthe apparent connection to parallel planning, none of the others look\nlike plausible candidates to explain this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 09 Mar 2021 22:37:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Enable parallel SELECT for \"INSERT INTO ... SELECT ...\"."
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 9:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <akapila@postgresql.org> writes:\n> > Enable parallel SELECT for \"INSERT INTO ... SELECT ...\".\n>\n> skink (valgrind) is unhappy:\n>\n> creating configuration files ... ok\n> running bootstrap script ... ok\n> performing post-bootstrap initialization ... ==4085668== VALGRINDERROR-BEGIN\n> ==4085668== Conditional jump or move depends on uninitialised value(s)\n> ==4085668== at 0x4AEB77: max_parallel_hazard_walker (clauses.c:700)\n> ==4085668== by 0x445287: expression_tree_walker (nodeFuncs.c:2188)\n> ==4085668== by 0x4AEBB8: max_parallel_hazard_walker (clauses.c:860)\n> ==4085668== by 0x4B045E: is_parallel_safe (clauses.c:637)\n> ==4085668== by 0x4985D0: grouping_planner (planner.c:2070)\n> ==4085668== by 0x49AE4F: subquery_planner (planner.c:1024)\n> ==4085668== by 0x49B4F5: standard_planner (planner.c:404)\n> ==4085668== by 0x49BAD2: planner (planner.c:273)\n> ==4085668== by 0x5818BE: pg_plan_query (postgres.c:809)\n> ==4085668== by 0x581977: pg_plan_queries (postgres.c:900)\n> ==4085668== by 0x581E70: exec_simple_query (postgres.c:1092)\n> ==4085668== by 0x583F7A: PostgresMain (postgres.c:4327)\n> ==4085668== Uninitialised value was created by a stack allocation\n> ==4085668== at 0x4B0363: is_parallel_safe (clauses.c:599)\n> ==4085668==\n> ==4085668== VALGRINDERROR-END\n>\n> There are a few other commits that skink hasn't seen before, but given\n> the apparent connection to parallel planning, none of the others look\n> like plausible candidates to explain this.\n>\n\nRight, the patch forgot to initialize a new variable in\nmax_parallel_hazard_context via is_parallel_safe. I think we need to\ninitialize all the new variables as NULL because is_parallel_safe is\nused to check parallel-safety of expressions. 
These new variables are\nonly required for checking parallel-safety of target relation which is\nalready done at the time of initial checks in standard_planner.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Mar 2021 09:46:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Enable parallel SELECT for \"INSERT INTO ... SELECT ...\"."
}
] |
[
{
"msg_contents": "Hi,\n\nWhile providing thoughts on [1], I observed that the error messages\nthat are emitted while adding foreign, temporary and unlogged tables\ncan be improved a bit from the existing [2] to [3]. For instance, the\nexisting message when foreign table is tried to add into the\npublication \"f1\" is not a table\" looks odd. Because it says that the\nforeign table is not a table at all.\n\nAttaching a small patch. Thoughts?\n\n[1] - https://www.postgresql.org/message-id/CALj2ACWAxO3vSToT0o5nXL%3Drz5cNx90zaV-at%3DcvM14Tag4%3DcQ%40mail.gmail.com\n[2] - t1 is a temporary table:\npostgres=# CREATE PUBLICATION testpub FOR TABLE t1;\nERROR: table \"t1\" cannot be replicated\nDETAIL: Temporary and unlogged relations cannot be replicated.\n\nt1 is an unlogged table:\npostgres=# CREATE PUBLICATION testpub FOR TABLE t1;\nERROR: table \"t1\" cannot be replicated\nDETAIL: Temporary and unlogged relations cannot be replicated.\n\nf1 is a foreign table:\npostgres=# CREATE PUBLICATION testpub FOR TABLE f1;\nERROR: \"f1\" is not a table\nDETAIL: Only tables can be added to publications.\n\n[3] - t1 is a temporary table:\npostgres=# CREATE PUBLICATION testpub FOR TABLE t1;\nERROR: temporary table \"t1\" cannot be replicated\nDETAIL: Temporary, unlogged and foreign relations cannot be replicated.\n\nt1 is an unlogged table:\npostgres=# CREATE PUBLICATION testpub FOR TABLE t1;\nERROR: unlogged table \"t1\" cannot be replicated\nDETAIL: Temporary, unlogged and foreign relations cannot be replicated.\n\nf1 is a foreign table:\npostgres=# CREATE PUBLICATION testpub FOR TABLE f1;\nERROR: foreign table \"f1\" cannot be replicated\nDETAIL: Temporary, unlogged and foreign relations cannot be replicated.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 10 Mar 2021 10:44:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Logical Replication - improve error message while adding tables to\n the publication in check_publication_add_relation"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 10:44 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Hi,\n>\n> While providing thoughts on [1], I observed that the error messages\n> that are emitted while adding foreign, temporary and unlogged tables\n> can be improved a bit from the existing [2] to [3].\n>\n\n+1 for improving the error messages here.\n\n\n> Attaching a small patch. Thoughts?\n>\n\nI had a look at the patch and it looks good to me. However, I think after\nyou have added the specific kind of table type in the error message itself,\nnow the error details seem to be giving redundant information, but others\nmight\nhave different thoughts.\n\nThe patch itself looks good otherwise. Also the make check and postgres_fdw\ncheck looking good.\n\nRegards,\nJeevan Ladhe\n\nOn Wed, Mar 10, 2021 at 10:44 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:Hi,\n\nWhile providing thoughts on [1], I observed that the error messages\nthat are emitted while adding foreign, temporary and unlogged tables\ncan be improved a bit from the existing [2] to [3]. +1 for improving the error messages here. \nAttaching a small patch. Thoughts?I had a look at the patch and it looks good to me. However, I think afteryou have added the specific kind of table type in the error message itself,now the error details seem to be giving redundant information, but others mighthave different thoughts.The patch itself looks good otherwise. Also the make check and postgres_fdwcheck looking good.Regards,Jeevan Ladhe",
"msg_date": "Wed, 10 Mar 2021 13:26:34 +0530",
"msg_from": "Jeevan Ladhe <jeevan.ladhe@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 1:27 PM Jeevan Ladhe\n<jeevan.ladhe@enterprisedb.com> wrote:\n>\n> On Wed, Mar 10, 2021 at 10:44 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> While providing thoughts on [1], I observed that the error messages\n>> that are emitted while adding foreign, temporary and unlogged tables\n>> can be improved a bit from the existing [2] to [3].\n>\n> +1 for improving the error messages here.\n\nThanks for taking a look at the patch.\n\n>> Attaching a small patch. Thoughts?\n>\n> I had a look at the patch and it looks good to me. However, I think after\n> you have added the specific kind of table type in the error message itself,\n> now the error details seem to be giving redundant information, but others might\n> have different thoughts.\n\nThe error detail is to give a bit of information of what and all\nrelation types are unsupported with the create publication statement.\nBut with the error message now showing up the type of relation, the\ndetail message looks redundant to me as well. If agreed, I can remove\nthat. Thoughts?\n\n> The patch itself looks good otherwise. Also the make check and postgres_fdw\n> check looking good.\n\nThanks.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 10 Mar 2021 17:03:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Wed, Mar 10, 2021, at 2:14 AM, Bharath Rupireddy wrote:\n> While providing thoughts on [1], I observed that the error messages\n> that are emitted while adding foreign, temporary and unlogged tables\n> can be improved a bit from the existing [2] to [3]. For instance, the\n> existing message when foreign table is tried to add into the\n> publication \"f1\" is not a table\" looks odd. Because it says that the\n> foreign table is not a table at all.\nI wouldn't mix [regular|partitioned|temporary|unlogged] tables with foreign\ntables. Although, they have a pg_class entry in common, foreign tables aren't\n\"real\" tables (external storage); they even have different DDLs to handle it\n(CREATE TABLE x CREATE FOREIGN TABLE).\n\npostgres=# CREATE PUBLICATION testpub FOR TABLE f1;\nERROR: \"f1\" is not a table\nDETAIL: Only tables can be added to publications.\n\nI agree that \"f1 is not a table\" is a confusing message at first because\nforeign table has \"table\" as description. Maybe if we apply the negation in\nboth messages it would be clear (using the same wording as system tables).\n\nERROR: \"f1\" is a foreign table\nDETAIL: Foreign tables cannot be added to publications.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Mar 10, 2021, at 2:14 AM, Bharath Rupireddy wrote:While providing thoughts on [1], I observed that the error messagesthat are emitted while adding foreign, temporary and unlogged tablescan be improved a bit from the existing [2] to [3]. For instance, theexisting message when foreign table is tried to add into thepublication \"f1\" is not a table\" looks odd. Because it says that theforeign table is not a table at all.I wouldn't mix [regular|partitioned|temporary|unlogged] tables with foreigntables. 
Although, they have a pg_class entry in common, foreign tables aren't\"real\" tables (external storage); they even have different DDLs to handle it(CREATE TABLE x CREATE FOREIGN TABLE).postgres=# CREATE PUBLICATION testpub FOR TABLE f1;ERROR: \"f1\" is not a tableDETAIL: Only tables can be added to publications.I agree that \"f1 is not a table\" is a confusing message at first becauseforeign table has \"table\" as description. Maybe if we apply the negation inboth messages it would be clear (using the same wording as system tables).ERROR: \"f1\" is a foreign tableDETAIL: Foreign tables cannot be added to publications.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 10 Mar 2021 09:17:54 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re:_Logical_Replication_-_improve_error_message_while_adding_t?=\n =?UTF-8?Q?ables_to_the_publication_in_check=5Fpublication=5Fadd=5Frelat?=\n =?UTF-8?Q?ion?="
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 5:48 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Mar 10, 2021, at 2:14 AM, Bharath Rupireddy wrote:\n>\n> While providing thoughts on [1], I observed that the error messages\n> that are emitted while adding foreign, temporary and unlogged tables\n> can be improved a bit from the existing [2] to [3]. For instance, the\n> existing message when foreign table is tried to add into the\n> publication \"f1\" is not a table\" looks odd. Because it says that the\n> foreign table is not a table at all.\n>\n> I wouldn't mix [regular|partitioned|temporary|unlogged] tables with foreign\n> tables. Although, they have a pg_class entry in common, foreign tables aren't\n> \"real\" tables (external storage); they even have different DDLs to handle it\n> (CREATE TABLE x CREATE FOREIGN TABLE).\n>\n> postgres=# CREATE PUBLICATION testpub FOR TABLE f1;\n> ERROR: \"f1\" is not a table\n> DETAIL: Only tables can be added to publications.\n>\n> I agree that \"f1 is not a table\" is a confusing message at first because\n> foreign table has \"table\" as description. Maybe if we apply the negation in\n> both messages it would be clear (using the same wording as system tables).\n>\n> ERROR: \"f1\" is a foreign table\n> DETAIL: Foreign tables cannot be added to publications.\n\nThanks. Changed the error message and detail to the way we have it for\nsystem tables presently. Attaching v2 patch for further review.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 11 Mar 2021 20:26:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 8:26 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Mar 10, 2021 at 5:48 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Mar 10, 2021, at 2:14 AM, Bharath Rupireddy wrote:\n> >\n> > While providing thoughts on [1], I observed that the error messages\n> > that are emitted while adding foreign, temporary and unlogged tables\n> > can be improved a bit from the existing [2] to [3]. For instance, the\n> > existing message when foreign table is tried to add into the\n> > publication \"f1\" is not a table\" looks odd. Because it says that the\n> > foreign table is not a table at all.\n> >\n> > I wouldn't mix [regular|partitioned|temporary|unlogged] tables with foreign\n> > tables. Although, they have a pg_class entry in common, foreign tables aren't\n> > \"real\" tables (external storage); they even have different DDLs to handle it\n> > (CREATE TABLE x CREATE FOREIGN TABLE).\n> >\n> > postgres=# CREATE PUBLICATION testpub FOR TABLE f1;\n> > ERROR: \"f1\" is not a table\n> > DETAIL: Only tables can be added to publications.\n> >\n> > I agree that \"f1 is not a table\" is a confusing message at first because\n> > foreign table has \"table\" as description. Maybe if we apply the negation in\n> > both messages it would be clear (using the same wording as system tables).\n> >\n> > ERROR: \"f1\" is a foreign table\n> > DETAIL: Foreign tables cannot be added to publications.\n>\n> Thanks. Changed the error message and detail to the way we have it for\n> system tables presently. Attaching v2 patch for further review.\n\nHere's the v3 patch rebased on the latest master.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Mar 2021 09:25:11 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Fri, Mar 26, 2021 at 9:25 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Here's the v3 patch rebased on the latest master.\n\nHere's the v4 patch reabsed on the latest master, please review it further.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Apr 2021 08:57:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Mon, Apr 5, 2021, at 12:27 AM, Bharath Rupireddy wrote:\n> On Fri, Mar 26, 2021 at 9:25 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com <mailto:bharath.rupireddyforpostgres%40gmail.com>> wrote:\n> > Here's the v3 patch rebased on the latest master.\n> \n> Here's the v4 patch reabsed on the latest master, please review it further.\n/* UNLOGGED and TEMP relations cannot be part of publication. */\nif (!RelationIsPermanent(targetrel))\n- ereport(ERROR,\n- (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"table \\\"%s\\\" cannot be replicated\",\n- RelationGetRelationName(targetrel)),\n- errdetail(\"Temporary and unlogged relations cannot be replicated.\")));\n+ {\n+ if (RelationUsesLocalBuffers(targetrel))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"\\\"%s\\\" is a temporary table\",\n+ RelationGetRelationName(targetrel)),\n+ errdetail(\"Temporary tables cannot be added to publications.\")));\n+ else if (targetrel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"\\\"%s\\\" is an unlogged table\",\n+ RelationGetRelationName(targetrel)),\n+ errdetail(\"Unlogged tables cannot be added to publications.\")));\n+ }\n\nRelationIsPermanent(), RelationUsesLocalBuffers(), and\ntargetrel->rd_rel->relpersistence all refers to relpersistence. Hence, it is\nnot necessary to test !RelationIsPermanent().\n\nI would slightly rewrite the commit message to something like:\n\nImprove publication error messages\n\nAdding a foreign table into a publication prints an error saying \"foo is not a\ntable\". Although, a foreign table is not a regular table, this message could\npossibly confuse users. Provide a suitable error message according to the\nobject class (table vs foreign table). 
While at it, separate unlogged/temp\ntable error message into 2 messages.\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Apr 5, 2021, at 12:27 AM, Bharath Rupireddy wrote:On Fri, Mar 26, 2021 at 9:25 AM Bharath Rupireddy<bharath.rupireddyforpostgres@gmail.com> wrote:> Here's the v3 patch rebased on the latest master.Here's the v4 patch reabsed on the latest master, please review it further./* UNLOGGED and TEMP relations cannot be part of publication. */if (!RelationIsPermanent(targetrel))-\t\tereport(ERROR,-\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),-\t\t\t\t errmsg(\"table \\\"%s\\\" cannot be replicated\",-\t\t\t\t\t\tRelationGetRelationName(targetrel)),-\t\t\t\t errdetail(\"Temporary and unlogged relations cannot be replicated.\")));+\t{+\t\tif (RelationUsesLocalBuffers(targetrel))+\t\t\tereport(ERROR,+\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),+\t\t\t\t\t errmsg(\"\\\"%s\\\" is a temporary table\",+\t\t\t\t\t\t\tRelationGetRelationName(targetrel)),+\t\t\t\t\t errdetail(\"Temporary tables cannot be added to publications.\")));+\t\telse if (targetrel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)+\t\t\tereport(ERROR,+\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),+\t\t\t\t\t errmsg(\"\\\"%s\\\" is an unlogged table\",+\t\t\t\t\t\t\tRelationGetRelationName(targetrel)),+\t\t\t\t\t errdetail(\"Unlogged tables cannot be added to publications.\")));+\t}RelationIsPermanent(), RelationUsesLocalBuffers(), andtargetrel->rd_rel->relpersistence all refers to relpersistence. Hence, it isnot necessary to test !RelationIsPermanent().I would slightly rewrite the commit message to something like:Improve publication error messagesAdding a foreign table into a publication prints an error saying \"foo is not atable\". Although, a foreign table is not a regular table, this message couldpossibly confuse users. Provide a suitable error message according to theobject class (table vs foreign table). 
While at it, separate unlogged/temptable error message into 2 messages.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 05 Apr 2021 10:11:04 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re:_Logical_Replication_-_improve_error_message_while_adding_t?=\n =?UTF-8?Q?ables_to_the_publication_in_check=5Fpublication=5Fadd=5Frelat?=\n =?UTF-8?Q?ion?="
},
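The two-branch temporary/unlogged check in the diff quoted above can be sketched outside the server as a standalone dispatch on pg_class.relpersistence. This is only an illustration: the function name is hypothetical, it returns the errdetail text instead of calling ereport(ERROR, ...), and the single-character codes mirror PostgreSQL's RELPERSISTENCE_* constants.

```c
#include <stddef.h>

/* Values of pg_class.relpersistence (as in PostgreSQL's pg_class.h). */
#define RELPERSISTENCE_PERMANENT 'p'
#define RELPERSISTENCE_UNLOGGED  'u'
#define RELPERSISTENCE_TEMP      't'

/*
 * Hypothetical standalone sketch of the patch's check: return the
 * errdetail text that would accompany the error, or NULL when the
 * relation is permanent and may be published.  The real code raises
 * ereport(ERROR, ...) directly instead of returning a string.
 */
static const char *
publication_persistence_errdetail(char relpersistence)
{
    switch (relpersistence)
    {
        case RELPERSISTENCE_TEMP:
            return "Temporary tables cannot be added to publications.";
        case RELPERSISTENCE_UNLOGGED:
            return "Unlogged tables cannot be added to publications.";
        default:
            return NULL;    /* permanent: no error to report */
    }
}
```

Because all three branches key off the same relpersistence field, the outer !RelationIsPermanent() test in the quoted diff is redundant, which is exactly the review comment above.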
{
"msg_contents": "On Mon, Apr 5, 2021 at 6:41 PM Euler Taveira <euler@eulerto.com> wrote:\n> Here's the v4 patch reabsed on the latest master, please review it further.\n>\n> /* UNLOGGED and TEMP relations cannot be part of publication. */\n> if (!RelationIsPermanent(targetrel))\n> - ereport(ERROR,\n> - (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"table \\\"%s\\\" cannot be replicated\",\n> - RelationGetRelationName(targetrel)),\n> - errdetail(\"Temporary and unlogged relations cannot be replicated.\")));\n> + {\n> + if (RelationUsesLocalBuffers(targetrel))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"\\\"%s\\\" is a temporary table\",\n> + RelationGetRelationName(targetrel)),\n> + errdetail(\"Temporary tables cannot be added to publications.\")));\n> + else if (targetrel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"\\\"%s\\\" is an unlogged table\",\n> + RelationGetRelationName(targetrel)),\n> + errdetail(\"Unlogged tables cannot be added to publications.\")));\n> + }\n>\n> RelationIsPermanent(), RelationUsesLocalBuffers(), and\n> targetrel->rd_rel->relpersistence all refers to relpersistence. Hence, it is\n> not necessary to test !RelationIsPermanent().\n\nDone.\n\n> I would slightly rewrite the commit message to something like:\n>\n> Improve publication error messages\n>\n> Adding a foreign table into a publication prints an error saying \"foo is not a\n> table\". Although, a foreign table is not a regular table, this message could\n> possibly confuse users. Provide a suitable error message according to the\n> object class (table vs foreign table). While at it, separate unlogged/temp\n> table error message into 2 messages.\n\nThanks for the better wording.\n\nAttaching v5 patch, please have a look.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Apr 2021 19:19:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Mon, Apr 5, 2021 at 7:19 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 6:41 PM Euler Taveira <euler@eulerto.com> wrote:\n> > Here's the v4 patch reabsed on the latest master, please review it further.\n> >\n> > /* UNLOGGED and TEMP relations cannot be part of publication. */\n> > if (!RelationIsPermanent(targetrel))\n> > - ereport(ERROR,\n> > - (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > - errmsg(\"table \\\"%s\\\" cannot be replicated\",\n> > - RelationGetRelationName(targetrel)),\n> > - errdetail(\"Temporary and unlogged relations cannot be replicated.\")));\n> > + {\n> > + if (RelationUsesLocalBuffers(targetrel))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"\\\"%s\\\" is a temporary table\",\n> > + RelationGetRelationName(targetrel)),\n> > + errdetail(\"Temporary tables cannot be added to publications.\")));\n> > + else if (targetrel->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"\\\"%s\\\" is an unlogged table\",\n> > + RelationGetRelationName(targetrel)),\n> > + errdetail(\"Unlogged tables cannot be added to publications.\")));\n> > + }\n> >\n> > RelationIsPermanent(), RelationUsesLocalBuffers(), and\n> > targetrel->rd_rel->relpersistence all refers to relpersistence. Hence, it is\n> > not necessary to test !RelationIsPermanent().\n>\n> Done.\n>\n> > I would slightly rewrite the commit message to something like:\n> >\n> > Improve publication error messages\n> >\n> > Adding a foreign table into a publication prints an error saying \"foo is not a\n> > table\". Although, a foreign table is not a regular table, this message could\n> > possibly confuse users. Provide a suitable error message according to the\n> > object class (table vs foreign table). 
While at it, separate unlogged/temp\n> > table error message into 2 messages.\n>\n> Thanks for the better wording.\n>\n> Attaching v5 patch, please have a look.\n>\n\nWe get the following error while adding an index:\ncreate publication mypub for table idx_t1;\nERROR: \"idx_t1\" is an index\n\nThis error occurs internally from table_openrv function call, if we\ncould replace this with relation_openrv and then check the table kind,\nwe could throw a similar error message here too like the other changes\nin the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 26 May 2021 19:38:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
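The "\"idx_t1\" is an index" error that vignesh quotes comes from a dispatch on pg_class.relkind inside the open path. As a standalone illustration of that kind of dispatch (the function name and the exact set of codes covered here are a sketch for this discussion, not the server's actual implementation):

```c
/* Values of pg_class.relkind (subset, as in PostgreSQL's pg_class.h). */
#define RELKIND_RELATION       'r'
#define RELKIND_INDEX          'i'
#define RELKIND_COMPOSITE_TYPE 'c'
#define RELKIND_FOREIGN_TABLE  'f'

/*
 * Illustrative sketch only: map a relkind code to the noun phrase used
 * in a "\"%s\" is a ..." style error message.
 */
static const char *
relkind_description(char relkind)
{
    switch (relkind)
    {
        case RELKIND_RELATION:       return "table";
        case RELKIND_INDEX:          return "index";
        case RELKIND_COMPOSITE_TYPE: return "composite type";
        case RELKIND_FOREIGN_TABLE:  return "foreign table";
        default:                     return "relation of unsupported kind";
    }
}
```

The debate below is whether such a generic relkind message is enough on its own for CREATE PUBLICATION, or whether an errdetail line ("... cannot be added to publications.") should accompany it.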
{
"msg_contents": "On Wed, May 26, 2021 at 7:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Attaching v5 patch, please have a look.\n>\n> We get the following error while adding an index:\n> create publication mypub for table idx_t1;\n> ERROR: \"idx_t1\" is an index\n>\n> This error occurs internally from table_openrv function call, if we\n> could replace this with relation_openrv and then check the table kind,\n> we could throw a similar error message here too like the other changes\n> in the patch.\n\nDo you say that we replace table_open in publication_add_relation with\nrelation_open and have the \"\\\"%s\\\" is an index\" or \"\\\"%s\\\" is a\ncomposite type\" checks in check_publication_add_relation? If that is\nso, I don't think it's a good idea to have the extra code in\ncheck_publication_add_relation and I would like it to be the way it is\ncurrently.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 19:55:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Wed, May 26, 2021 at 7:55 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 7:38 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > Attaching v5 patch, please have a look.\n> >\n> > We get the following error while adding an index:\n> > create publication mypub for table idx_t1;\n> > ERROR: \"idx_t1\" is an index\n> >\n> > This error occurs internally from table_openrv function call, if we\n> > could replace this with relation_openrv and then check the table kind,\n> > we could throw a similar error message here too like the other changes\n> > in the patch.\n>\n> Do you say that we replace table_open in publication_add_relation with\n> relation_open and have the \"\\\"%s\\\" is an index\" or \"\\\"%s\\\" is a\n> composite type\" checks in check_publication_add_relation? If that is\n> so, I don't think it's a good idea to have the extra code in\n> check_publication_add_relation and I would like it to be the way it is\n> currently.\n\nBefore calling check_publication_add_relation, we will call\nOpenTableList to get the list of relations. In openTableList we don't\ninclude the errordetail for the failure like you have fixed it in\ncheck_publication_add_relation. When a user tries to add index objects\nor composite types, the error will be thrown earlier itself. I didn't\nmean to change check_publication_add_relation, I meant to change\ntable_openrv to relation_openrv in OpenTableList and include error\ndetails in case of failure like the change attached. If you are ok,\nplease include the change in your patch.\n\nRegards,\nVignesh",
"msg_date": "Thu, 27 May 2021 21:01:48 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Thu, May 27, 2021 at 9:02 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Do you say that we replace table_open in publication_add_relation with\n> > relation_open and have the \"\\\"%s\\\" is an index\" or \"\\\"%s\\\" is a\n> > composite type\" checks in check_publication_add_relation? If that is\n> > so, I don't think it's a good idea to have the extra code in\n> > check_publication_add_relation and I would like it to be the way it is\n> > currently.\n>\n> Before calling check_publication_add_relation, we will call\n> OpenTableList to get the list of relations. In openTableList we don't\n> include the errordetail for the failure like you have fixed it in\n> check_publication_add_relation. When a user tries to add index objects\n> or composite types, the error will be thrown earlier itself. I didn't\n> mean to change check_publication_add_relation, I meant to change\n> table_openrv to relation_openrv in OpenTableList and include error\n> details in case of failure like the change attached. If you are ok,\n> please include the change in your patch.\n\nI don't think we need to change that. General intuition is that with\nCREATE PUBLICATION ... FOR TABLE/FOR ALL TABLES one can specify only\ntables and if at all an index/composite type is specified, the error\nmessages \"\"XXXX\" is an index\"/\"\"XXXX\" is a composite type\" can imply\nthat they are not supported with CREATE PUBLICATION. There's no need\nfor a detailed error message saying \"Index/Composite Type cannot be\nadded to publications.\". Whereas foreign/unlogged/temporary/system\ntables are actually tables, and we don't support them. So a detailed\nerror message here can state that explicitly.\n\nI'm not taking the patch, attaching v5 again here to make cfbot happy\nand for further review.\n\nBTW, when we use relation_openrv, we have to use relation_close.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 May 2021 22:28:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:28 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I'm not taking the patch, attaching v5 again here to make cfbot happy\n> and for further review.\n\nAttaching v6 patch rebased onto the latest master.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Wed, 7 Jul 2021 17:35:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 5:35 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Attaching v6 patch rebased onto the latest master.\n\nI came across a recent commit 81d5995 and have used the same error\nmessage for temporary and unlogged tables. Also added, test cases to\ncover these error cases for foreign, temporary, unlogged and system\ntables with CREATE PUBLICATION command. PSA v7.\n\ncommit 81d5995b4b78017ef9e5c6f151361d1fb949924c\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nDate: Wed Jul 21 07:40:05 2021 +0200\n\n More improvements of error messages about mismatching relkind\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 26 Jul 2021 13:03:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "> On 26 Jul 2021, at 09:33, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> PSA v7.\n\nThis patch no longer applies on top of HEAD, please submit a rebased version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 3 Nov 2021 13:51:48 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Wed, Nov 3, 2021 at 6:21 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 26 Jul 2021, at 09:33, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > PSA v7.\n>\n> This patch no longer applies on top of HEAD, please submit a rebased version.\n\nHere's a rebased v8 patch. Please review it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 4 Nov 2021 09:54:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "> On 4 Nov 2021, at 05:24, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Wed, Nov 3, 2021 at 6:21 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 26 Jul 2021, at 09:33, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> \n>>> PSA v7.\n>> \n>> This patch no longer applies on top of HEAD, please submit a rebased version.\n> \n> Here's a rebased v8 patch. Please review it.\n\nThis improves the user experience by increasing the granularity of error\nreporting, and maps well with the precedent set in 81d5995b4. I'm marking this\nReady for Committer and will go ahead and apply this unless there are\nobjections.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 12 Nov 2021 13:41:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Fri, Nov 12, 2021, at 9:41 AM, Daniel Gustafsson wrote:\n> > On 4 Nov 2021, at 05:24, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > \n> > On Wed, Nov 3, 2021 at 6:21 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >> \n> >>> On 26 Jul 2021, at 09:33, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> \n> >>> PSA v7.\n> >> \n> >> This patch no longer applies on top of HEAD, please submit a rebased version.\n> > \n> > Here's a rebased v8 patch. Please review it.\n> \n> This improves the user experience by increasing the granularity of error\n> reporting, and maps well with the precedent set in 81d5995b4. I'm marking this\n> Ready for Committer and will go ahead and apply this unless there are\n> objections.\nShouldn't we modify errdetail_relkind_not_supported() to include relpersistence\nas a 2nd parameter and move those messages to it? I experiment this idea with\nthe attached patch. The idea is to provide a unique function that reports\naccurate detail messages.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 12 Nov 2021 15:35:55 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the\n publication in check_publication_add_relation"
},
{
"msg_contents": "On Sat, Nov 13, 2021 at 12:06 AM Euler Taveira <euler@eulerto.com> wrote:\n> > Here's a rebased v8 patch. Please review it.\n>\n> This improves the user experience by increasing the granularity of error\n> reporting, and maps well with the precedent set in 81d5995b4. I'm marking this\n> Ready for Committer and will go ahead and apply this unless there are\n> objections.\n>\n> Shouldn't we modify errdetail_relkind_not_supported() to include relpersistence\n> as a 2nd parameter and move those messages to it? I experiment this idea with\n> the attached patch. The idea is to provide a unique function that reports\n> accurate detail messages.\n\nThanks. It is a good idea to use errdetail_relkind_not_supported. I\nslightly modified the API to \"int errdetail_relkind_not_supported(Oid\nrelid, Form_pg_class rd_rel);\" to simplify things and increase the\nusability of the function further. For instance, it can report the\nspecific error for the catalog tables as well. And, also added \"int\nerrdetail_relkind_not_supported _v2(Oid relid, char relkind, char\nrelpersistence);\" so that the callers not having Form_pg_class (there\nare 3 callers exist) can pass the parameters directly.\n\nPSA v10.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sat, 13 Nov 2021 08:30:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Sat, Nov 13, 2021, at 12:00 AM, Bharath Rupireddy wrote:\n> On Sat, Nov 13, 2021 at 12:06 AM Euler Taveira <euler@eulerto.com> wrote:\n> > > Here's a rebased v8 patch. Please review it.\n> >\n> > This improves the user experience by increasing the granularity of error\n> > reporting, and maps well with the precedent set in 81d5995b4. I'm marking this\n> > Ready for Committer and will go ahead and apply this unless there are\n> > objections.\n> >\n> > Shouldn't we modify errdetail_relkind_not_supported() to include relpersistence\n> > as a 2nd parameter and move those messages to it? I experiment this idea with\n> > the attached patch. The idea is to provide a unique function that reports\n> > accurate detail messages.\n> \n> Thanks. It is a good idea to use errdetail_relkind_not_supported. I\n> slightly modified the API to \"int errdetail_relkind_not_supported(Oid\n> relid, Form_pg_class rd_rel);\" to simplify things and increase the\n> usability of the function further. For instance, it can report the\n> specific error for the catalog tables as well. And, also added \"int\n> errdetail_relkind_not_supported _v2(Oid relid, char relkind, char\n> relpersistence);\" so that the callers not having Form_pg_class (there\n> are 3 callers exist) can pass the parameters directly.\nDo we really need 2 functions? I don't think so. Besides that, relid is\nredundant since this information is available in the Form_pg_class struct.\n\nint errdetail_relkind_not_supported(Oid relid, Form_pg_class rd_rel);\n\nMy suggestion is to keep only the 3 parameter function:\n\nint errdetail_relkind_not_supported(Oid relid, char relkind, char relpersistence);\n\nMultiple functions that is just a wrapper for a central one is a good idea for\nbackward compatibility. 
That's not the case here.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sat, Nov 13, 2021, at 12:00 AM, Bharath Rupireddy wrote:On Sat, Nov 13, 2021 at 12:06 AM Euler Taveira <euler@eulerto.com> wrote:> > Here's a rebased v8 patch. Please review it.>> This improves the user experience by increasing the granularity of error> reporting, and maps well with the precedent set in 81d5995b4. I'm marking this> Ready for Committer and will go ahead and apply this unless there are> objections.>> Shouldn't we modify errdetail_relkind_not_supported() to include relpersistence> as a 2nd parameter and move those messages to it? I experiment this idea with> the attached patch. The idea is to provide a unique function that reports> accurate detail messages.Thanks. It is a good idea to use errdetail_relkind_not_supported. Islightly modified the API to \"int errdetail_relkind_not_supported(Oidrelid, Form_pg_class rd_rel);\" to simplify things and increase theusability of the function further. For instance, it can report thespecific error for the catalog tables as well. And, also added \"interrdetail_relkind_not_supported _v2(Oid relid, char relkind, charrelpersistence);\" so that the callers not having Form_pg_class (thereare 3 callers exist) can pass the parameters directly.Do we really need 2 functions? I don't think so. Besides that, relid isredundant since this information is available in the Form_pg_class struct.int errdetail_relkind_not_supported(Oid relid, Form_pg_class rd_rel);My suggestion is to keep only the 3 parameter function:int errdetail_relkind_not_supported(Oid relid, char relkind, char relpersistence);Multiple functions that is just a wrapper for a central one is a good idea forbackward compatibility. That's not the case here.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Sat, 13 Nov 2021 10:46:07 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the\n publication in check_publication_add_relation"
},
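Euler's suggested single entry point, taking relid, relkind, and relpersistence in one function, would fold the relkind and persistence dispatches together. A standalone sketch of that shape (the typedef, the function name, and the message wording are illustrative; the real function would call errdetail() and return its result rather than a string):

```c
#include <stddef.h>

typedef unsigned int Oid;   /* stand-in for PostgreSQL's Oid type */

/*
 * Illustrative sketch of the proposed three-parameter API: pick the
 * most specific detail message for an unsupported relation, or return
 * NULL when the relation is acceptable.
 */
static const char *
relation_not_supported_detail(Oid relid, char relkind, char relpersistence)
{
    (void) relid;   /* the server version could use this to detect catalog relations */

    if (relkind == 'f')                 /* RELKIND_FOREIGN_TABLE */
        return "Foreign tables cannot be added to publications.";
    if (relpersistence == 't')          /* RELPERSISTENCE_TEMP */
        return "Temporary tables cannot be added to publications.";
    if (relpersistence == 'u')          /* RELPERSISTENCE_UNLOGGED */
        return "Unlogged tables cannot be added to publications.";
    return NULL;                        /* supported */
}
```

Peter's objection below is precisely about this shape: the caller can no longer say which property it was checking, so the function must guess which detail message applies.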
{
"msg_contents": "On Sat, Nov 13, 2021 at 7:16 PM Euler Taveira <euler@eulerto.com> wrote:\n> Thanks. It is a good idea to use errdetail_relkind_not_supported. I\n> slightly modified the API to \"int errdetail_relkind_not_supported(Oid\n> relid, Form_pg_class rd_rel);\" to simplify things and increase the\n> usability of the function further. For instance, it can report the\n> specific error for the catalog tables as well. And, also added \"int\n> errdetail_relkind_not_supported _v2(Oid relid, char relkind, char\n> relpersistence);\" so that the callers not having Form_pg_class (there\n> are 3 callers exist) can pass the parameters directly.\n>\n> Do we really need 2 functions? I don't think so. Besides that, relid is\n> redundant since this information is available in the Form_pg_class struct.\n\nYeah. The relid is available in Form_pg_class.\n\nFirstly, I didn't quite like the function\nerrdetail_relkind_not_supported name to be too long here and adding to\nit the 2 or 3 parameters, in many places we are crossing 80 char\nlimit. Above these, a function with one parameter is always better\nthan function with 3 parameters.\n\nHaving two functions isn't a big deal at all, I think we have many\nfunctions like that in the core (although I'm not gonna spend time\nfinding all those functions, I'm sure there will be such functions).\n\nI would still go with with 2 functions:\n\nint errdetail_relkind_not_supported(Form_pg_class rd_rel);\nint errdetail_relkind_not_supported_v2(Oid relid, char relkind, char\nrelpersistence);\n\n> int errdetail_relkind_not_supported(Oid relid, Form_pg_class rd_rel);\n>\n> My suggestion is to keep only the 3 parameter function:\n>\n> int errdetail_relkind_not_supported(Oid relid, char relkind, char relpersistence);\n>\n> Multiple functions that is just a wrapper for a central one is a good idea for\n> backward compatibility. 
That's not the case here.\n\nSince we are modifying it on the master, I think it is okay to have 2\nfunctions given the code simplification advantages we get with\nerrdetail_relkind_not_supported(Form_pg_class rd_rel).\n\nI would even think further to rename \"errdetail_relkind_not_supported\"\nand have the following, because we don't have to be always descriptive\nin the function names. The errdetail would tell the function is going\nto give some error detail.\n\nint errdetail_relkind(Form_pg_class rd_rel);\nint errdetail_relkind_v2(Oid relid, char relkind, char relpersistence);\n\nor\n\nint errdetail_rel(Form_pg_class rd_rel);\nint errdetail_rel_v2(Oid relid, char relkind, char relpersistence);\n\nI prefer the above among the three function names.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 13 Nov 2021 19:57:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Sat, Nov 13, 2021 at 7:57 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Nov 13, 2021 at 7:16 PM Euler Taveira <euler@eulerto.com> wrote:\n> > Thanks. It is a good idea to use errdetail_relkind_not_supported. I\n> > slightly modified the API to \"int errdetail_relkind_not_supported(Oid\n> > relid, Form_pg_class rd_rel);\" to simplify things and increase the\n> > usability of the function further. For instance, it can report the\n> > specific error for the catalog tables as well. And, also added \"int\n> > errdetail_relkind_not_supported _v2(Oid relid, char relkind, char\n> > relpersistence);\" so that the callers not having Form_pg_class (there\n> > are 3 callers exist) can pass the parameters directly.\n> >\n> > Do we really need 2 functions? I don't think so. Besides that, relid is\n> > redundant since this information is available in the Form_pg_class\nstruct.\n>\n> Yeah. The relid is available in Form_pg_class.\n>\n> Firstly, I didn't quite like the function\n> errdetail_relkind_not_supported name to be too long here and adding to\n> it the 2 or 3 parameters, in many places we are crossing 80 char\n> limit. 
Above these, a function with one parameter is always better\n> than function with 3 parameters.\n>\n> Having two functions isn't a big deal at all, I think we have many\n> functions like that in the core (although I'm not gonna spend time\n> finding all those functions, I'm sure there will be such functions).\n>\n> I would still go with with 2 functions:\n>\n> int errdetail_relkind_not_supported(Form_pg_class rd_rel);\n> int errdetail_relkind_not_supported_v2(Oid relid, char relkind, char\n> relpersistence);\n>\n> > int errdetail_relkind_not_supported(Oid relid, Form_pg_class rd_rel);\n> >\n> > My suggestion is to keep only the 3 parameter function:\n> >\n> > int errdetail_relkind_not_supported(Oid relid, char relkind, char\nrelpersistence);\n> >\n> > Multiple functions that is just a wrapper for a central one is a good\nidea for\n> > backward compatibility. That's not the case here.\n>\n> Since we are modifying it on the master, I think it is okay to have 2\n> functions given the code simplification advantages we get with\n> errdetail_relkind_not_supported(Form_pg_class rd_rel).\n>\n> I would even think further to rename \"errdetail_relkind_not_supported\"\n> and have the following, because we don't have to be always descriptive\n> in the function names. The errdetail would tell the function is going\n> to give some error detail.\n>\n> int errdetail_relkind(Form_pg_class rd_rel);\n> int errdetail_relkind_v2(Oid relid, char relkind, char relpersistence);\n>\n> or\n>\n> int errdetail_rel(Form_pg_class rd_rel);\n> int errdetail_rel_v2(Oid relid, char relkind, char relpersistence);\n>\n> I prefer the above among the three function names.\n>\n> Thoughts?\n\nPSA v11 patch with 2 APIs with much simpler parameters and small function\nnames:\n\nint errdetail_rel(Form_pg_class rd_rel);\nint errdetail_rel_v2(Oid relid, char relkind, char relpersistence);\n\nPlease review it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Sun, 14 Nov 2021 17:48:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On 14.11.21 13:18, Bharath Rupireddy wrote:\n> PSA v11 patch with 2 APIs with much simpler parameters and small \n> function names:\n> \n> int errdetail_rel(Form_pg_class rd_rel);\n> int errdetail_rel_v2(Oid relid, char relkind, char relpersistence);\n> \n> Please review it.\n\nI think this is not an improvement. It loses the ability of the caller \nthe specify exactly why a relation is not acceptable. Before, a caller \ncould say, it's the wrong relkind, or it's the wrong persistence, or \nwhatever. Now, it just spits out some details about the relation, but \nyou can't control which. It could easily be wrong, too: AFAICT, this \nwill complain that a temporary table is not supported, but it could also \nbe that a table in general is not supported.\n\nIn my mind, this leads us back into the mess that we have before \nerrdetail_relkind_not_supported(): Very detailed error messages that \ndidn't actually hit the point.\n\nI think a separate errdetail_relpersistence_not_supported() would be a \nbetter solution (assuming there are enough callers to make it worth a \nseparate function).\n\n\n",
"msg_date": "Mon, 15 Nov 2021 09:15:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "> On 15 Nov 2021, at 09:15, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> I think this is not an improvement. It loses the ability of the caller the\n> specify exactly why a relation is not acceptable.\n\n\nAgreed.\n\n> I think a separate errdetail_relpersistence_not_supported() would be a better\n> solution (assuming there are enough callers to make it worth a separate\n> function).\n\n\nI still think that the v8 patch posted earlier is the better option, which\nincrease granularity of error reporting with a small code footprint.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 15 Nov 2021 09:44:28 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 2:14 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 15 Nov 2021, at 09:15, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>\n> > I think this is not an improvement. It loses the ability of the caller the\n> > specify exactly why a relation is not acceptable.\n>\n>\n> Agreed.\n\n+1.\n\n> > I think a separate errdetail_relpersistence_not_supported() would be a better\n> > solution (assuming there are enough callers to make it worth a separate\n> > function).\n>\n>\n> I still think that the v8 patch posted earlier is the better option, which\n> increase granularity of error reporting with a small code footprint.\n\nThanks. Attaching the v8 here again.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 15 Nov 2021 15:08:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On 15.11.21 10:38, Bharath Rupireddy wrote:\n>> I still think that the v8 patch posted earlier is the better option, which\n>> increase granularity of error reporting with a small code footprint.\n> Thanks. Attaching the v8 here again.\n\nI find the use of RelationUsesLocalBuffers() confusing in this patch. \nIt would be clearer to check relpersistence directly in both branches of \nthe if statement.\n\n\n",
"msg_date": "Mon, 15 Nov 2021 19:42:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "> On 15 Nov 2021, at 19:42, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 15.11.21 10:38, Bharath Rupireddy wrote:\n>>> I still think that the v8 patch posted earlier is the better option, which\n>>> increase granularity of error reporting with a small code footprint.\n>> Thanks. Attaching the v8 here again.\n> \n> I find the use of RelationUsesLocalBuffers() confusing in this patch. It would be clearer to check relpersistence directly in both branches of the if statement.\n\nAdmittedly it didn't bother me, but the more I think about it the more I agree\nwith Peter, so +1 on changing.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 15 Nov 2021 22:36:05 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 3:06 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 15 Nov 2021, at 19:42, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 15.11.21 10:38, Bharath Rupireddy wrote:\n> >>> I still think that the v8 patch posted earlier is the better option, which\n> >>> increase granularity of error reporting with a small code footprint.\n> >> Thanks. Attaching the v8 here again.\n> >\n> > I find the use of RelationUsesLocalBuffers() confusing in this patch. It would be clearer to check relpersistence directly in both branches of the if statement.\n>\n> Admittedly it didn't bother me, but the more I think about it the more I agree\n> with Peter, so +1 on changing.\n\nDone. PSA v9 patch.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 16 Nov 2021 08:00:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
},
{
"msg_contents": "> On 16 Nov 2021, at 03:30, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Done. PSA v9 patch.\n\nPushed with some tweaking of the commit message, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 17 Nov 2021 14:43:06 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Logical Replication - improve error message while adding tables\n to the publication in check_publication_add_relation"
}
] |
[
{
"msg_contents": "Move tablespace path re-creation from the makefiles to pg_regress\n\nMoving this logic into pg_regress fixes a potential failure with\nparallel tests when pg_upgrade and the main regression test suite both\ntrigger the makefile rule that cleaned up testtablespace/ under\nsrc/test/regress. Even if pg_upgrade was triggering this rule, it has\nno need to do so as it uses a different tablespace path. So if\npg_upgrade triggered the makefile rule for the tablespace setup while\nthe main regression test suite ran the tablespace cases, it would fail.\n\n61be85a was a similar attempt at achieving that, but that broke cases\nwhere the regression tests require to run under an Administrator\naccount, like with Appveyor.\n\nReported-by: Andres Freund, Kyotaro Horiguchi\nReviewed-by: Peter Eisentraut\nDiscussion: https://postgr.es/m/20201209012911.uk4d6nxcnkp7ehrx@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/6c788d9f6aadb41d76a72d56149268371a7895ee\n\nModified Files\n--------------\nsrc/bin/pg_upgrade/test.sh | 1 -\nsrc/test/regress/GNUmakefile | 21 +++++++--------------\nsrc/test/regress/pg_regress.c | 14 ++++++--------\nsrc/tools/msvc/vcregress.pl | 1 -\n4 files changed, 13 insertions(+), 24 deletions(-)",
"msg_date": "Wed, 10 Mar 2021 05:55:05 +0000",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "Re: Michael Paquier\n> Move tablespace path re-creation from the makefiles to pg_regress\n> \n> Moving this logic into pg_regress fixes a potential failure with\n> parallel tests when pg_upgrade and the main regression test suite both\n> trigger the makefile rule that cleaned up testtablespace/ under\n> src/test/regress. Even if pg_upgrade was triggering this rule, it has\n> no need to do so as it uses a different tablespace path. So if\n> pg_upgrade triggered the makefile rule for the tablespace setup while\n> the main regression test suite ran the tablespace cases, it would fail.\n> \n> 61be85a was a similar attempt at achieving that, but that broke cases\n> where the regression tests require to run under an Administrator\n> account, like with Appveyor.\n\nThis change broke running the testsuite on an existing PG server, if\nserver user and pg_regress client user are different. This is one of\nthe tests exercised by Debian's autopkgtest suite.\n\nPreviously I could create the tablespace directory, chown it to\npostgres, and fire up pg_regress with the correct options. Now\npg_regress wipes that directory, recreates it, and then the server\ncan't use it because user \"postgres\" can't write to it.\n\nI'm working around the problem now by running the tests as user\n\"postgres\", but does completely break in environments where users want\nto run the testsuite from a separate compilation user but don't have root.\n\nOld code: https://salsa.debian.org/postgresql/postgresql/-/blob/8b1217fcae3e64155bc35517acbd50c6f166d997/debian/tests/installcheck\nWorkaround: https://salsa.debian.org/postgresql/postgresql/-/blob/cbc0240bec738b6ab3b61c498825b82c8ff21a70/debian/tests/installcheck\n\nChristoph\n\n\n",
"msg_date": "Tue, 23 Mar 2021 12:50:29 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Tue, Mar 23, 2021 at 12:50:29PM +0100, Christoph Berg wrote:\n> I'm working around the problem now by running the tests as user\n> \"postgres\", but does completely break in environments where users want\n> to run the testsuite from a separate compilation user but don't have root.\n> \n> Old code: https://salsa.debian.org/postgresql/postgresql/-/blob/8b1217fcae3e64155bc35517acbd50c6f166d997/debian/tests/installcheck\n> Workaround: https://salsa.debian.org/postgresql/postgresql/-/blob/cbc0240bec738b6ab3b61c498825b82c8ff21a70/debian/tests/installcheck\n\nSo you basically mimicked the makefile rule that this commit removed\ninto your own test suite. Reverting the change does not really help,\nbecause we'd be back to square one where there would be problems in\nparallel runs for developers. Saying that, I would not mind adding an\noption to pg_regress to control if this cleanup code is triggered or\nnot, say something like --no-tablespace-cleanup? Then, you could just\npass down the option by yourself before creating your tablespace path\nas you wish.\n--\nMichael",
"msg_date": "Wed, 24 Mar 2021 10:08:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "Re: Michael Paquier\n> So you basically mimicked the makefile rule that this commit removed\n> into your own test suite. Reverting the change does not really help,\n> because we'd be back to square one where there would be problems in\n> parallel runs for developers. Saying that, I would not mind adding an\n> option to pg_regress to control if this cleanup code is triggered or\n> not, say something like --no-tablespace-cleanup? Then, you could just\n> pass down the option by yourself before creating your tablespace path\n> as you wish.\n\nI don't think adding more snowflake code just for this use case makes\nsense, so I can stick to my workaround.\n\nI just wanted to point out that the only thing preventing the core\ntestsuite from being run as a true client app is this tablespace\nthing, which might be a worthwhile fix on its own.\n\nMaybe creating the tablespace directory from within the testsuite\nwould suffice?\n\nCREATE TABLE foo (t text);\nCOPY foo FROM PROGRAM 'mkdir @testtablespace@';\nCREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@';\n\nChristoph\n\n\n",
"msg_date": "Wed, 24 Mar 2021 10:56:29 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 5:56 AM Christoph Berg <myon@debian.org> wrote:\n> Maybe creating the tablespace directory from within the testsuite\n> would suffice?\n>\n> CREATE TABLE foo (t text);\n> COPY foo FROM PROGRAM 'mkdir @testtablespace@';\n> CREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@';\n\nWould that work on Windows?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Mar 2021 10:50:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 10:50:50AM -0400, Robert Haas wrote:\n> On Wed, Mar 24, 2021 at 5:56 AM Christoph Berg <myon@debian.org> wrote:\n>> Maybe creating the tablespace directory from within the testsuite\n>> would suffice?\n>>\n>> CREATE TABLE foo (t text);\n>> COPY foo FROM PROGRAM 'mkdir @testtablespace@';\n>> CREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@';\n> \n> Would that work on Windows?\n\nI doubt that all the Windows environments would be able to get their\nhands on that. And I am not sure either that this would work when it\ncomes to the CI case, again on Windows.\n--\nMichael",
"msg_date": "Thu, 25 Mar 2021 07:44:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Thu, Mar 25, 2021 at 07:44:02AM +0900, Michael Paquier wrote:\n> On Wed, Mar 24, 2021 at 10:50:50AM -0400, Robert Haas wrote:\n> > On Wed, Mar 24, 2021 at 5:56 AM Christoph Berg <myon@debian.org> wrote:\n> >> Maybe creating the tablespace directory from within the testsuite\n> >> would suffice?\n> >>\n> >> CREATE TABLE foo (t text);\n> >> COPY foo FROM PROGRAM 'mkdir @testtablespace@';\n> >> CREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@';\n> > \n> > Would that work on Windows?\n\nThat would entail forbidding various shell metacharacters in @testtablespace@.\nOne could avoid imposing such a restriction by adding a mkdir() function to\nregress.c and writing it like:\n\nCREATE FUNCTION mkdir(text)\n RETURNS void AS '@libdir@/regress@DLSUFFIX@', 'regress_mkdir'\n LANGUAGE C STRICT\\;\n REVOKE ALL ON FUNCTION mkdir FROM public;\nSELECT mkdir('@testtablespace@');\nCREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@';\n\n> I doubt that all the Windows environments would be able to get their\n> hands on that.\n\n> And I am not sure either that this would work when it\n> comes to the CI case, again on Windows.\n\nHow might a CI provider break that?\n\n\n",
"msg_date": "Wed, 24 Mar 2021 19:56:59 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Wed, Mar 24, 2021 at 07:56:59PM -0700, Noah Misch wrote:\n> That would entail forbidding various shell metacharacters in @testtablespace@.\n> One could avoid imposing such a restriction by adding a mkdir() function to\n> regress.c and writing it like:\n> \n> CREATE FUNCTION mkdir(text)\n> RETURNS void AS '@libdir@/regress@DLSUFFIX@', 'regress_mkdir'\n> LANGUAGE C STRICT\\;\n> REVOKE ALL ON FUNCTION mkdir FROM public;\n> SELECT mkdir('@testtablespace@');\n> CREATE TABLESPACE regress_tblspace LOCATION '@testtablespace@';\n\nSounds simple.\n\n>> I doubt that all the Windows environments would be able to get their\n>> hands on that.\n> \n>> And I am not sure either that this would work when it\n>> comes to the CI case, again on Windows.\n> \n> How might a CI provider break that?\n\nI am wondering about potential issues when it comes to create the\nbase tablespace path from the Postgres backend, while HEAD does it\nwhile relying on a restricted token obtained when starting\npg_regress.\n\nI have compiled a simple patch that uses a SQL function for the base\ntablespace directory creation, that I have tested on Linux and MSVC.\nTo get some coverage with the CF bot, I am adding a CF entry with this\nthread.\n\nI am still not sure if people would prefer this approach over what's\non HEAD. So if there are any opinions, please feel free.\n--\nMichael",
"msg_date": "Fri, 9 Apr 2021 15:00:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 03:00:31PM +0900, Michael Paquier wrote:\n> I have compiled a simple patch that uses a SQL function for the base\n> tablespace directory creation, that I have tested on Linux and MSVC.\n\n> I am still not sure if people would prefer this approach over what's\n> on HEAD. So if there are any opinions, please feel free.\n\n\"pg_regress --outputdir\" is not a great location for a file or directory\ncreated by a user other than the user running pg_regress. If one does \"make\ncheck\" and then \"make installcheck\" against a cluster running as a different\nuser, the rmtree() will fail, assuming typical umask values. An rmtree() at\nthe end of the tablespace test would mostly prevent that, but that can't help\nin the event of a mid-test crash.\n\nI'm not sure we should support installcheck against a server running as a\ndifferent user. If we should support it, then I'd probably look at letting\nthe caller pass in a server-writable directory. That directory would house\nthe tablespace instead of outputdir doing so.\n\n\n",
"msg_date": "Fri, 9 Apr 2021 20:07:10 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Fri, Apr 09, 2021 at 08:07:10PM -0700, Noah Misch wrote:\n> \"pg_regress --outputdir\" is not a great location for a file or directory\n> created by a user other than the user running pg_regress. If one does \"make\n> check\" and then \"make installcheck\" against a cluster running as a different\n> user, the rmtree() will fail, assuming typical umask values. An rmtree() at\n> the end of the tablespace test would mostly prevent that, but that can't help\n> in the event of a mid-test crash.\n\nYeah, I really don't think that we need to worry about multi-user\nscenarios with pg_regress like that though.\n\n> I'm not sure we should support installcheck against a server running as a\n> different user. If we should support it, then I'd probably look at letting\n> the caller pass in a server-writable directory. That directory would house\n> the tablespace instead of outputdir doing so.\n\nBut, then, we would be back to the pre-13 position where we'd need to\nhave something external to pg_regress in charge of cleaning up and\ncreating the tablespace path, no? That's basically what we want to\navoid with the Makefile rules. I can get that it could be interesting\nto be able to pass down a custom path for the test tablespace, but do\nwe really have a need for that?\n\nIt took some time for the CF bot to run the patch of this thread, but\nfrom what I can see the tests are passing on Windows under Cirrus CI:\nhttp://commitfest.cputube.org/michael-paquier.html\n\nSo it looks like this could be a different answer.\n--\nMichael",
"msg_date": "Mon, 12 Apr 2021 14:25:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "Re: Michael Paquier\n> http://commitfest.cputube.org/michael-paquier.html\n> \n> So it looks like this could be a different answer.\n\nThe mkdir() function looks like a sane and clean approach to me.\n\nChristoph\n\n\n",
"msg_date": "Mon, 12 Apr 2021 10:09:39 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 02:25:53PM +0900, Michael Paquier wrote:\n> On Fri, Apr 09, 2021 at 08:07:10PM -0700, Noah Misch wrote:\n> > \"pg_regress --outputdir\" is not a great location for a file or directory\n> > created by a user other than the user running pg_regress. If one does \"make\n> > check\" and then \"make installcheck\" against a cluster running as a different\n> > user, the rmtree() will fail, assuming typical umask values. An rmtree() at\n> > the end of the tablespace test would mostly prevent that, but that can't help\n> > in the event of a mid-test crash.\n> \n> Yeah, I really don't think that we need to worry about multi-user\n> scenarios with pg_regress like that though.\n\nChristoph Berg's first message on this thread reported doing that. If\nsupporting server_user!=pg_regress_user is unwarranted and Christoph Berg\nshould stop, then already-committed code suffices.\n\n> > I'm not sure we should support installcheck against a server running as a\n> > different user. If we should support it, then I'd probably look at letting\n> > the caller pass in a server-writable directory. That directory would house\n> > the tablespace instead of outputdir doing so.\n> \n> But, then, we would be back to the pre-13 position where we'd need to\n> have something external to pg_regress in charge of cleaning up and\n> creating the tablespace path, no?\n\nCorrect.\n\n> That's basically what we want to\n> avoid with the Makefile rules.\n\nThe race that commit 6c788d9 fixed is not inherent to Makefile rules. For\nexample, you could have instead caused test.sh to issue 'make\ninstallcheck-parallel TABLESPACEDIR=\"$outputdir\"/testtablespace' and have the\nmakefiles consult $(TABLESPACEDIR) rather than hard-code ./testtablespace.\n(That said, I like how 6c788d9 consolidated Windows and non-Windows paths.)\n\n> I can get that it could be interesting\n> to be able to pass down a custom path for the test tablespace, but do\n> we really have a need for that?\n\nI don't know. I never considered server_user!=pg_regress_user before this\nthread, and I don't plan to use it myself. Your latest patch originated to\nmake that case work, and my last message was reporting that the patch works\nfor a rather-narrow interpretation of server_user!=pg_regress_user, failing on\nvariations thereof. That might be fine.\n\n\n",
"msg_date": "Mon, 12 Apr 2021 22:36:10 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Mon, Apr 12, 2021 at 10:36:10PM -0700, Noah Misch wrote:\n> Christoph Berg's first message on this thread reported doing that. If\n> supporting server_user!=pg_regress_user is unwarranted and Christoph Berg\n> should stop, then already-committed code suffices.\n\nNot sure that we have ever claimed that. It is unfortunate that this\nhas broken a case that used to work, perhaps accidentally. At the \nsame time, Christoph has a workaround for the Debian suite, so it does\nnot seem like there is anything to do now, anyway.\n\n> The race that commit 6c788d9 fixed is not inherent to Makefile rules. For\n> example, you could have instead caused test.sh to issue 'make\n> installcheck-parallel TABLESPACEDIR=\"$outputdir\"/testtablespace' and have the\n> makefiles consult $(TABLESPACEDIR) rather than hard-code ./testtablespace.\n> (That said, I like how 6c788d9 consolidated Windows and non-Windows paths.)\n\nFWIW, I don't really want to split again this code path across\nplatforms. Better to have one to rule them all.\n\n> I don't know. I never considered server_user!=pg_regress_user before this\n> thread, and I don't plan to use it myself. Your latest patch originated to\n> make that case work, and my last message was reporting that the patch works\n> for a rather-narrow interpretation of server_user!=pg_regress_user, failing on\n> variations thereof. That might be fine.\n\nOkay. So.. As I am not sure if there is anything that needs to be\nacted on here for the moment, I would just close the case. If there\nare more voices in favor of the SQL function using mkdir(), I would\nnot object to use that, as that looks to work for all the cases where\npg_regress is used.\n--\nMichael",
"msg_date": "Tue, 13 Apr 2021 16:26:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Move tablespace path re-creation from the makefiles to pg_regress\n\nSo this didn't seem like a problem at the time, but while building\nbeta1 tarballs I discovered that it leaves behind \"testtablespace\"\nsubdirectories in various places where they aren't cleaned by\n\"make distclean\", resulting in scary noise in my diff against the\ntarballs:\n\nOnly in /home/postgres/pgsql/contrib/dblink: testtablespace\nOnly in /home/postgres/pgsql/contrib/file_fdw: testtablespace\nOnly in /home/postgres/pgsql/src/pl/plpgsql/src: testtablespace\n\nThis appears to be because pg_regress.c will now create the\ntablespace directory in any directory that has an \"input\"\nsubdirectory (and that randomness is because somebody saw\nfit to drop the code into convert_sourcefiles_in(), where\nit surely has no business being, not least because that\nmeans it's run twice).\n\n(BTW, the reason we don't see git complaining about this seems\nto be that it doesn't complain about empty subdirectories.)\n\nI think what we want to do is have this code invoked only in\ntest directories that explicitly ask for it, say with a new\n\"--make-testtablespace\" switch for pg_regress.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 May 2021 16:53:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "I wrote:\n> I think what we want to do is have this code invoked only in\n> test directories that explicitly ask for it, say with a new\n> \"--make-testtablespace\" switch for pg_regress.\n\nSay, as attached. (Windows part is untested.)\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 17 May 2021 17:51:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Mon, May 17, 2021 at 05:51:54PM -0400, Tom Lane wrote:\n> I wrote:\n>> I think what we want to do is have this code invoked only in\n>> test directories that explicitly ask for it, say with a new\n>> \"--make-testtablespace\" switch for pg_regress.\n> \n> Say, as attached. (Windows part is untested.)\n\nThanks. I was going to produce something this morning, but you have\nbeen faster than me.\n\nOne thing that's changing in this patch is that testtablespace would\nbe created even if the input directory does not exist when using\n--make-testtablespace-dir. I would have kept the creation of the\ntablespace path within convert_sourcefiles_in() for this reason.\nWorth noting that snprintf() is used twice instead of once to build\nthe tablespace path string. The Windows part works correctly.\n--\nMichael",
"msg_date": "Tue, 18 May 2021 09:26:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 17, 2021 at 05:51:54PM -0400, Tom Lane wrote:\n>> Say, as attached. (Windows part is untested.)\n\n> One thing that's changing in this patch is that testtablespace would\n> be created even if the input directory does not exist when using\n> --make-testtablespace-dir.\n\nYeah, I do not see a reason for there to be a connection. It's not\npg_regress' job to decide whether the testtablespace directory is\nneeded or not.\n\n> Worth noting that snprintf() is used twice instead of once to build\n> the tablespace path string.\n\nYeah. I considered making a global variable so that there'd be\njust one instance of that, but didn't think it amounted to less\nmess in the end.\n\n> The Windows part works correctly.\n\nThanks for testing! I'll push this after the release is tagged.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 May 2021 20:57:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
},
{
"msg_contents": "On Mon, May 17, 2021 at 08:57:55PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> One thing that's changing in this patch is that testtablespace would\n>> be created even if the input directory does not exist when using\n>> --make-testtablespace-dir.\n> \n> Yeah, I do not see a reason for there to be a connection. It's not\n> pg_regress' job to decide whether the testtablespace directory is\n> needed or not.\n\nFine by me. I don't think that's a big deal either way.\n\n>> Worth noting that snprintf() is used twice instead of once to build\n>> the tablespace path string.\n> \n> Yeah. I considered making a global variable so that there'd be\n> just one instance of that, but didn't think it amounted to less\n> mess in the end.\n\nNo problem from me here either.\n\n>> The Windows part works correctly.\n> \n> Thanks for testing! I'll push this after the release is tagged.\n\nThanks!\n--\nMichael",
"msg_date": "Tue, 18 May 2021 10:01:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Move tablespace path re-creation from the makefiles to\n pg_regres"
}
] |
[
{
"msg_contents": "Hi Julien,\r\nThanks a lot for the quick review. Please see my answer below in blue. Attached is the new patch.\r\n\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Julien Rouhaud\" <rjuju123@gmail.com>;\r\nDate: Tue, Mar 9, 2021 05:09 PM\r\nTo: \"Erica Zhang\"<ericazhangy@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@postgresql.org>;\r\nSubject: Re: Add some tests for pg_stat_statements compatibility verification under contrib\r\n\r\n\r\n\r\nHi,\r\n\r\nOn Tue, Mar 09, 2021 at 11:35:14AM +0800, Erica Zhang wrote:\r\n> Hi All,\r\n> On the master branch, it is possible to install multiple versions of pg_stat_statements with CREATE EXTENSION, but all the tests in sql/ on look at the latest version available, without testing past compatibility. \r\n> \r\n> Since we support to install lowest version 1.4 currently, add some tests to verify compatibility, upgrade from lower versions of pg_stat_statements.\r\n\r\nThe upgrade scripts are already tested as postgres will install 1.4 and perform\r\nall upgrades to reach the default version.\r\nThanks for pointing that the upgrades paths are covered by upgrade scripts tests. So I don't need to verify the upgrade, I will test the installation of different versions directly, any concern?\r\n\r\n\r\nBut an additional thing being tested here is the ABI compatibility when there's\r\na mismatch between the library and the SQL definition, which seems like a\r\nreasonable thing to test.\r\n\r\nLooking at the patch:\r\n\r\n+SELECT * FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4';\r\n\r\n\r\nWhat is this supposed to test? All those tests will break every time we change\r\nthe default version, which will add maintenance efforts. It could be good to\r\nhave one test breaking when changing the version to remind us to add a test for\r\nthe new version, but not more.\r\nHere I just want to verify that \"installed\" version is the expected version. But we do have the issue as you mentioned which will add maintenance efforts. \r\n\r\nSo I prefer to keep one test as now which can remind us to add a new version. As for others, just to check the count(*) to make sure installation is success.\r\nSuch as SELECT count(*) FROM pg_available_extensions WHERE name = 'pg_stat_statements' and installed_version = '1.4'; What do you think?",
"msg_date": "Wed, 10 Mar 2021 14:51:07 +0800",
"msg_from": "\"=?ISO-8859-1?B?RXJpY2EgWmhhbmc=?=\" <ericazhangy@qq.com>",
"msg_from_op": true,
"msg_subject": "Re: Add some tests for pg_stat_statements compatibility verification\n under contrib"
}
] |
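The strategy agreed on in this thread — one deliberately version-pinned check, plus simple success checks for the older versions — could be sketched roughly as follows. This is a hypothetical regression-test fragment, not the submitted patch, and it needs a running server; the version literals are assumptions that must track the extension's default version.

```sql
-- Hypothetical sketch of the test strategy discussed above.

-- One version-pinned check: its expected output breaks whenever the
-- default version changes, reminding us to add tests for the new version.
SELECT installed_version FROM pg_available_extensions
 WHERE name = 'pg_stat_statements';

-- For older versions, only verify that installation succeeds, without
-- pinning the default version in the expected output.
DROP EXTENSION IF EXISTS pg_stat_statements;
CREATE EXTENSION pg_stat_statements VERSION '1.4';
SELECT count(*) FROM pg_extension WHERE extname = 'pg_stat_statements';
```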
[
{
"msg_contents": "Dear hacker:\r\n I am a student from Nanjing University. I am having some trouble with DDL statement conversion. I have several MySQL DDL statements produced by the MySQL dump command. Now I want to convert the statements' grammar so that they are supported by PostgreSQL. However, I have tried the tool 'SQL Data Definition Language (DDL) Conversion to MySQL, PostgreSQL and MS-SQL' on the pgsql wiki website, but it seems to have some bugs and cannot convert the contents of my .sql files. So are there any tools available for me to accomplish this?\r\n The mail's attachment is an example I want to convert. Looking forward to your reply.\r\n Yours sincerely.",
"msg_date": "Wed, 10 Mar 2021 16:09:49 +0800",
"msg_from": "\"=?gb18030?B?0e7S3bTm?=\" <1057206466@qq.com>",
"msg_from_op": true,
"msg_subject": "Queries for PostgreSQL DDL convert"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 04:09:49PM +0800, 杨逸存 wrote:\n> Dear hacker:\n> I am a student from Nanjing University. I have some troubles about DDL statement convertion. I have several MySQL DDL statements from MySQL dump command. Now I wanna convert the statements' grammar so that they can be supported by PostgreSQL. However I have tried the tool - 'SQL Data Definition Language (DDL) Conversion to MySQL, PostgreSQL and MS-SQL' on the wiki website of pgsql but it seems to have some bugs and it cannot convert contents of my .sql files. So is there any tools avaliable for me to accomplish it?\n> The mail's attachment is an example I wanna convert. Looking forward to your reply.\n\nOra2pg (http://ora2pg.darold.net/documentation.html) supports migration from\nmysql to postgres. It should be able to take care of the DDL conversion for\nyou.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 15:28:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Queries for PostgreSQL DDL convert"
}
] |
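For reference, the kind of grammar changes such a conversion involves can be illustrated with a small sketch. The table below is hypothetical (not the attachment from the thread), and only shows a few of the most common differences a tool like ora2pg handles.

```sql
-- Illustrative only: typical MySQL-to-PostgreSQL DDL differences.
-- MySQL original (as produced by mysqldump):
--   CREATE TABLE student (
--       id INT AUTO_INCREMENT PRIMARY KEY,
--       name VARCHAR(50) NOT NULL,
--       enrolled DATETIME
--   ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- PostgreSQL equivalent:
CREATE TABLE student (
    id       serial PRIMARY KEY,   -- AUTO_INCREMENT -> serial (or identity)
    name     varchar(50) NOT NULL,
    enrolled timestamp             -- DATETIME -> timestamp
);                                 -- ENGINE/CHARSET clauses are dropped
```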
[
{
"msg_contents": "Hi,\n\nI was reviewing logical decoding of two-phase transactions feature,\nwhile reviewing the feature I was checking if there is any impact on\npublisher/subscriber upgrade.\n\nI checked the existing pg_upgrade behaviour with logical replication.\nI made a logical replication data instance with publisher and\nsubscriber with subscriber subscribed to a table. Then I tried\nupgrading publisher and subscriber individually. After upgrade I\nnoticed the following:\n\n1) Pg_logical/mappings files were not copied in the upgraded data folder:\n------------------------------------------------------------------\nPg_logical contents in old data folder:\npublisher/pg_logical/replorigin_checkpoint\npublisher/pg_logical/mappings:\nmap-32cb-4df-0_1767088-225-225\npublisher/pg_logical/snapshots:\n0-1643650.snap\n\nNew upgraded data folder:\npublisher1/pg_logical/replorigin_checkpoint\npublisher1/pg_logical/mappings:\npublisher1/pg_logical/snapshots:\n\n2) Replication slots were not copied:\nselect * from pg_replication_slots;\nslot_name | plugin | slot_type | datoid | database | temporary |\nactive | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn | wal_status | safe_wal_size | t\nwo_phase\n-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+--\n---------\n(0 rows)\n\n3) The subscriber is in disabled mode in the upgraded data:\nselect * from pg_subscription;\n oid | subdbid | subname | subowner | subenabled | subbinary |\nsubstream | subtwophase | subconninfo |\nsubslotname | subsynccommit | subpublicati\nons\n-------+---------+---------+----------+------------+-----------+-----------+-------------+------------------------------------------+-------------+---------------+-------------\n----\n16404 | 16401 | mysub | 10 | f | f | f\n | f | host=localhost port=5432 dbname=postgres | mysub\n | off | {mypub}\n(1 row)\n\n4) The 
pg_subscription_rel contents also were not copied:\nselect * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+----------\n(0 rows)\n\n5) While logical decoding of transactions, the decoded changes will be\nserialized based on logical_decoding_work_mem configuration. Even\nthese files were not copied during upgrade.\n\nDo we support upgrading of logical replication, if it is supported\ncould someone point me to the document link on how to upgrade logical\nreplication?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 10 Mar 2021 16:03:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Do we support upgrade of logical replication?"
},
{
"msg_contents": "On Wed, Mar 10, 2021 at 4:03 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> I was reviewing logical decoding of two-phase transactions feature,\n> while reviewing the feature I was checking if there is any impact on\n> publisher/subscriber upgrade.\n>\n> I checked the existing pg_upgrade behaviour with logical replication.\n> I made a logical replication data instance with publisher and\n> subscriber with subscriber subscribed to a table. Then I tried\n> upgrading publisher and subscriber individually. After upgrade I\n> noticed the following:\n>\n> 1) Pg_logical/mappings files were not copied in the upgraded data folder:\n> ------------------------------------------------------------------\n> Pg_logical contents in old data folder:\n> publisher/pg_logical/replorigin_checkpoint\n> publisher/pg_logical/mappings:\n> map-32cb-4df-0_1767088-225-225\n> publisher/pg_logical/snapshots:\n> 0-1643650.snap\n>\n> New upgraded data folder:\n> publisher1/pg_logical/replorigin_checkpoint\n> publisher1/pg_logical/mappings:\n> publisher1/pg_logical/snapshots:\n>\n> 2) Replication slots were not copied:\n> select * from pg_replication_slots;\n> slot_name | plugin | slot_type | datoid | database | temporary |\n> active | active_pid | xmin | catalog_xmin | restart_lsn |\n> confirmed_flush_lsn | wal_status | safe_wal_size | t\n> wo_phase\n> -----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------+------------+---------------+--\n> ---------\n> (0 rows)\n>\n> 3) The subscriber is in disabled mode in the upgraded data:\n> select * from pg_subscription;\n> oid | subdbid | subname | subowner | subenabled | subbinary |\n> substream | subtwophase | subconninfo |\n> subslotname | subsynccommit | subpublicati\n> ons\n> 
-------+---------+---------+----------+------------+-----------+-----------+-------------+------------------------------------------+-------------+---------------+-------------\n> ----\n> 16404 | 16401 | mysub | 10 | f | f | f\n> | f | host=localhost port=5432 dbname=postgres | mysub\n> | off | {mypub}\n> (1 row)\n>\n> 4) The pg_subscription_rel contents also were not copied:\n> select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+----------\n> (0 rows)\n>\n> 5) While logical decoding of transactions, the decoded changes will be\n> serialized based on logical_decoding_work_mem configuration. Even\n> these files were not copied during upgrade.\n>\n> Do we support upgrading of logical replication, if it is supported\n> could someone point me to the document link on how to upgrade logical\n> replication?\n>\n\nAs far as I can understand, the main reason we don't copy all these\nthings is that we can't retain the slots after upgrade. I think the\nreason for the same is that we reset WAL during upgrade and slots\nmight point to some old WAL location. Now, one might think that we can\ntry to copy WAL as well to allow slots to work after upgrade but the\nWAL format can change in newer version so that might be tricky.\n\nSo users need to probably recreate the slots and then perform the\ntablesync again via Alter Subscription ... Refresh Publication. Also,\nthat might require truncating the previous data. I am also not very\nsure about the procedure but maybe someone else can correct me or add\nmore to it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 Mar 2021 18:05:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Do we support upgrade of logical replication?"
}
] |
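The recovery procedure Amit outlines — recreate the slot, re-enable the subscription, and redo the table sync — might look roughly like this. It is a sketch only, not a documented upgrade procedure; the object names are the placeholders from the thread's example, and truncating existing data may or may not be appropriate depending on the situation.

```sql
-- Rough sketch only; mysub/mypub/mytable are placeholder names.

-- On the upgraded publisher: recreate the replication slot that was
-- not preserved by pg_upgrade.
SELECT pg_create_logical_replication_slot('mysub', 'pgoutput');

-- On the upgraded subscriber: clear possibly stale data, re-enable the
-- subscription, and re-run the initial table sync.
TRUNCATE mytable;
ALTER SUBSCRIPTION mysub ENABLE;
ALTER SUBSCRIPTION mysub REFRESH PUBLICATION WITH (copy_data = true);
```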
[
{
"msg_contents": "I initially posted this on the pgsql-general mailing list [5] but was\nadvised to also post this to the -hackers list as it deals with internals.\n\nWe've encountered a production performance problem with pg13 related to\nhow it fsyncs the whole data directory in certain scenarios, related to\nwhat Paul (bcc'ed) described in a post to pgsql-hackers [1].\n\nBackground:\n\nWe've observed the full recursive fsync is triggered when\n\n* pg_basebackup receives a streaming backup (via [2] fsync_dir_recurse\nor fsync_pgdata) unless --no-sync is specified\n* postgres starts up unclean (via [3] SyncDataDirectory)\n\nWe run multiple postgres clusters and some of those clusters have many\n(~450) databases (one database-per-customer) meaning that the postgres\ndata directory has around 700,000 files.\n\nOn one of our less loaded servers this takes ~7 minutes to complete, but\non another [4] this takes ~90 minutes.\n\nObviously this is untenable risk. We've modified our process that\nbootstraps a replica via pg_basebackup to instead do \"pg_basebackup\n--no-sync…\" followed by a \"sync\", but we don't have any way to do the\nequivalent for the postgres startup.\n\nI presume the reason postgres doesn't blindly run a sync() is that we\ndon't know what other I/O is on the system and it'd be rude to affect\nother services. 
That makes sense, except for our environment the work\ndone by the recursive fsync is orders of magnitude more disruptive than\na sync().\n\nMy questions are:\n\n* is there a knob missing we can configure?\n* can we get an opt-in knob to use a single sync() call instead of a\nrecursive fsync()?\n* would you be open to merging a patch providing said knob?\n* is there something else we missed?\n\nThanks!\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/CAEET0ZHGnbXmi8yF3ywsDZvb3m9CbdsGZgfTXscQ6agcbzcZAw@mail.gmail.com\n[2]:\nhttps://github.com/postgres/postgres/blob/master/src/bin/pg_basebackup/pg_basebackup.c#L2181\n[3]:\nhttps://github.com/postgres/postgres/blob/master/src/backend/access/transam/xlog.c#L6495\n[4]: It should be identical config-wise. It isn't starved for IO but\ndoes have other regular write workloads\n[5]:\nhttps://www.postgresql-archive.org/fdatasync-performance-problem-with-large-number-of-DB-files-td6184094.html\n\n-- \nMichael Brown\nCivilized Discourse Construction Kit, Inc.\nhttps://www.discourse.org/\n\n\n\n",
"msg_date": "Wed, 10 Mar 2021 15:21:54 -0500",
"msg_from": "Michael Brown <michael.brown@discourse.org>",
"msg_from_op": true,
"msg_subject": "fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 11:01 AM Michael Brown\n<michael.brown@discourse.org> wrote:\n> * pg_basebackup receives a streaming backup (via [2] fsync_dir_recurse\n> or fsync_pgdata) unless --no-sync is specified\n> * postgres starts up unclean (via [3] SyncDataDirectory)\n>\n> We run multiple postgres clusters and some of those clusters have many\n> (~450) databases (one database-per-customer) meaning that the postgres\n> data directory has around 700,000 files.\n>\n> On one of our less loaded servers this takes ~7 minutes to complete, but\n> on another [4] this takes ~90 minutes.\n\nOuch.\n\n> My questions are:\n>\n> * is there a knob missing we can configure?\n> * can we get an opt-in knob to use a single sync() call instead of a\n> recursive fsync()?\n> * would you be open to merging a patch providing said knob?\n> * is there something else we missed?\n\nAs discussed on that other thread, I don't think sync() is an option\n(it doesn't wait on all OSes or in the standard and it doesn't report\nerrors). syncfs() on Linux 5.8+ looks like a good candidate though,\nand I think we'd consider a patch like that. I mean, I even posted\none[1] in that other thread. There will of course be cases where\nthat's slower (small database sharing filesystem with other software\nthat has a lot of dirty data to write back).\n\nI also wrote a WAL-and-checkpoint based prototype[2], which, among\nother advantages such as being faster, not ignoring errors and not\ntriggering collateral write-back storms, happens to work on all\noperating systems. 
On the other hand it requires a somewhat dogmatic\nswitch in thinking about the meaning of checkpoints (I mean, it\nrequires humans to promise not to falsify checkpoints by copying\ndatabases around underneath us), which may be hard to sell (I didn't\ntry very hard), and there may be subtleties I have missed...\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGKT6XiPiEJrqeOFGi7RYCGzbBysF9pyWwv0-jm-oNajxg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BhUKGKHhDNnN6fxf6qrAx9h%2BmjdNU2Zmx7ztJzFQ0C5%3Du3QPg%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 11 Mar 2021 11:38:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 11:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Mar 11, 2021 at 11:01 AM Michael Brown\n> <michael.brown@discourse.org> wrote:\n> > * is there a knob missing we can configure?\n> > * can we get an opt-in knob to use a single sync() call instead of a\n> > recursive fsync()?\n> > * would you be open to merging a patch providing said knob?\n> > * is there something else we missed?\n>\n> As discussed on that other thread, I don't think sync() is an option\n> (it doesn't wait on all OSes or in the standard and it doesn't report\n> errors). syncfs() on Linux 5.8+ looks like a good candidate though,\n> and I think we'd consider a patch like that. I mean, I even posted\n> one[1] in that other thread. There will of course be cases where\n> that's slower (small database sharing filesystem with other software\n> that has a lot of dirty data to write back).\n\nThinking about this some more, if you were to propose a patch like\nthat syncfs() one but make it a configurable option, I'd personally be\nin favour of trying to squeeze it into v14. Others might object on\ncommitfest procedural grounds, I dunno, but I think this is a real\noperational issue and that's a fairly simple and localised change.\nI've run into a couple of users who have just commented that recursive\nfsync() code out!\n\nI'd probably make it an enum-style GUC, because I intend to do some\nmore work on my \"precise\" alternative, though not in time for this\nrelease, and it could just as well be an option too.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 12:30:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Thinking about this some more, if you were to propose a patch like\n> that syncfs() one but make it a configurable option, I'd personally be\n> in favour of trying to squeeze it into v14. Others might object on\n> commitfest procedural grounds, I dunno, but I think this is a real\n> operational issue and that's a fairly simple and localised change.\n> I've run into a couple of users who have just commented that recursive\n> fsync() code out!\n\nI'm a little skeptical about the \"simple\" part. At minimum, you'd\nhave to syncfs() each tablespace, since we have no easy way to tell\nwhich of them are on different filesystems. (Although, if we're\npresuming this is Linux-only, we might be able to tell with some\nunportable check or other.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Mar 2021 19:16:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Thinking about this some more, if you were to propose a patch like\n> > that syncfs() one but make it a configurable option, I'd personally be\n> > in favour of trying to squeeze it into v14. Others might object on\n> > commitfest procedural grounds, I dunno, but I think this is a real\n> > operational issue and that's a fairly simple and localised change.\n> > I've run into a couple of users who have just commented that recursive\n> > fsync() code out!\n>\n> I'm a little skeptical about the \"simple\" part. At minimum, you'd\n> have to syncfs() each tablespace, since we have no easy way to tell\n> which of them are on different filesystems. (Although, if we're\n> presuming this is Linux-only, we might be able to tell with some\n> unportable check or other.)\n\nRight, the patch knows about that:\n\n+ /*\n+ * On Linux, we don't have to open every single file one by one. We can\n+ * use syncfs() to sync whole filesystems. We only expect filesystem\n+ * boundaries to exist where we tolerate symlinks, namely pg_wal and the\n+ * tablespaces, so we call syncfs() for each of those directories.\n+ */\n\n\n",
"msg_date": "Thu, 11 Mar 2021 13:17:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/11 8:30, Thomas Munro wrote:\n> On Thu, Mar 11, 2021 at 11:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Thu, Mar 11, 2021 at 11:01 AM Michael Brown\n>> <michael.brown@discourse.org> wrote:\n>>> * is there a knob missing we can configure?\n>>> * can we get an opt-in knob to use a single sync() call instead of a\n>>> recursive fsync()?\n>>> * would you be open to merging a patch providing said knob?\n>>> * is there something else we missed?\n>>\n>> As discussed on that other thread, I don't think sync() is an option\n>> (it doesn't wait on all OSes or in the standard and it doesn't report\n>> errors). syncfs() on Linux 5.8+ looks like a good candidate though,\n>> and I think we'd consider a patch like that. I mean, I even posted\n>> one[1] in that other thread. There will of course be cases where\n>> that's slower (small database sharing filesystem with other software\n>> that has a lot of dirty data to write back).\n> \n> Thinking about this some more, if you were to propose a patch like\n> that syncfs() one but make it a configurable option, I'd personally be\n> in favour of trying to squeeze it into v14. Others might object on\n> commitfest procedural grounds, I dunno, but I think this is a real\n> operational issue and that's a fairly simple and localised change.\n\n+1 to push this kind of change into v14!!\n\n> I've run into a couple of users who have just commented that recursive\n> fsync() code out!\n\nBTW, we can skip that recursive fsync() by disabling fsync GUC even without\ncommenting out the code?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:00:37 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 2:00 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/03/11 8:30, Thomas Munro wrote:\n> > I've run into a couple of users who have just commented that recursive\n> > fsync() code out!\n>\n> BTW, we can skip that recursive fsync() by disabling fsync GUC even without\n> commenting out the code?\n\nThose users wanted fsync=on because they wanted to recover to a normal\nonline system after a crash, but they believed that the preceding\nfsync of the data directory was useless, because replaying the WAL\nshould be enough. IMHO they were nearly on the right track, and the\nprototype patch I linked earlier as [2] was my attempt to find the\nspecific reasons why that doesn't work and fix them. So far, I\nfigured out that you still have to remember to fsync the WAL files\n(otherwise you're replaying WAL that potentially hasn't reached the\ndisk), and data files holding blocks that recovery decided to skip due\nto BLK_DONE (otherwise you might decide to skip replay because of a\nhigher LSN that is on a page that is in the kernel's cache but not yet\non disk).\n\n\n",
"msg_date": "Thu, 11 Mar 2021 14:20:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Mar 11, 2021 at 1:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm a little skeptical about the \"simple\" part. At minimum, you'd\n>> have to syncfs() each tablespace, since we have no easy way to tell\n>> which of them are on different filesystems. (Although, if we're\n>> presuming this is Linux-only, we might be able to tell with some\n>> unportable check or other.)\n\n> Right, the patch knows about that:\n\nI noticed that the syncfs man page present in RHEL8 seemed a little\nsquishy on the critical question of error reporting. It promises\nthat syncfs will wait for I/O completion, but it doesn't say in so\nmany words that I/O errors will be reported (and the list of\napplicable errno codes is only EBADF, not very reassuring).\n\nTrolling the net, I found a newer-looking version of the man page,\nand behold it says\n\n In mainline kernel versions prior to 5.8, syncfs() will fail only\n when passed a bad file descriptor (EBADF). Since Linux 5.8,\n syncfs() will also report an error if one or more inodes failed\n to be written back since the last syncfs() call.\n\nSo this means that in less-than-bleeding-edge kernels, syncfs can\nonly be regarded as a dangerous toy. If we expose an option to use\nit, there had better be large blinking warnings in the docs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Mar 2021 20:25:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 2:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Trolling the net, I found a newer-looking version of the man page,\n> and behold it says\n>\n> In mainline kernel versions prior to 5.8, syncfs() will fail only\n> when passed a bad file descriptor (EBADF). Since Linux 5.8,\n> syncfs() will also report an error if one or more inodes failed\n> to be written back since the last syncfs() call.\n>\n> So this means that in less-than-bleeding-edge kernels, syncfs can\n> only be regarded as a dangerous toy. If we expose an option to use\n> it, there had better be large blinking warnings in the docs.\n\nAgreed. Perhaps we could also try to do something programmatic about that.\n\nIts fsync() was also pretty rough for the first 28 years.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 14:32:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 11, 2021 at 2:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Mar 11, 2021 at 2:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Trolling the net, I found a newer-looking version of the man page,\n> > and behold it says\n> >\n> > In mainline kernel versions prior to 5.8, syncfs() will fail only\n> > when passed a bad file descriptor (EBADF). Since Linux 5.8,\n> > syncfs() will also report an error if one or more inodes failed\n> > to be written back since the last syncfs() call.\n> >\n> > So this means that in less-than-bleeding-edge kernels, syncfs can\n> > only be regarded as a dangerous toy. If we expose an option to use\n> > it, there had better be large blinking warnings in the docs.\n>\n> Agreed. Perhaps we could also try to do something programmatic about that.\n\nTime being of the essence, here is the patch I posted last year, this\ntime with a GUC and some docs. You can set sync_after_crash to\n\"fsync\" (default) or \"syncfs\" if you have it.\n\nI would plan to extend that to include a third option as already\ndiscussed in the other thread, maybe something like \"wal\" (= sync WAL\nfiles and then do extra analysis of WAL data to sync only data\nmodified since checkpoint but not replayed), but that'd be material\nfor PG15.",
"msg_date": "Mon, 15 Mar 2021 11:52:35 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Mon, Mar 15, 2021 at 11:52 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Time being of the essence, here is the patch I posted last year, this\n> time with a GUC and some docs. You can set sync_after_crash to\n> \"fsync\" (default) or \"syncfs\" if you have it.\n\nCfbot told me to add HAVE_SYNCFS to Solution.pm, and I fixed a couple of typos.",
"msg_date": "Mon, 15 Mar 2021 12:33:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\r\n\r\n> On 2021/3/15, 7:34 AM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n\r\n >> On Mon, Mar 15, 2021 at 11:52 AM Thomas Munro <thomas.munro@gmail.com> wrote:\r\n >> Time being of the essence, here is the patch I posted last year, this\r\n >> time with a GUC and some docs. You can set sync_after_crash to\r\n >> \"fsync\" (default) or \"syncfs\" if you have it.\r\n\r\n> Cfbot told me to add HAVE_SYNCFS to Solution.pm, and I fixed a couple of typos.\r\n\r\nBy the way, there is a common case where we could skip the fsync: an already-fsync-ed standby generated by pg_rewind/pg_basebackup.\r\nThe state of those standbys is surely not DB_SHUTDOWNED/DB_SHUTDOWNED_IN_RECOVERY, so the\r\npgdata directory is fsync-ed again during startup when starting those pg instances. We could ask users not to fsync\r\nduring pg_rewind & pg_basebackup, but we probably want to fsync just some files in pg_rewind (see [1]), so better\r\nto let the startup process skip the unnecessary fsync? As to the solution, using a GUC or writing something in some file like\r\nbackup_label(?) does not seem to be a good idea, since\r\n1. With a GUC, we still expect fsync after real crash recovery, so we would need to reset the GUC and also specify pgoptions in the pg_ctl command.\r\n2. Writing some hint information to a file like backup_label(?) in pg_rewind/pg_basebackup could work, but people might\r\n copy the pgdata directory and then we would still need the fsync.\r\nThe only simple solution I can think of is to let the user touch a file to hint the startup process, before starting the pg instance.\r\n\r\n[1] https://www.postgresql.org/message-id/flat/25CFBDF2-5551-4CC3-ADEB-434B6B1BAD16%40vmware.com#734e7dc77f0760a3a64e808476ecc592\r\n\r\n",
"msg_date": "Mon, 15 Mar 2021 14:30:13 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 3:30 AM Paul Guo <guopa@vmware.com> wrote:\n> By the way, there is a usual case that we could skip fsync: A fsync-ed already standby generated by pg_rewind/pg_basebackup.\n> The state of those standbys are surely not DB_SHUTDOWNED/DB_SHUTDOWNED_IN_RECOVERY, so the\n> pgdata directory is fsync-ed again during startup when starting those pg instances. We could ask users to not fsync\n> during pg_rewind&pg_basebackup, but we probably want to just fsync some files in pg_rewind (see [1]), so better\n> let the startup process skip the unnecessary fsync? As to the solution, using guc or writing something in some files like\n> backup_label(?) does not seem to be good ideas since\n> 1. Use guc, we still expect fsync after real crash recovery so we need to reset the guc also need to specify pgoptions in pg_ctl command.\n> 2. Write some hint information to files like backup_label(?) in pg_rewind/pg_basebackup, but people might\n> copy the pgdata directory and then we still need fsync.\n> The only one simple solution I can think out is to let user touch a file to hint startup, before starting the pg instance.\n\nAs a thought experiment only, I wonder if there is a way to make your\ntouch-a-special-signal-file scheme more reliable and less dangerous\n(considering people might copy the signal file around or otherwise\nscrew this up). It seems to me that invalidation is the key, and\n\"unlink the signal file after the first crash recovery\" isn't good\nenough. Hmm What if the file contained a fingerprint containing...\nlet's see... checkpoint LSN, hostname, MAC address, pgdata path, ...\n(add more seasoning to taste), and then also some flags to say what is\nknown to be fully fsync'd already: the WAL, pgdata but only as far as\nchanges up to the checkpoint LSN, or all of pgdata? 
Then you could be\nconservative for a non-match, but skip the extra work in some common\ncases like pg_basebackup, as long as you trust the fingerprint scheme\nnot to produce false positives. Or something like that...\n\nI'm not too keen to invent clever new schemes for PG14, though. This\nsync_after_crash=syncfs scheme is pretty simple, and has the advantage\nthat it's very cheap to do it extra redundant times assuming nothing\nelse is creating new dirty kernel pages in serious quantities. Is\nthat useful enough? In particular it avoids the dreaded \"open\n1,000,000 uncached files over high latency network storage\" problem.\n\nI don't want to add a hypothetical sync_after_crash=none, because it\nseems like generally a bad idea. We already have a\nrunning-with-scissors mode you could use for that: fsync=off.\n\n\n",
"msg_date": "Tue, 16 Mar 2021 12:15:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/15 8:33, Thomas Munro wrote:\n> On Mon, Mar 15, 2021 at 11:52 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Time being of the essence, here is the patch I posted last year, this\n>> time with a GUC and some docs. You can set sync_after_crash to\n>> \"fsync\" (default) or \"syncfs\" if you have it.\n> \n> Cfbot told me to add HAVE_SYNCFS to Solution.pm, and I fixed a couple of typos.\n\nThanks for the patch!\n\n+ When set to <literal>fsync</literal>, which is the default,\n+ <productname>PostgreSQL</productname> will recursively open and fsync\n+ all files in the data directory before crash recovery begins.\n\nIsn't this a bit misleading? This may cause users to misunderstand that\nsuch fsync can happen only in the case of crash recovery.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 16 Mar 2021 17:10:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/16 8:15, Thomas Munro wrote:\n> On Tue, Mar 16, 2021 at 3:30 AM Paul Guo <guopa@vmware.com> wrote:\n>> By the way, there is a usual case that we could skip fsync: A fsync-ed already standby generated by pg_rewind/pg_basebackup.\n>> The state of those standbys are surely not DB_SHUTDOWNED/DB_SHUTDOWNED_IN_RECOVERY, so the\n>> pgdata directory is fsync-ed again during startup when starting those pg instances. We could ask users to not fsync\n>> during pg_rewind&pg_basebackup, but we probably want to just fsync some files in pg_rewind (see [1]), so better\n>> let the startup process skip the unnecessary fsync? As to the solution, using guc or writing something in some files like\n>> backup_label(?) does not seem to be good ideas since\n>> 1. Use guc, we still expect fsync after real crash recovery so we need to reset the guc also need to specify pgoptions in pg_ctl command.\n>> 2. Write some hint information to files like backup_label(?) in pg_rewind/pg_basebackup, but people might\n>> copy the pgdata directory and then we still need fsync.\n>> The only one simple solution I can think out is to let user touch a file to hint startup, before starting the pg instance.\n> \n> As a thought experiment only, I wonder if there is a way to make your\n> touch-a-special-signal-file scheme more reliable and less dangerous\n> (considering people might copy the signal file around or otherwise\n> screw this up). It seems to me that invalidation is the key, and\n> \"unlink the signal file after the first crash recovery\" isn't good\n> enough. Hmm What if the file contained a fingerprint containing...\n> let's see... checkpoint LSN, hostname, MAC address, pgdata path, ...\n> (add more seasoning to taste), and then also some flags to say what is\n> known to be fully fsync'd already: the WAL, pgdata but only as far as\n> changes up to the checkpoint LSN, or all of pgdata? 
Then you could be\n> conservative for a non-match, but skip the extra work in some common\n> cases like pg_basebackup, as long as you trust the fingerprint scheme\n> not to produce false positives. Or something like that...\n> \n> I'm not too keen to invent clever new schemes for PG14, though. This\n> sync_after_crash=syncfs scheme is pretty simple, and has the advantage\n> that it's very cheap to do it extra redundant times assuming nothing\n> else is creating new dirty kernel pages in serious quantities. Is\n> that useful enough? In particular it avoids the dreaded \"open\n> 1,000,000 uncached files over high latency network storage\" problem.\n> \n> I don't want to add a hypothetical sync_after_crash=none, because it\n> seems like generally a bad idea. We already have a\n> running-with-scissors mode you could use for that: fsync=off.\n\nI heard that some backup tools sync the database directory when restoring it.\nI guess that those who use such tools might want the option to disable such\nstartup sync (i.e., sync_after_crash=none) because it's not necessary.\n\nThey can skip that sync by fsync=off. But if they just want to skip only that\nstartup sync and make subsequent recovery (or standby server) work with\nfsync=on, they would need to shut down the server after that startup sync\nfinishes, enable fsync, and restart the server. In this case, since the server\nis restarted with the state=DB_SHUTDOWNED_IN_RECOVERY, the startup sync\nwould not be performed. This procedure is tricky. So IMO supporting\nsync_after_crash=none would be helpful for this case and simple.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 16 Mar 2021 17:29:05 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 4:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2021/03/16 8:15, Thomas Munro wrote:\n> > On Tue, Mar 16, 2021 at 3:30 AM Paul Guo <guopa@vmware.com> wrote:\n> >> By the way, there is a usual case that we could skip fsync: A fsync-ed already standby generated by pg_rewind/pg_basebackup.\n> >> The state of those standbys are surely not DB_SHUTDOWNED/DB_SHUTDOWNED_IN_RECOVERY, so the\n> >> pgdata directory is fsync-ed again during startup when starting those pg instances. We could ask users to not fsync\n> >> during pg_rewind&pg_basebackup, but we probably want to just fsync some files in pg_rewind (see [1]), so better\n> >> let the startup process skip the unnecessary fsync? As to the solution, using guc or writing something in some files like\n> >> backup_label(?) does not seem to be good ideas since\n> >> 1. Use guc, we still expect fsync after real crash recovery so we need to reset the guc also need to specify pgoptions in pg_ctl command.\n> >> 2. Write some hint information to files like backup_label(?) in pg_rewind/pg_basebackup, but people might\n> >> copy the pgdata directory and then we still need fsync.\n> >> The only one simple solution I can think out is to let user touch a file to hint startup, before starting the pg instance.\n> >\n> > As a thought experiment only, I wonder if there is a way to make your\n> > touch-a-special-signal-file scheme more reliable and less dangerous\n> > (considering people might copy the signal file around or otherwise\n> > screw this up). It seems to me that invalidation is the key, and\n> > \"unlink the signal file after the first crash recovery\" isn't good\n> > enough. Hmm What if the file contained a fingerprint containing...\n> > let's see... checkpoint LSN, hostname, MAC address, pgdata path, ...\n\nhostname, mac address, or pgdata path (or e.g. inode of a file?) 
might\nbe the same after VM cloning or directory copying, though that is not usual.\nI cannot come up with a stable scheme that invalidates the information\nafter VM/directory cloning or copying, so the simplest way seems to be\nto leave the decision (i.e. touching a file) to users, instead of\nhaving pg_rewind/pg_basebackup write the information automatically.\n\n> > (add more seasoning to taste), and then also some flags to say what is\n> > known to be fully fsync'd already: the WAL, pgdata but only as far as\n> > changes up to the checkpoint LSN, or all of pgdata? Then you could be\n> > conservative for a non-match, but skip the extra work in some common\n> > cases like pg_basebackup, as long as you trust the fingerprint scheme\n> > not to produce false positives. Or something like that...\n> >\n> > I'm not too keen to invent clever new schemes for PG14, though. This\n> > sync_after_crash=syncfs scheme is pretty simple, and has the advantage\n> > that it's very cheap to do it extra redundant times assuming nothing\n> > else is creating new dirty kernel pages in serious quantities. Is\n> > that useful enough? In particular it avoids the dreaded \"open\n> > 1,000,000 uncached files over high latency network storage\" problem.\n> >\n> > I don't want to add a hypothetical sync_after_crash=none, because it\n> > seems like generally a bad idea. We already have a\n> > running-with-scissors mode you could use for that: fsync=off.\n>\n> I heard that some backup tools sync the database directory when restoring it.\n> I guess that those who use such tools might want the option to disable such\n> startup sync (i.e., sync_after_crash=none) because it's not necessary.\n\nThis scenario seems to support the file-touching solution, since\nwe do not have an automatic way to skip the fsync. 
I thought about using\nsync_after_crash=none to fix my issue, but as I said, we would need to reset\nthe GUC since we still expect fsync/syncfs after a second crash.\n\n> They can skip that sync by fsync=off. But if they just want to skip only that\n> startup sync and make subsequent recovery (or standby server) work with\n> fsync=on, they would need to shutdown the server after that startup sync\n> finishes, enable fsync, and restart the server. In this case, since the server\n> is restarted with the state=DB_SHUTDOWNED_IN_RECOVERY, the startup sync\n> would not be performed. This procedure is tricky. So IMO supporting\n\nThis seems to make the process complex. From the perspective of product design,\nthis does not seem attractive.\n\n> sync_after_crash=none would be helpful for this case and simple.\n\nRegards,\nPaul Guo (Vmware)\n\n\n",
"msg_date": "Tue, 16 Mar 2021 17:44:26 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 9:10 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Thanks for the patch!\n>\n> + When set to <literal>fsync</literal>, which is the default,\n> + <productname>PostgreSQL</productname> will recursively open and fsync\n> + all files in the data directory before crash recovery begins.\n>\n> Isn't this a bit misleading? This may cause users to misunderstand that\n> such fsync can happen only in the case of crash recovery.\n\nIf I insert the following extra sentence after that one, is it better?\n \"This applies whenever starting a database cluster that did not shut\ndown cleanly, including copies created with pg_basebackup.\"\n\n\n",
"msg_date": "Wed, 17 Mar 2021 16:02:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, Mar 16, 2021 at 9:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/03/16 8:15, Thomas Munro wrote:\n> > I don't want to add a hypothetical sync_after_crash=none, because it\n> > seems like generally a bad idea. We already have a\n> > running-with-scissors mode you could use for that: fsync=off.\n>\n> I heard that some backup tools sync the database directory when restoring it.\n> I guess that those who use such tools might want the option to disable such\n> startup sync (i.e., sync_after_crash=none) because it's not necessary.\n\nHopefully syncfs() will return quickly in that case, without doing any work?\n\n> They can skip that sync by fsync=off. But if they just want to skip only that\n> startup sync and make subsequent recovery (or standby server) work with\n> fsync=on, they would need to shutdown the server after that startup sync\n> finishes, enable fsync, and restart the server. In this case, since the server\n> is restarted with the state=DB_SHUTDOWNED_IN_RECOVERY, the startup sync\n> would not be performed. This procedure is tricky. So IMO supporting\n> sync_after_crash=none would be helpful for this case and simple.\n\nI still do not like this footgun :-) However, perhaps I am being\noverly dogmatic. Consider the change in d8179b00, which decided that\nI/O errors in this phase should be reported at LOG level rather than\nERROR. In contrast, my \"sync_after_crash=wal\" mode (which I need to\nrebase over this) will PANIC in this case, because any syncing will be\nhandled through the usual checkpoint codepaths.\n\nDo you think it would be OK to commit this feature with just \"fsync\"\nand \"syncfs\", and then to continue to consider adding \"none\" as a\npossible separate commit?\n\n\n",
"msg_date": "Wed, 17 Mar 2021 16:45:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 11:45 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Mar 16, 2021 at 9:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > On 2021/03/16 8:15, Thomas Munro wrote:\n> > > I don't want to add a hypothetical sync_after_crash=none, because it\n> > > seems like generally a bad idea. We already have a\n> > > running-with-scissors mode you could use for that: fsync=off.\n> >\n> > I heard that some backup tools sync the database directory when restoring it.\n> > I guess that those who use such tools might want the option to disable such\n> > startup sync (i.e., sync_after_crash=none) because it's not necessary.\n>\n> Hopefully syncfs() will return quickly in that case, without doing any work?\n\nI just quickly reviewed the patch (the code part). It looks good. My only\nconcern or question is whether syncfs() works on the fd that do_syncfs()\nopens via a symlink - I'm not 100% sure.\n\nI think we could consider reviewing and pushing the syncfs patch now,\nsince the fsync issue has a real impact; although there seems to be a\nbetter plan for this in the future, syncfs may make the sync step during\nstartup much faster. Today, for the first time, I encountered a real case\nin a production environment: startup spends more than an hour on the fsync\nstep. I'm pretty sure that the pgdata directory is already synced, and from\nthe startup process's pstack I saw it slowly open(), fsync() (which surely\ndoes nothing), and close() the files one by one; pre_sync_fname() is also\na burden in such a scenario. So this issue is really annoying.\n\nWe could discuss further optimizing the special crash recovery scenario\nwhere users explicitly know the sync step can be skipped (this scenario is\nsurely not unusual), even with this patch. 
The syncfs patch could also be used for this\nscenario, but the\nfilesystem might be shared by other applications (this is not unusual,\nand happens in my\ncustomer's environment, for example), so syncfs() would be (probably much)\nslower than skipping the sync step.\n\n-- \nPaul Guo (Vmware)\n\n\n",
"msg_date": "Wed, 17 Mar 2021 18:42:46 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/17 12:02, Thomas Munro wrote:\n> On Tue, Mar 16, 2021 at 9:10 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Thanks for the patch!\n>>\n>> + When set to <literal>fsync</literal>, which is the default,\n>> + <productname>PostgreSQL</productname> will recursively open and fsync\n>> + all files in the data directory before crash recovery begins.\n>>\n>> Isn't this a bit misleading? This may cause users to misunderstand that\n>> such fsync can happen only in the case of crash recovery.\n> \n> If I insert the following extra sentence after that one, is it better?\n> \"This applies whenever starting a database cluster that did not shut\n> down cleanly, including copies created with pg_basebackup.\"\n\nYes. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 18 Mar 2021 15:13:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/17 12:45, Thomas Munro wrote:\n> On Tue, Mar 16, 2021 at 9:29 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2021/03/16 8:15, Thomas Munro wrote:\n>>> I don't want to add a hypothetical sync_after_crash=none, because it\n>>> seems like generally a bad idea. We already have a\n>>> running-with-scissors mode you could use for that: fsync=off.\n>>\n>> I heard that some backup tools sync the database directory when restoring it.\n>> I guess that those who use such tools might want the option to disable such\n>> startup sync (i.e., sync_after_crash=none) because it's not necessary.\n> \n> Hopefully syncfs() will return quickly in that case, without doing any work?\n\nYes, in Linux.\n\n> \n>> They can skip that sync by fsync=off. But if they just want to skip only that\n>> startup sync and make subsequent recovery (or standby server) work with\n>> fsync=on, they would need to shutdown the server after that startup sync\n>> finishes, enable fsync, and restart the server. In this case, since the server\n>> is restarted with the state=DB_SHUTDOWNED_IN_RECOVERY, the startup sync\n>> would not be performed. This procedure is tricky. So IMO supporting\n>> sync_after_crash=none would be helpful for this case and simple.\n> \n> I still do not like this footgun :-) However, perhaps I am being\n> overly dogmatic. Consider the change in d8179b00, which decided that\n> I/O errors in this phase should be reported at LOG level rather than\n> ERROR. In contrast, my \"sync_after_crash=wal\" mode (which I need to\n> rebase over this) will PANIC in this case, because any syncing will be\n> handled through the usual checkpoint codepaths.\n> \n> Do you think it would be OK to commit this feature with just \"fsync\"\n> and \"syncfs\", and then to continue to consider adding \"none\" as a\n> possible separate commit?\n\n+1. 
The \"syncfs\" feature is useful whether we also support the \"none\" mode or not.\nIt's a good idea to commit \"syncfs\" in advance.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 18 Mar 2021 15:46:11 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "About the syncfs patch, my first impression on the guc name sync_after_crash\nis that it is a boolean type. Not sure about other people's feelings. Do you guys think\nit is better to rename it to a clearer name like sync_method_after_crash or others?\n\n",
"msg_date": "Thu, 18 Mar 2021 07:52:29 +0000",
"msg_from": "Paul Guo <guopa@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 8:52 PM Paul Guo <guopa@vmware.com> wrote:\n> About the syncfs patch, my first impression on the guc name sync_after_crash\n> is that it is a boolean type. Not sure about other people's feeling. Do you guys think\n> It is better to rename it to a clearer name like sync_method_after_crash or others?\n\nWorks for me. Here is a new version like that, also including the\ndocumentation change discussed with Fujii-san, and a couple of\ncosmetic changes.",
"msg_date": "Thu, 18 Mar 2021 23:19:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Wed, Mar 17, 2021 at 11:42 PM Paul Guo <paulguo@gmail.com> wrote:\n> I just quickly reviewed the patch (the code part). It looks good. Only\n> one concern\n> or question is do_syncfs() for symlink opened fd for syncfs() - I'm\n> not 100% sure.\n\nAlright, let me try to prove that it works the way we want with an experiment.\n\nI'll make a directory with a file in it, and create a symlink to it in\nanother filesystem:\n\ntmunro@x1:~/junk$ mkdir my_wal_dir\ntmunro@x1:~/junk$ touch my_wal_dir/foo\ntmunro@x1:~/junk$ ln -s /home/tmunro/junk/my_wal_dir /dev/shm/my_wal_dir_symlink\ntmunro@x1:~/junk$ ls /dev/shm/my_wal_dir_symlink/\nfoo\n\nNow I'll write a program that repeatedly dirties the first block of\nfoo, and calls syncfs() on the containing directory that it opened\nusing the symlink:\n\ntmunro@x1:~/junk$ cat test.c\n#define _GNU_SOURCE\n\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n\nint\nmain()\n{\n int symlink_fd, file_fd;\n\n symlink_fd = open(\"/dev/shm/my_wal_dir_symlink\", O_RDONLY);\n if (symlink_fd < 0) {\n perror(\"open1\");\n return EXIT_FAILURE;\n }\n\n file_fd = open(\"/home/tmunro/junk/my_wal_dir/foo\", O_RDWR);\n if (file_fd < 0) {\n perror(\"open2\");\n return EXIT_FAILURE;\n }\n\n for (int i = 0; i < 4; ++i) {\n if (pwrite(file_fd, \"hello world\", 10, 0) != 10) {\n perror(\"pwrite\");\n return EXIT_FAILURE;\n }\n if (syncfs(symlink_fd) < 0) {\n perror(\"syncfs\");\n return EXIT_FAILURE;\n }\n sleep(1);\n }\n return EXIT_SUCCESS;\n}\ntmunro@x1:~/junk$ cc test.c\ntmunro@x1:~/junk$ ./a.out\n\nWhile that's running, to prove that it does what we want it to do,\nI'll first find out where foo lives on the disk:\n\ntmunro@x1:~/junk$ /sbin/xfs_bmap my_wal_dir/foo\nmy_wal_dir/foo:\n 0: [0..7]: 242968520..242968527\n\nNow I'll trace the writes going to block 242968520, and start the program again:\n\ntmunro@x1:~/junk$ sudo btrace /dev/nvme0n1p2 | grep 242968520\n259,0 4 93 4.157000669 724924 A W 
244019144 + 8 <-\n(259,2) 242968520\n259,0 2 155 5.158446989 718635 A W 244019144 + 8 <-\n(259,2) 242968520\n259,0 7 23 6.163765728 724924 A W 244019144 + 8 <-\n(259,2) 242968520\n259,0 7 30 7.169112683 724924 A W 244019144 + 8 <-\n(259,2) 242968520\n\n\n",
"msg_date": "Fri, 19 Mar 2021 00:05:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/18 19:19, Thomas Munro wrote:\n> On Thu, Mar 18, 2021 at 8:52 PM Paul Guo <guopa@vmware.com> wrote:\n>> About the syncfs patch, my first impression on the guc name sync_after_crash\n>> is that it is a boolean type. Not sure about other people's feeling. Do you guys think\n>> It is better to rename it to a clearer name like sync_method_after_crash or others?\n> \n> Works for me. Here is a new version like that, also including the\n> documentation change discussed with Fujii-san, and a couple of\n> cosmetic changes.\n\nThanks for updating the patch!\n\n+ database cluster that did not shut down cleanly, including copies\n+ created with pg_basebackup.\n\n\"pg_basebackup\" should be \"<application>pg_basebackup</application>\"?\n\n+\t\twhile ((de = ReadDir(dir, \"pg_tblspc\")))\n\nThe comment of SyncDataDirectory() says \"Errors are logged but not\nconsidered fatal\". So ReadDirExtended() with LOG level should be used\nhere, instead?\n\nIsn't it better to call CHECK_FOR_INTERRUPTS() in this loop?\n\n+\tfd = open(path, O_RDONLY);\n\nFor current use, it's not harmful to use open() and close(). But isn't\nit safer to use OpenTransientFile() and CloseTransientFile(), instead?\nBecause do_syncfs() may be used for other purposes in the future.\n\n+\tif (syncfs(fd) < 0)\n+\t\telog(LOG, \"syncfs failed for %s: %m\", path);\n\nAccording to the message style guide, this message should be something\nlike \"could not sync filesystem for \\\"%s\\\": %m\"?\n\nI confirmed that no error was reported when crash recovery started with\nsyncfs, in old Linux. I should also confirm that no error is reported in that\ncase in Linux 5.8+, but I don't have that environment. So I've not tested\nthis feature in Linux 5.8+....\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 18 Mar 2021 22:12:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 11:19:13PM +1300, Thomas Munro wrote:\n> On Thu, Mar 18, 2021 at 8:52 PM Paul Guo <guopa@vmware.com> wrote:\n> > About the syncfs patch, my first impression on the guc name sync_after_crash\n> > is that it is a boolean type. Not sure about other people's feeling. Do you guys think\n> > It is better to rename it to a clearer name like sync_method_after_crash or others?\n> \n> Works for me. Here is a new version like that, also including the\n> documentation change discussed with Fujii-san, and a couple of\n> cosmetic changes.\n\nAre we sure we want to use the word \"crash\" here? I don't remember\nseeing it used anywhere else in our user interface. I guess it is\n\"crash recovery\".\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 18 Mar 2021 09:54:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Thu, Mar 18, 2021 at 09:54:11AM -0400, Bruce Momjian wrote:\n> On Thu, Mar 18, 2021 at 11:19:13PM +1300, Thomas Munro wrote:\n> > On Thu, Mar 18, 2021 at 8:52 PM Paul Guo <guopa@vmware.com> wrote:\n> > > About the syncfs patch, my first impression on the guc name sync_after_crash\n> > > is that it is a boolean type. Not sure about other people's feeling. Do you guys think\n> > > It is better to rename it to a clearer name like sync_method_after_crash or others?\n> > \n> > Works for me. Here is a new version like that, also including the\n> > documentation change discussed with Fujii-san, and a couple of\n> > cosmetic changes.\n> \n> Are we sure we want to use the word \"crash\" here? I don't remember\n> seeing it used anywhere else in our user interface. I guess it is\n> \"crash recovery\".\n\nMaybe call it \"recovery_sync_method\"?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 18 Mar 2021 10:03:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/18 23:03, Bruce Momjian wrote:\n>> Are we sure we want to use the word \"crash\" here? I don't remember\n>> seeing it used anywhere else in our user interface.\n\nWe have GUC restart_after_crash.\n\n\n> I guess it is\n>> \"crash recovery\".\n> \n> Maybe call it \"recovery_sync_method\"?\n\n+1. This name sounds good to me. Or recovery_init_sync_method, because that\nsync happens in the initial phase of recovery.\n\nAnother idea from a different angle is data_directory_sync_method. If we adopt\nthis, we can easily extend this feature for other use cases (other than sync at\nthe beginning of recovery) without changing the name.\nI'm not sure if such cases actually exist, though.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Mar 2021 01:50:22 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 5:50 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/03/18 23:03, Bruce Momjian wrote:\n> >> Are we sure we want to use the word \"crash\" here? I don't remember\n> >> seeing it used anywhere else in our user interface.\n>\n> We have GUC restart_after_crash.\n>\n> > I guess it is\n> >> \"crash recovery\".\n> >\n> > Maybe call it \"recovery_sync_method\"\n> +1. This name sounds good to me. Or recovery_init_sync_method, because that\n> sync happens in the initial phase of recovery.\n\nYeah, I was trying to fit the existing pattern\n{restart,remove_temp_files}_after_crash. But\nrecovery_init_sync_method also sounds good to me. I prefer the\nversion with \"init\"... without \"init\", people might get the wrong idea\nabout what this controls, so let's try that. Done in the attached\nversion.\n\n> Another idea from different angle is data_directory_sync_method. If we adopt\n> this, we can easily extend this feature for other use cases (other than sync at\n> the beginning of recovery) without changing the name.\n> I'm not sure if such cases actually exist, though.\n\nI can't imagine what -- it's like using a sledge hammer to crack a nut.\n\n(I am aware of a semi-related idea: use the proposed fsinfo() Linux\nsystem call to read the filesystem-wide error counter at every\ncheckpoint to see if anything bad happened that Linux forgot to tell\nus about through the usual channels. That's the same internal\nmechanism used by syncfs() to report errors, but last I checked it\nhadn't been committed yet. 
I don't think that'd share anything with\nthis code though.)\n\n From your earlier email:\n\nOn Fri, Mar 19, 2021 at 2:12 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> + database cluster that did not shut down cleanly, including copies\n> + created with pg_basebackup.\n>\n> \"pg_basebackup\" should be \"<application>pg_basebackup</application>\"?\n\nFixed.\n\n> + while ((de = ReadDir(dir, \"pg_tblspc\")))\n>\n> The comment of SyncDataDirectory() says \"Errors are logged but not\n> considered fatal\". So ReadDirExtended() with LOG level should be used\n> here, instead?\n\nFixed.\n\n> Isn't it better to call CHECK_FOR_INTERRUPTS() in this loop?\n\nHow could this be useful?\n\n> + fd = open(path, O_RDONLY);\n>\n> For current use, it's not harmful to use open() and close(). But isn't\n> it safer to use OpenTransientFile() and CloseTransientFile(), instead?\n\nOk, done, for consistency with other code.\n\n> Because do_syncfs() may be used for other purpose in the future.\n\nI hope not :-)\n\n> + if (syncfs(fd) < 0)\n> + elog(LOG, \"syncfs failed for %s: %m\", path);\n>\n> According to the message style guide, this message should be something\n> like \"could not sync filesystem for \\\"%s\\\": %m\"?\n\nFixed.\n\n> I confirmed that no error was reported when crash recovery started with\n> syncfs, in old Linux. I should also confirm that no error is reported in that\n> case in Linux 5.8+, but I don't have that environement. So I've not tested\n> this feature in Linux 5.8+....\n\nI have a Linux 5.10 system. 
Here's a trace of SyncDataDirectory on a\nsystem that has two tablespaces and has a symlink for pg_wal:\n\n[pid 3861224] lstat(\"pg_wal\", {st_mode=S_IFLNK|0777, st_size=11, ...}) = 0\n[pid 3861224] openat(AT_FDCWD, \".\", O_RDONLY) = 4\n[pid 3861224] syncfs(4) = 0\n[pid 3861224] close(4) = 0\n[pid 3861224] openat(AT_FDCWD, \"pg_tblspc\",\nO_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4\n[pid 3861224] fstat(4, {st_mode=S_IFDIR|0700, st_size=32, ...}) = 0\n[pid 3861224] getdents64(4, 0x55627e18fb60 /* 4 entries */, 32768) = 112\n[pid 3861224] openat(AT_FDCWD, \"pg_tblspc/16406\", O_RDONLY) = 5\n[pid 3861224] syncfs(5) = 0\n[pid 3861224] close(5) = 0\n[pid 3861224] openat(AT_FDCWD, \"pg_tblspc/16407\", O_RDONLY) = 5\n[pid 3861224] syncfs(5) = 0\n[pid 3861224] close(5) = 0\n[pid 3861224] getdents64(4, 0x55627e18fb60 /* 0 entries */, 32768) = 0\n[pid 3861224] close(4) = 0\n[pid 3861224] openat(AT_FDCWD, \"pg_wal\", O_RDONLY) = 4\n[pid 3861224] syncfs(4) = 0\n[pid 3861224] close(4) = 0\n\nTo see how it looks when syncfs() fails, I added a fake implementation\nthat fails with EUCLEAN on every second call, and then the output\nlooks like this:\n\n...\n1616111356.663 startup 3879272 LOG: database system was interrupted;\nlast known up at 2021-03-19 12:48:33 NZDT\n1616111356.663 startup 3879272 LOG: could not sync filesystem for\n\"pg_tblspc/16406\": Structure needs cleaning\n1616111356.663 startup 3879272 LOG: could not sync filesystem for\n\"pg_wal\": Structure needs cleaning\n1616111356.663 startup 3879272 LOG: database system was not properly\nshut down; automatic recovery in progress\n...\n\nA more common setup with no tablespaces and pg_wal as a non-symlink looks like:\n\n[pid 3861448] lstat(\"pg_wal\", {st_mode=S_IFDIR|0700, st_size=316, ...}) = 0\n[pid 3861448] openat(AT_FDCWD, \".\", O_RDONLY) = 4\n[pid 3861448] syncfs(4) = 0\n[pid 3861448] close(4) = 0\n[pid 3861448] openat(AT_FDCWD, \"pg_tblspc\",\nO_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4\n[pid 3861448] 
fstat(4, {st_mode=S_IFDIR|0700, st_size=6, ...}) = 0\n[pid 3861448] getdents64(4, 0x55764beb0b60 /* 2 entries */, 32768) = 48\n[pid 3861448] getdents64(4, 0x55764beb0b60 /* 0 entries */, 32768) = 0\n[pid 3861448] close(4)\n\nThe alternative fsync() mode is (unsurprisingly) much longer.\n\nThanks for the reviews!\n\nPS: For illustration/discussion, I've also attached a \"none\" patch. I\nalso couldn't resist rebasing my \"wal\" mode patch, which I plan to\npropose for PG15 because there is not enough time left for this\nrelease.",
"msg_date": "Fri, 19 Mar 2021 14:16:19 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 2:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> PS: For illustration/discussion, I've also attached a \"none\" patch. I\n> also couldn't resist rebasing my \"wal\" mode patch, which I plan to\n> propose for PG15 because there is not enough time left for this\n> release.\n\nErm... I attached the wrong version by mistake. Here's a better one.\n(Note: I'm not expecting any review of the 0003 patch in this CF, I\njust wanted to share the future direction I'd like to consider for\nthis problem.)",
"msg_date": "Fri, 19 Mar 2021 14:37:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/19 10:37, Thomas Munro wrote:\n> On Fri, Mar 19, 2021 at 2:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> PS: For illustration/discussion, I've also attached a \"none\" patch. I\n>> also couldn't resist rebasing my \"wal\" mode patch, which I plan to\n>> propose for PG15 because there is not enough time left for this\n>> release.\n> \n> Erm... I attached the wrong version by mistake. Here's a better one.\n\nThanks for updating the patch! It looks good to me!\nI have one minor comment for the patch.\n\n+\t\telog(LOG, \"could not open %s: %m\", path);\n+\t\treturn;\n+\t}\n+\tif (syncfs(fd) < 0)\n+\t\telog(LOG, \"could not sync filesystem for \\\"%s\\\": %m\", path);\n\nSince these are neither internal errors nor low-level debug messages, ereport() should be used for them rather than elog()? For example,\n\n\t\tereport(LOG,\n\t\t\t\t(errcode_for_file_access(),\n\t\t\t\t errmsg(\"could not open \\\"%s\\\": %m\", path)))\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Mar 2021 11:22:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/19 11:22, Fujii Masao wrote:\n> \n> \n> On 2021/03/19 10:37, Thomas Munro wrote:\n>> On Fri, Mar 19, 2021 at 2:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>>> PS: For illustration/discussion, I've also attached a \"none\" patch. I\n>>> also couldn't resist rebasing my \"wal\" mode patch, which I plan to\n>>> propose for PG15 because there is not enough time left for this\n>>> release.\n>>\n>> Erm... I attached the wrong version by mistake. Here's a better one.\n\n0002 patch looks good to me. Thanks!\nI have minor comments.\n\n- * Issue fsync recursively on PGDATA and all its contents, or issue syncfs for\n- * all potential filesystem, depending on recovery_init_sync_method setting.\n+ * Issue fsync recursively on PGDATA and all its contents, issue syncfs for\n+ * all potential filesystem, or do nothing, depending on\n+ * recovery_init_sync_method setting.\n\nThe comment in SyncDataDirectory() should be updated so that\nit mentions \"none\" method, as the above?\n\n+ This is only safe if all buffered data is known to have been flushed\n+ to disk already, for example by a tool such as\n+ <application>pg_basebackup</application>. It is not a good idea to\n\nIsn't it better to add something like \"without <literal>--no-sync</literal>\"\nto \"pg_basebackup\" part? Which would prevent users from misunderstanding\nthat pg_basebackup always ensures that whatever options are specified.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Mar 2021 12:08:19 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 3:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> Thanks for updating the patch! It looks good to me!\n> I have one minor comment for the patch.\n>\n> + elog(LOG, \"could not open %s: %m\", path);\n> + return;\n> + }\n> + if (syncfs(fd) < 0)\n> + elog(LOG, \"could not sync filesystem for \\\"%s\\\": %m\", path);\n>\n> Since these are neither internal errors nor low-level debug messages, ereport() should be used for them rather than elog()? For example,\n\nFixed.\n\nI'll let this sit until tomorrow to collect any other feedback or\nobjections, and then push the 0001 patch\n(recovery_init_sync_method=syncfs).\n\nOn Fri, Mar 19, 2021 at 4:08 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> 0002 patch looks good to me. Thanks!\n> I have minor comments.\n\nOk, I made the changes you suggested. Let's see if anyone else would\nlike to vote for or against the concept of the 0002 patch\n(recovery_init_sync_method=none).",
"msg_date": "Fri, 19 Mar 2021 18:28:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/03/19 14:28, Thomas Munro wrote:\n> On Fri, Mar 19, 2021 at 3:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Thanks for updating the patch! It looks good to me!\n>> I have one minor comment for the patch.\n>>\n>> + elog(LOG, \"could not open %s: %m\", path);\n>> + return;\n>> + }\n>> + if (syncfs(fd) < 0)\n>> + elog(LOG, \"could not sync filesystem for \\\"%s\\\": %m\", path);\n>>\n>> Since these are neither internal errors nor low-level debug messages, ereport() should be used for them rather than elog()? For example,\n> \n> Fixed.\n\nThanks! LGTM.\n\n> I'll let this sit until tomorrow to collect any other feedback or\n> objections, and then push the 0001 patch\n> (recovery_init_sync_method=syncfs).\n\nUnderstood.\n\n> On Fri, Mar 19, 2021 at 4:08 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> 0002 patch looks good to me. Thanks!\n>> I have minor comments.\n> \n> Ok, I made the changes you suggested.\n\nThanks! LGTM.\n\n> Let's see if anyone else would\n> like to vote for or against the concept of the 0002 patch\n> (recovery_init_sync_method=none).\n\nAgreed. I also want to hear more opinion about the setting \"none\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 19 Mar 2021 16:34:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 06:28:46PM +1300, Thomas Munro wrote:\n> +++ b/doc/src/sgml/config.sgml\n\n> + <productname>PostgreSQL</productname> will recursively open and fsync\n> + all files in the data directory before crash recovery begins. This\n\nMaybe it should say \"data, tablespace, and WAL directories\", or just \"critical\ndirectories\".\n\n> +\t{\n> +\t\t{\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> +\t\t\tgettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> +\t\t},\n\n\"and tablespaces and WAL\"\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 19 Mar 2021 06:01:34 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On 3/19/21 1:28 AM, Thomas Munro wrote:\n> On Fri, Mar 19, 2021 at 3:23 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> Thanks for updating the patch! It looks good to me!\n>> I have one minor comment for the patch.\n>>\n>> + elog(LOG, \"could not open %s: %m\", path);\n>> + return;\n>> + }\n>> + if (syncfs(fd) < 0)\n>> + elog(LOG, \"could not sync filesystem for \\\"%s\\\": %m\", path);\n>>\n>> Since these are neither internal errors nor low-level debug messages, ereport() should be used for them rather than elog()? For example,\n> \n> Fixed.\n> \n> I'll let this sit until tomorrow to collect any other feedback or\n> objections, and then push the 0001 patch\n> (recovery_init_sync_method=syncfs).\n\nI had a look at the patch and it looks good to me.\n\nShould we mention in the docs that the contents of non-standard symlinks \nwill not be synced, i.e. anything other than tablespaces and pg_wal? Or \ncan we point them to some docs saying not to do that (if such exists)?\n\n> On Fri, Mar 19, 2021 at 4:08 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> 0002 patch looks good to me. Thanks!\n>> I have minor comments.\n> \n> Ok, I made the changes you suggested. Let's see if anyone else would\n> like to vote for or against the concept of the 0002 patch\n> (recovery_init_sync_method=none).\n\nIt worries me that this needs to be explicitly \"turned off\" after the \ninitial recovery. Seems like something of a foot gun.\n\nSince we have not offered this functionality before I'm not sure we \nshould rush to introduce it now. For backup solutions that do their own \nsyncing, syncfs() should provide excellent performance so long as the \nfile system is not shared, which is something the user can control (and \nis noted in the docs).\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 19 Mar 2021 09:55:29 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "Thanks Justin and David. Replies to two emails inline:\n\nOn Sat, Mar 20, 2021 at 12:01 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Mar 19, 2021 at 06:28:46PM +1300, Thomas Munro wrote:\n> > +++ b/doc/src/sgml/config.sgml\n>\n> > + <productname>PostgreSQL</productname> will recursively open and fsync\n> > + all files in the data directory before crash recovery begins. This\n>\n> Maybe it should say \"data, tablespace, and WAL directories\", or just \"critical\n> directories\".\n\nFair point. Here's what I went with:\n\n When set to <literal>fsync</literal>, which is the default,\n <productname>PostgreSQL</productname> will recursively open and\n synchronize all files in the data directory before crash\nrecovery\n begins. The search for files will follow symbolic links for the WAL\n directory and each configured tablespace (but not any other symbolic\n links).\n\n> > + {\n> > + {\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> > + gettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> > + },\n>\n> \"and tablespaces and WAL\"\n\nI feel like that's getting too detailed for the GUC description?\n\nOn Sat, Mar 20, 2021 at 2:55 AM David Steele <david@pgmasters.net> wrote:\n> I had a look at the patch and it looks good to me.\n\nThanks!\n\n> Should we mention in the docs that the contents of non-standard symlinks\n> will not be synced, i.e. anything other than tablespaces and pg_wal? Or\n> can we point them to some docs saying not to do that (if such exists)?\n\nGood idea. 
See above for the adjustment I went with to describe the\ntraditional behaviour, and then I also made a similar change to the\nsyncfs part, which, I hope, manages to convey that the new mode\nmatches the existing policy on symlinks:\n\n On Linux, <literal>syncfs</literal> may be used instead, to ask the\n operating system to synchronize the whole file systems that contain the\n data directory, the WAL file and each tablespace (but not any other\n file systems that may be reachable through symbolic links).\n\nI thought about adding some text along the lines that such symlinks\nare not expected, but I think you're right that what we really need is\na good place to point to. I mean, generally you can't mess around\nwith the files managed by PostgreSQL and expect everything to keep\nworking correctly, but it wouldn't hurt to make an explicit statement\nabout symlinks and where they're allowed (or maybe there is one\nalready and I failed to find it). There are hints though, like\npg_basebackup's documentation which tells you it won't follow or\npreserve them in general, but... hmm, it also contemplates various\nspecial subdirectories (pg_dynshmem, pg_notify, pg_replslot, ...) that\nmight be symlinks without saying why.\n\n> > Ok, I made the changes you suggested. Let's see if anyone else would\n> > like to vote for or against the concept of the 0002 patch\n> > (recovery_init_sync_method=none).\n>\n> It worries me that this needs to be explicitly \"turned off\" after the\n> initial recovery. Seems like something of a foot gun.\n>\n> Since we have not offered this functionality before I'm not sure we\n> should rush to introduce it now. For backup solutions that do their own\n> syncing, syncfs() should provide excellent performance so long as the\n> file system is not shared, which is something the user can control (and\n> is noted in the docs).\n\nThanks. 
I'm leaving the 0002 patch \"on ice\" until someone can explain\nhow you're supposed to use it without putting a hole in your foot.\n\nI pushed the 0001 patch. Thanks to all who reviewed. Of course,\nfurther documentation improvement patches are always welcome.\n\n(One silly thing I noticed is that our comments generally think\n\"filesystem\" is one word, but our documentation always has a space;\nthis patch followed the local convention in both cases!)\n\n\n",
"msg_date": "Sat, 20 Mar 2021 12:16:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On 3/19/21 7:16 PM, Thomas Munro wrote:\n> Thanks Justin and David. Replies to two emails inline:\n> \n> Fair point. Here's what I went with:\n> \n> When set to <literal>fsync</literal>, which is the default,\n> <productname>PostgreSQL</productname> will recursively open and\n> synchronize all files in the data directory before crash\n> recovery\n> begins. The search for files will follow symbolic links for the WAL\n> directory and each configured tablespace (but not any other symbolic\n> links).\n> \n\n+1\n\n> I thought about adding some text along the lines that such symlinks\n> are not expected, but I think you're right that what we really need is\n> a good place to point to. I mean, generally you can't mess around\n> with the files managed by PostgreSQL and expect everything to keep\n> working correctly\n\nWRT to symlinks I'm not sure that's fair to say. From PG's perspective \nit's just a dir/file after all. Other than pg_wal I have seen \npg_stat/pg_stat_tmp sometimes symlinked, plus config files, and the log dir.\n\npgBackRest takes a pretty liberal approach here. Were preserve all \ndir/file symlinks no matter where they appear and allow all of them to \nbe remapped on restore.\n\n> but it wouldn't hurt to make an explicit statement\n> about symlinks and where they're allowed (or maybe there is one\n> already and I failed to find it). \n\nI couldn't find it either and I would be in favor of it. For instance, \npgBackRest forbids tablespaces inside PGDATA and when people complain \n(more often then you might imagine) we can just point to the code/docs.\n\n> There are hints though, like\n> pg_basebackup's documentation which tells you it won't follow or\n> preserve them in general, but... hmm, it also contemplates various\n> special subdirectories (pg_dynshmem, pg_notify, pg_replslot, ...) that\n> might be symlinks without saying why.\n\nRight, pg_dynshmem is another one that I've seen symlinked. 
Some things \nare nice to have on fast storage. pg_notify and pg_replslot are similar \nsince they get written to a lot in certain configurations.\n\n>> It worries me that this needs to be explicitly \"turned off\" after the\n>> initial recovery. Seems like something of a foot gun.\n>>\n>> Since we have not offered this functionality before I'm not sure we\n>> should rush to introduce it now. For backup solutions that do their own\n>> syncing, syncfs() should provide excellent performance so long as the\n>> file system is not shared, which is something the user can control (and\n>> is noted in the docs).\n> \n> Thanks. I'm leaving the 0002 patch \"on ice\" until someone can explain\n> how you're supposed to use it without putting a hole in your foot.\n\n+1\n\n> (One silly thing I noticed is that our comments generally think\n> \"filesystem\" is one word, but our documentation always has a space;\n> this patch followed the local convention in both cases!)\n\nPersonally I prefer \"file system\".\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Fri, 19 Mar 2021 20:30:54 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Wed, 10 Mar 2021 at 20:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> So this means that in less-than-bleeding-edge kernels, syncfs can\n> only be regarded as a dangerous toy. If we expose an option to use\n> it, there had better be large blinking warnings in the docs.\n\nIsn't that true for fsync and everything else related on\nless-than-bleeding-edge kernels anyways?\n\n-- \ngreg\n\n\n",
"msg_date": "Sun, 21 Mar 2021 03:55:02 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Sat, Mar 20, 2021 at 12:16:27PM +1300, Thomas Munro wrote:\n> > > + {\n> > > + {\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> > > + gettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> > > + },\n\nIs there any reason why this can't be PGC_SIGHUP ?\n(Same as restart_after_crash, remove_temp_files_after_crash)\n\nAs neat as it'd be, I am not expecting the recovery process to reload the\nconfiguration and finish fast if I send it HUP.\n\nWhile I'm looking, it's not clear why this needs to be PGC_POSTMASTER.\ndata_sync_retry - but see b3a156858\n\nThis one isn't documented as requiring a restart:\nmax_logical_replication_workers.\n\nignore_invalid_pages could probably be SIGHUP, but it's intended to be used as\na commandline option, not in a config file.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 25 May 2021 19:13:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, May 25, 2021 at 07:13:59PM -0500, Justin Pryzby wrote:\n> This one isn't documented as requiring a restart:\n> max_logical_replication_workers.\n\nThere is much more than meets the eye here, and this is unrelated to\nthis thread, so let's discuss that on a separate thread. I'll start a\nnew one with everything I found. \n--\nMichael",
"msg_date": "Wed, 26 May 2021 10:16:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, May 25, 2021 at 07:13:59PM -0500, Justin Pryzby wrote:\n> On Sat, Mar 20, 2021 at 12:16:27PM +1300, Thomas Munro wrote:\n> > > > + {\n> > > > + {\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> > > > + gettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> > > > + },\n> \n> Is there any reason why this can't be PGC_SIGHUP ?\n> (Same as restart_after_crash, remove_temp_files_after_crash)\n\nI can't see any reason why this is nontrivial.\nWhat about data_sync_retry?\n\ncommit 2d2d2e10f99548c486b50a1ce095437d558e8649\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat May 29 13:41:14 2021 -0500\n\n Change recovery_init_sync_method to PGC_SIGHUP..\n \n The setting has no effect except during startup.\n But it's nice to be able to change the setting dynamically, which is expected\n to be pretty useful to an admin following crash recovery when turning the\n service off and on again is not so appealing.\n \n See also: 2941138e6, 61752afb2\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex d8c0fd3315..ab9916eac5 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -9950,7 +9950,8 @@ dynamic_library_path = 'C:\\tools\\postgresql;H:\\my_project\\lib;$libdir'\n appear only in kernel logs.\n </para>\n <para>\n- This parameter can only be set at server start.\n+ This parameter can only be set in the <filename>postgresql.conf</filename>\n+ file or on the server command line.\n </para>\n </listitem>\n </varlistentry>\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 87bc688704..796b4e83ce 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -4945,7 +4945,7 @@ static struct config_enum ConfigureNamesEnum[] =\n \t},\n \n \t{\n-\t\t{\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n+\t\t{\"recovery_init_sync_method\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\n 
\t\t\tgettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n \t\t},\n \t\t&recovery_init_sync_method,\ndiff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\nindex ddbb6dc2be..9c4c4a9eec 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -774,7 +774,6 @@\n \t\t\t\t\t# data?\n \t\t\t\t\t# (change requires restart)\n #recovery_init_sync_method = fsync\t# fsync, syncfs (Linux 5.8+)\n-\t\t\t\t\t# (change requires restart)\n \n \n #------------------------------------------------------------------------------\n\n\n",
"msg_date": "Sat, 29 May 2021 14:23:21 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Sat, May 29, 2021 at 02:23:21PM -0500, Justin Pryzby wrote:\n> On Tue, May 25, 2021 at 07:13:59PM -0500, Justin Pryzby wrote:\n>> On Sat, Mar 20, 2021 at 12:16:27PM +1300, Thomas Munro wrote:\n>> > > > + {\n>> > > > + {\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n>> > > > + gettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n>> > > > + },\n>> \n>> Is there any reason why this can't be PGC_SIGHUP ?\n> \n> I can't see any reason why this is nontrivial.\n\nI think that we had better let recovery_init_sync_method as\nPGC_POSTMASTER, to stay on the safe side. SyncDataDirectory() only\ngets called now in the backend code by the startup process after a\ncrash at the beginning of recovery, so switching to PGC_SIGHUP would\nhave zero effect to begin with. Now, let's not forget that\nSyncDataDirectory() is a published API, and if anything exterior were\nto call that, it does not seem right to me to make that its behavior\nreloadable at will.\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 16:24:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Jun 04, 2021 at 04:24:02PM +0900, Michael Paquier wrote:\n> On Sat, May 29, 2021 at 02:23:21PM -0500, Justin Pryzby wrote:\n> > On Tue, May 25, 2021 at 07:13:59PM -0500, Justin Pryzby wrote:\n> >> On Sat, Mar 20, 2021 at 12:16:27PM +1300, Thomas Munro wrote:\n> >> > > > + {\n> >> > > > + {\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> >> > > > + gettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> >> > > > + },\n> >> \n> >> Is there any reason why this can't be PGC_SIGHUP ?\n> > \n> > I can't see any reason why this is nontrivial.\n> \n> I think that we had better let recovery_init_sync_method as\n> PGC_POSTMASTER, to stay on the safe side. SyncDataDirectory() only\n> gets called now in the backend code by the startup process after a\n> crash at the beginning of recovery, so switching to PGC_SIGHUP would\n> have zero effect to begin with. Now, let's not forget that\n> SyncDataDirectory() is a published API, and if anything exterior were\n> to call that, it does not seem right to me to make that its behavior\n> reloadable at will.\n\nYou said switching to SIGHUP \"would have zero effect\"; but, actually it allows\nan admin who's DB took a long time in recovery/startup to change the parameter\nwithout shutting down the service. This mitigates the downtime if it crashes\nagain. I think that's at least 50% of how this feature might end up being\nused.\n\nIt might be \"safer\" if fsync were PGC_POSTMASTER, but it's allowed to change at\nruntime that parameter, which is much more widely applicable. I've already\nmentioned restart_after_crash, and remove_temp_files_after_crash.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 4 Jun 2021 09:39:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "Thomas, could you comment on this ?\n\nOn Sat, May 29, 2021 at 02:23:21PM -0500, Justin Pryzby wrote:\n> On Tue, May 25, 2021 at 07:13:59PM -0500, Justin Pryzby wrote:\n> > On Sat, Mar 20, 2021 at 12:16:27PM +1300, Thomas Munro wrote:\n> > > > > + {\n> > > > > + {\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> > > > > + gettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> > > > > + },\n> > \n> > Is there any reason why this can't be PGC_SIGHUP ?\n> > (Same as restart_after_crash, remove_temp_files_after_crash)\n> \n> I can't see any reason why this is nontrivial.\n> What about data_sync_retry?\n> \n> commit 2d2d2e10f99548c486b50a1ce095437d558e8649\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sat May 29 13:41:14 2021 -0500\n> \n> Change recovery_init_sync_method to PGC_SIGHUP..\n> \n> The setting has no effect except during startup.\n> But it's nice to be able to change the setting dynamically, which is expected\n> to be pretty useful to an admin following crash recovery when turning the\n> service off and on again is not so appealing.\n> \n> See also: 2941138e6, 61752afb2\n> \n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index d8c0fd3315..ab9916eac5 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -9950,7 +9950,8 @@ dynamic_library_path = 'C:\\tools\\postgresql;H:\\my_project\\lib;$libdir'\n> appear only in kernel logs.\n> </para>\n> <para>\n> - This parameter can only be set at server start.\n> + This parameter can only be set in the <filename>postgresql.conf</filename>\n> + file or on the server command line.\n> </para>\n> </listitem>\n> </varlistentry>\n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index 87bc688704..796b4e83ce 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -4945,7 +4945,7 @@ static struct config_enum 
ConfigureNamesEnum[] =\n> \t},\n> \n> \t{\n> -\t\t{\"recovery_init_sync_method\", PGC_POSTMASTER, ERROR_HANDLING_OPTIONS,\n> +\t\t{\"recovery_init_sync_method\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\n> \t\t\tgettext_noop(\"Sets the method for synchronizing the data directory before crash recovery.\"),\n> \t\t},\n> \t\t&recovery_init_sync_method,\n> diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\n> index ddbb6dc2be..9c4c4a9eec 100644\n> --- a/src/backend/utils/misc/postgresql.conf.sample\n> +++ b/src/backend/utils/misc/postgresql.conf.sample\n> @@ -774,7 +774,6 @@\n> \t\t\t\t\t# data?\n> \t\t\t\t\t# (change requires restart)\n> #recovery_init_sync_method = fsync\t# fsync, syncfs (Linux 5.8+)\n> -\t\t\t\t\t# (change requires restart)\n> \n> \n> #------------------------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 17 Jun 2021 20:11:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "\n\nOn 2021/06/04 23:39, Justin Pryzby wrote:\n> You said switching to SIGHUP \"would have zero effect\"; but, actually it allows\n> an admin who's DB took a long time in recovery/startup to change the parameter\n> without shutting down the service. This mitigates the downtime if it crashes\n> again. I think that's at least 50% of how this feature might end up being\n> used.\n\nYes, it would have an effect when the server is automatically restarted\nafter crash when restart_after_crash is enabled. At least for me +1 to\nyour proposed change.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 18 Jun 2021 18:18:59 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 06:18:59PM +0900, Fujii Masao wrote:\n> On 2021/06/04 23:39, Justin Pryzby wrote:\n>> You said switching to SIGHUP \"would have zero effect\"; but, actually it allows\n>> an admin who's DB took a long time in recovery/startup to change the parameter\n>> without shutting down the service. This mitigates the downtime if it crashes\n>> again. I think that's at least 50% of how this feature might end up being\n>> used.\n> \n> Yes, it would have an effect when the server is automatically restarted\n> after crash when restart_after_crash is enabled. At least for me +1 to\n> your proposed change.\n\nGood point about restart_after_crash, I did not consider that.\nSwitching recovery_init_sync_method to SIGHUP could be helpful with\nthat.\n--\nMichael",
"msg_date": "Tue, 22 Jun 2021 13:45:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 1:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Thomas, could you comment on this ?\n\nSorry, I missed that. It is initially a confusing proposal, but after\ntrying it out (that is: making recovery_init_sync_method PGC_SIGHUP\nand testing a scenario where you want to make the next crash use it\nthat way and without the change), I agree. +1 from me.\n\n\n",
"msg_date": "Tue, 22 Jun 2021 17:01:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
},
{
"msg_contents": "On Tue, Jun 22, 2021 at 5:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jun 18, 2021 at 1:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Thomas, could you comment on this ?\n>\n> Sorry, I missed that. It is initially a confusing proposal, but after\n> trying it out (that is: making recovery_init_sync_method PGC_SIGHUP\n> and testing a scenario where you want to make the next crash use it\n> that way and without the change), I agree. +1 from me.\n\n... and pushed.\n\n\n",
"msg_date": "Mon, 28 Jun 2021 15:51:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fdatasync performance problem with large number of DB files"
}
]
[
{
"msg_contents": "Hi,\n\nWhile discussing freezing tuples during CTAS[1], we found that\nheap_insert() with HEAP_INSERT_FROZEN brings performance degradation.\nFor instance, with Paul's patch that sets HEAP_INSERT_FROZEN to CTAS,\nit took 12 sec whereas the code without the patch took 10 sec with the\nfollowing query:\n\ncreate table t1 (a, b, c, d) as select i,i,i,i from\ngenerate_series(1,20000000) i;\n\nI've done a simple benchmark of REFRESH MATERIALIZED VIEW with the\nfollowing queries:\n\ncreate table source as select generate_series(1, 50000000);\ncreate materialized view mv as select * from source;\nrefresh materialized view mv;\n\nThe execution time of REFRESH MATERIALIZED VIEW are:\n\nw/ HEAP_INSERT_FROZEN flag : 42 sec\nw/o HEAP_INSERT_FROZEN flag : 33 sec\n\nAfter investigation, I found that such performance degradation happens\non only HEAD code. It seems to me that commit 39b66a91b (and\n7db0cd2145) is relevant that has heap_insert() set VM bits and\nPD_ALL_VISIBLE if HEAP_INSERT_FROZEN is specified (so CCing Tomas\nVondra and authors). Since heap_insert() sets PD_ALL_VISIBLE to the\npage when inserting a tuple for the first time on the page (around\nL2133 in heapam.c), every subsequent heap_insert() on the page reads\nand pins a VM buffer (see RelationGetBufferForTuple()). Reading and\npinning a VM buffer for every insertion is a very high cost. This\ndoesn't happen in heap_multi_insert() since it sets VM buffer after\nfilling the heap page with tuples. Therefore, there is no such\nperformance degradation between COPY and COPY FREEZE if they use\nheap_multi_insert() (i.g., CIM_MULTI). Paul also reported it in that\nthread.\n\nAs far as I read the thread and commit messages related to those\ncommits, they are intended to COPY FREEZE and I could not find any\ndiscussion and mention about REFRESH MATERIALIZED VIEW. 
So I'm\nconcerned we didn't expect such performance degradation.\n\nSetting VM bits and PD_ALL_VISIBLE at REFRESH MATERIALIZED VIEW would\nbe a good choice in some cases. Since materialized views are read-only\nVM bits never be cleared after creation. So it might make sense for\nusers to pay a cost to set them at refresh (note that CREATE\nMATERIALIZED VIEW doesn’t set VM bits since it’s internally treated as\nCTAS). On the other hand, given this big performance degradation\n(about 20%) users might want to rely on autovacuum so that VM bits are\nset in the background. However, unlike COPY, there is no way to\ndisable freezing tuples for REFRESH MATERIALIZED VIEW. So every user\nwould be imposed on those costs and affected by that performance\ndegradation. I’m concerned that it could be a problem.\n\nWhat do you think?\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/flat/FB1F5E2D-CBD1-4645-B74C-E0A1BFAE4AC8%40vmware.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 11 Mar 2021 17:44:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-03-11 17:44:37 +0900, Masahiko Sawada wrote:\n> The execution time of REFRESH MATERIALIZED VIEW are:\n> \n> w/ HEAP_INSERT_FROZEN flag : 42 sec\n> w/o HEAP_INSERT_FROZEN flag : 33 sec\n> \n> After investigation, I found that such performance degradation happens\n> on only HEAD code. It seems to me that commit 39b66a91b (and\n> 7db0cd2145) is relevant that has heap_insert() set VM bits and\n> PD_ALL_VISIBLE if HEAP_INSERT_FROZEN is specified (so CCing Tomas\n> Vondra and authors). Since heap_insert() sets PD_ALL_VISIBLE to the\n> page when inserting a tuple for the first time on the page (around\n> L2133 in heapam.c), every subsequent heap_insert() on the page reads\n> and pins a VM buffer (see RelationGetBufferForTuple()). Reading and\n> pinning a VM buffer for every insertion is a very high cost. This\n> doesn't happen in heap_multi_insert() since it sets VM buffer after\n> filling the heap page with tuples. Therefore, there is no such\n> performance degradation between COPY and COPY FREEZE if they use\n> heap_multi_insert() (i.g., CIM_MULTI). Paul also reported it in that\n> thread.\n\nProbably worth adding as an open item for 14.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 11 Mar 2021 10:13:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 3:13 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-11 17:44:37 +0900, Masahiko Sawada wrote:\n> > The execution time of REFRESH MATERIALIZED VIEW are:\n> >\n> > w/ HEAP_INSERT_FROZEN flag : 42 sec\n> > w/o HEAP_INSERT_FROZEN flag : 33 sec\n> >\n> > After investigation, I found that such performance degradation happens\n> > on only HEAD code. It seems to me that commit 39b66a91b (and\n> > 7db0cd2145) is relevant that has heap_insert() set VM bits and\n> > PD_ALL_VISIBLE if HEAP_INSERT_FROZEN is specified (so CCing Tomas\n> > Vondra and authors). Since heap_insert() sets PD_ALL_VISIBLE to the\n> > page when inserting a tuple for the first time on the page (around\n> > L2133 in heapam.c), every subsequent heap_insert() on the page reads\n> > and pins a VM buffer (see RelationGetBufferForTuple()). Reading and\n> > pinning a VM buffer for every insertion is a very high cost. This\n> > doesn't happen in heap_multi_insert() since it sets VM buffer after\n> > filling the heap page with tuples. Therefore, there is no such\n> > performance degradation between COPY and COPY FREEZE if they use\n> > heap_multi_insert() (i.g., CIM_MULTI). Paul also reported it in that\n> > thread.\n>\n> Probably worth adding as an open item for 14.\n\nI've added it to PostgreSQL 14 Open Items.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 12 Mar 2021 13:32:24 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": ".\n\nOn Thu, Mar 11, 2021 at 5:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> While discussing freezing tuples during CTAS[1], we found that\n> heap_insert() with HEAP_INSERT_FROZEN brings performance degradation.\n> For instance, with Paul's patch that sets HEAP_INSERT_FROZEN to CTAS,\n> it took 12 sec whereas the code without the patch took 10 sec with the\n> following query:\n>\n> create table t1 (a, b, c, d) as select i,i,i,i from\n> generate_series(1,20000000) i;\n>\n> I've done a simple benchmark of REFRESH MATERIALIZED VIEW with the\n> following queries:\n>\n> create table source as select generate_series(1, 50000000);\n> create materialized view mv as select * from source;\n> refresh materialized view mv;\n>\n> The execution time of REFRESH MATERIALIZED VIEW are:\n>\n> w/ HEAP_INSERT_FROZEN flag : 42 sec\n> w/o HEAP_INSERT_FROZEN flag : 33 sec\n>\n> After investigation, I found that such performance degradation happens\n> on only HEAD code. It seems to me that commit 39b66a91b (and\n> 7db0cd2145) is relevant that has heap_insert() set VM bits and\n> PD_ALL_VISIBLE if HEAP_INSERT_FROZEN is specified (so CCing Tomas\n> Vondra and authors). Since heap_insert() sets PD_ALL_VISIBLE to the\n> page when inserting a tuple for the first time on the page (around\n> L2133 in heapam.c), every subsequent heap_insert() on the page reads\n> and pins a VM buffer (see RelationGetBufferForTuple()).\n\nIIUC RelationGetBufferForTuple() pins vm buffer if the page is\nall-visible since the caller might clear vm bit during operation. But\nit's not necessarily true in HEAP_FROZEN_INSERT case. When inserting\nHEAP_FROZEN_INSERT, we might set PD_ALL_VISIBLE flag and all-visible\nbit but never clear those flag and bit during insertion. 
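That observation can be condensed into a small illustrative predicate (plain Python with hypothetical names; the real logic is C inside RelationGetBufferForTuple()):

```python
# Hypothetical condensation of when a VM buffer pin is actually needed;
# function and parameter names are illustrative, not PostgreSQL's API.

def need_vm_buffer_pin(frozen_insert, page_is_empty, page_all_visible):
    if frozen_insert:
        # A frozen insert never clears the all-visible bit, so the VM
        # buffer is only needed to set the bit on a fresh, empty page.
        return page_is_empty
    # A normal insert may have to clear an existing all-visible bit.
    return page_all_visible

# Subsequent frozen inserts into an already all-visible page: no pin.
assert need_vm_buffer_pin(True, False, True) is False
# First frozen insert into an empty page: pin, to set all-frozen.
assert need_vm_buffer_pin(True, True, False) is True
# Non-frozen insert into an all-visible page: pin, to clear the bit.
assert need_vm_buffer_pin(False, False, True) is True
```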
Therefore, to\nfix this issue, I think we can have RelationGetBufferForTuple() not\npin the vm buffer if we're inserting a frozen tuple (i.e., the\nHEAP_INSERT_FROZEN case) and the target page is already all-visible.\nIn the HEAP_INSERT_FROZEN case, the only time we need to pin the vm\nbuffer is when the target page is empty. That way, we pin the vm\nbuffer only the first time a frozen tuple is inserted into an empty\npage, then set PD_ALL_VISIBLE on the page and the all-frozen bit on\nthe vm, and also set XLH_INSERT_ALL_FROZEN_SET in the WAL record. For\nfurther insertions, we would not pin the vm buffer as long as we’re\ninserting frozen tuples into the same page.\n\nIf the target page is neither empty nor all-visible, we will not pin\nthe vm buffer, which is fine because if the page has a non-frozen\ntuple we cannot set its bit on the vm during heap_insert(). If all\ntuples on the page are already frozen but PD_ALL_VISIBLE is not set\nfor some reason, we could set the all-frozen bit on the vm, but that\nseems like a bad idea since it requires checking during insertion\nwhether all existing tuples are frozen.\n\nThe attached patch implements the above idea. With this patch, the\nsame performance test took 33 sec.\n\nAlso, I've measured the number of pages read during REFRESH\nMATERIALIZED VIEW using pg_stat_statements. There was a big difference\nin shared_blks_hit:\n\n1. w/ HEAP_INSERT_FROZEN flag (HEAD) : 50221781\n2. w/o HEAP_INSERT_FROZEN flag (HEAD) : 221782\n3. Patched: 443014\n\nSince the 'source' table has 50000000 rows and each heap_insert()\nreads the vm buffer, test 1 read as many vm pages as the number of\ninserted tuples. The value of test 3 is about twice that of test 2,\nbecause heap_insert() reads the vm buffer for the first insertion into\neach page. The table has 221239 blocks.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 12 Apr 2021 15:20:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "At Mon, 12 Apr 2021 15:20:41 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \r\n> .\r\n> \r\n> On Thu, Mar 11, 2021 at 5:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > Hi,\r\n> >\r\n> > While discussing freezing tuples during CTAS[1], we found that\r\n> > heap_insert() with HEAP_INSERT_FROZEN brings performance degradation.\r\n> > For instance, with Paul's patch that sets HEAP_INSERT_FROZEN to CTAS,\r\n> > it took 12 sec whereas the code without the patch took 10 sec with the\r\n> > following query:\r\n> >\r\n> > create table t1 (a, b, c, d) as select i,i,i,i from\r\n> > generate_series(1,20000000) i;\r\n> >\r\n> > I've done a simple benchmark of REFRESH MATERIALIZED VIEW with the\r\n> > following queries:\r\n> >\r\n> > create table source as select generate_series(1, 50000000);\r\n> > create materialized view mv as select * from source;\r\n> > refresh materialized view mv;\r\n> >\r\n> > The execution time of REFRESH MATERIALIZED VIEW are:\r\n> >\r\n> > w/ HEAP_INSERT_FROZEN flag : 42 sec\r\n> > w/o HEAP_INSERT_FROZEN flag : 33 sec\r\n> >\r\n> > After investigation, I found that such performance degradation happens\r\n> > on only HEAD code. It seems to me that commit 39b66a91b (and\r\n> > 7db0cd2145) is relevant that has heap_insert() set VM bits and\r\n> > PD_ALL_VISIBLE if HEAP_INSERT_FROZEN is specified (so CCing Tomas\r\n> > Vondra and authors). Since heap_insert() sets PD_ALL_VISIBLE to the\r\n> > page when inserting a tuple for the first time on the page (around\r\n> > L2133 in heapam.c), every subsequent heap_insert() on the page reads\r\n> > and pins a VM buffer (see RelationGetBufferForTuple()).\r\n> \r\n> IIUC RelationGetBufferForTuple() pins vm buffer if the page is\r\n> all-visible since the caller might clear vm bit during operation. But\r\n> it's not necessarily true in HEAP_FROZEN_INSERT case. 
When inserting\r\n> HEAP_FROZEN_INSERT, we might set PD_ALL_VISIBLE flag and all-visible\r\n> bit but never clear those flag and bit during insertion. Therefore to\r\n> fix this issue, I think we can have RelationGetBufferForTuple() not to\r\n> pin vm buffer if we're inserting a frozen tuple (i.g.,\r\n> HEAP_FROZEN_INSERT case) and the target page is already all-visible.\r\n\r\nIt seems right to me.\r\n\r\n> In HEAP_FROZEN_INSERT, the cases where we need to pin vm buffer would\r\n> be the table is empty. That way, we will pin vm buffer only for the\r\n> first time of inserting frozen tuple into the empty page, then set\r\n> PD_ALL_VISIBLE to the page and all-frozen bit on vm. Also set\r\n> XLH_INSERT_ALL_FROZEN_SET to WAL. At further insertions, we would not\r\n> pin vm buffer as long as we’re inserting a frozen tuple into the same\r\n> page.\r\n> \r\n> If the target page is neither empty nor all-visible we will not pin vm\r\n> buffer, which is fine because if the page has non-frozen tuple we\r\n> cannot set bit on vm during heap_insert(). If all tuples on the page\r\n> are already frozen but PD_ALL_VISIBLE is not set for some reason, we\r\n> would be able to set all-frozen bit on vm but it seems not a good idea\r\n> since it requires checking during insertion if all existing tuples are\r\n> frozen or not.\r\n> \r\n> The attached patch does the above idea. With this patch, the same\r\n> performance tests took 33 sec.\r\n\r\nGreat! The direction of the patch looks fine to me.\r\n\r\n+\t\t * If we're inserting frozen entry into empty page, we will set\r\n+\t\t * all-visible to page and all-frozen on visibility map.\r\n+\t\t */\r\n+\t\tif (PageGetMaxOffsetNumber(page) == 0)\r\n \t\t\tall_frozen_set = true;\r\n\r\nAFAICS the page is always empty when RelationGetBufferForTuple\r\nreturned a valid vmbuffer. 
So the \"if\" should be an \"assert\" instead.\r\n\r\nAnd, the patch changes the value of all_frozen_set to false when the\r\npage was already all-frozen (thus not empty). It would be fine since\r\nwe don't need to change the visibility of the page in that case but\r\nthe variable name is no longer correct. set_all_visible or such?\r\n\r\n> Also, I've measured the number of page read during REFRESH\r\n> MATERIALIZED VIEW using by pg_stat_statements. There were big\r\n> different on shared_blks_hit on pg_stat_statements:\r\n> \r\n> 1. w/ HEAP_INSERT_FROZEN flag (HEAD) : 50221781\r\n> 2. w/ HEAP_INSERT_FROZEN flag (HEAD) : 221782\r\n> 3. Patched: 443014\r\n> \r\n> Since the 'source' table has 50000000 and each heap_insert() read vm\r\n> buffer, test 1 read pages as many as the number of insertion tuples.\r\n> The value of test 3 is about twice as much as the one of test 2. This\r\n> is because heap_insert() read the vm buffer for each first insertion\r\n> to the page. The table has 221239 blocks.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Fri, 16 Apr 2021 12:16:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Fri, Apr 16, 2021 at 12:16 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 12 Apr 2021 15:20:41 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > .\n> >\n> > On Thu, Mar 11, 2021 at 5:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > While discussing freezing tuples during CTAS[1], we found that\n> > > heap_insert() with HEAP_INSERT_FROZEN brings performance degradation.\n> > > For instance, with Paul's patch that sets HEAP_INSERT_FROZEN to CTAS,\n> > > it took 12 sec whereas the code without the patch took 10 sec with the\n> > > following query:\n> > >\n> > > create table t1 (a, b, c, d) as select i,i,i,i from\n> > > generate_series(1,20000000) i;\n> > >\n> > > I've done a simple benchmark of REFRESH MATERIALIZED VIEW with the\n> > > following queries:\n> > >\n> > > create table source as select generate_series(1, 50000000);\n> > > create materialized view mv as select * from source;\n> > > refresh materialized view mv;\n> > >\n> > > The execution time of REFRESH MATERIALIZED VIEW are:\n> > >\n> > > w/ HEAP_INSERT_FROZEN flag : 42 sec\n> > > w/o HEAP_INSERT_FROZEN flag : 33 sec\n> > >\n> > > After investigation, I found that such performance degradation happens\n> > > on only HEAD code. It seems to me that commit 39b66a91b (and\n> > > 7db0cd2145) is relevant that has heap_insert() set VM bits and\n> > > PD_ALL_VISIBLE if HEAP_INSERT_FROZEN is specified (so CCing Tomas\n> > > Vondra and authors). Since heap_insert() sets PD_ALL_VISIBLE to the\n> > > page when inserting a tuple for the first time on the page (around\n> > > L2133 in heapam.c), every subsequent heap_insert() on the page reads\n> > > and pins a VM buffer (see RelationGetBufferForTuple()).\n> >\n> > IIUC RelationGetBufferForTuple() pins vm buffer if the page is\n> > all-visible since the caller might clear vm bit during operation. But\n> > it's not necessarily true in HEAP_FROZEN_INSERT case. 
When inserting\n> > HEAP_FROZEN_INSERT, we might set PD_ALL_VISIBLE flag and all-visible\n> > bit but never clear those flag and bit during insertion. Therefore to\n> > fix this issue, I think we can have RelationGetBufferForTuple() not to\n> > pin vm buffer if we're inserting a frozen tuple (i.g.,\n> > HEAP_FROZEN_INSERT case) and the target page is already all-visible.\n>\n> It seems right to me.\n>\n> > In HEAP_FROZEN_INSERT, the cases where we need to pin vm buffer would\n> > be the table is empty. That way, we will pin vm buffer only for the\n> > first time of inserting frozen tuple into the empty page, then set\n> > PD_ALL_VISIBLE to the page and all-frozen bit on vm. Also set\n> > XLH_INSERT_ALL_FROZEN_SET to WAL. At further insertions, we would not\n> > pin vm buffer as long as we’re inserting a frozen tuple into the same\n> > page.\n> >\n> > If the target page is neither empty nor all-visible we will not pin vm\n> > buffer, which is fine because if the page has non-frozen tuple we\n> > cannot set bit on vm during heap_insert(). If all tuples on the page\n> > are already frozen but PD_ALL_VISIBLE is not set for some reason, we\n> > would be able to set all-frozen bit on vm but it seems not a good idea\n> > since it requires checking during insertion if all existing tuples are\n> > frozen or not.\n> >\n> > The attached patch does the above idea. With this patch, the same\n> > performance tests took 33 sec.\n\nThank you for the comments.\n\n>\n> Great! The direction of the patch looks fine to me.\n>\n> + * If we're inserting frozen entry into empty page, we will set\n> + * all-visible to page and all-frozen on visibility map.\n> + */\n> + if (PageGetMaxOffsetNumber(page) == 0)\n> all_frozen_set = true;\n>\n> AFAICS the page is always empty when RelationGetBufferForTuple\n> returned a valid vmbuffer. 
So the \"if\" should be an \"assert\" instead.\n\nThere is a chance that RelationGetBufferForTuple() returns a valid\nvmbuffer but the page is not empty, since RelationGetBufferForTuple()\nchecks without a lock if the page is empty. But when it comes to\nHEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\nsince only one process inserts tuples into the relation. Will fix.\n\n>\n> And, the patch changes the value of all_frozen_set to false when the\n> page was already all-frozen (thus not empty). It would be fine since\n> we don't need to change the visibility of the page in that case but\n> the variable name is no longer correct. set_all_visible or such?\n\nIt seems to me that the variable name all_frozen_set corresponds to\nXLH_INSERT_ALL_FROZEN_SET but I see your point. How about\nset_all_frozen instead since we set all-frozen bits (also implying\nsetting all-visible)?\n\nBTW I found the following description of XLH_INSERT_ALL_FROZEN_SET but\nthere is no all_visible_set anywhere:\n\n/* all_frozen_set always implies all_visible_set */\n#define XLH_INSERT_ALL_FROZEN_SET (1<<5)\n\nI'll update those comments as well.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 19 Apr 2021 13:32:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "At Mon, 19 Apr 2021 13:32:31 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \r\n> On Fri, Apr 16, 2021 at 12:16 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > AFAICS the page is always empty when RelationGetBufferForTuple\r\n> > returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\r\n> \r\n> There is a chance that RelationGetBufferForTuple() returns a valid\r\n> vmbuffer but the page is not empty, since RelationGetBufferForTuple()\r\n> checks without a lock if the page is empty. But when it comes to\r\n> HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\r\n> since only one process inserts tuples into the relation. Will fix.\r\n\r\nYes. It seems to me that it is cleaner that RelationGetBufferForTuple\r\nreturns vmbuffer only when the caller needs to change vm state.\r\nThanks.\r\n\r\n> > And, the patch changes the value of all_frozen_set to false when the\r\n> > page was already all-frozen (thus not empty). It would be fine since\r\n> > we don't need to change the visibility of the page in that case but\r\n> > the variable name is no longer correct. set_all_visible or such?\r\n> \r\n> It seems to me that the variable name all_frozen_set corresponds to\r\n> XLH_INSERT_ALL_FROZEN_SET but I see your point. How about\r\n> set_all_frozen instead since we set all-frozen bits (also implying\r\n> setting all-visible)?\r\n\r\nRight. However, \"if (set_all_frozen) then \"set all_visible\" looks like\r\na bug^^;. all_frozen_set looks better in that context than\r\nset_all_frozen. So I withdraw the comment.\r\n\r\n> BTW I found the following description of XLH_INSERT_ALL_FROZEN_SET but\r\n> there is no all_visible_set anywhere:\r\n> \r\n> /* all_frozen_set always implies all_visible_set */\r\n> #define XLH_INSERT_ALL_FROZEN_SET (1<<5)\r\n> \r\n> I'll update those comments as well.\r\n\r\nFWIW, it seems like a shorthand of \"ALL_FROZEN_SET implies ALL_VISIBLE\r\nto be set together\". 
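That implication can be spelled out as a toy check (the two VM bit values mirror PostgreSQL's visibilitymap.h, but treat this as an illustration only, not server code):

```python
# Toy model of the invariant: setting all-frozen implies all-visible.
# Bit values follow visibilitymap.h; this is illustrative Python only.
ALL_VISIBLE = 0x01
ALL_FROZEN = 0x02

def set_all_frozen(vm_bits):
    # The all-frozen bit is never set without the all-visible bit,
    # which is what the XLH_INSERT_ALL_FROZEN_SET comment expresses.
    return vm_bits | ALL_FROZEN | ALL_VISIBLE

assert set_all_frozen(0) == (ALL_VISIBLE | ALL_FROZEN)
assert set_all_frozen(ALL_VISIBLE) & ALL_VISIBLE
```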
The current comment is working to me.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 19 Apr 2021 17:04:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 5:04 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 19 Apr 2021 13:32:31 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > On Fri, Apr 16, 2021 at 12:16 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > AFAICS the page is always empty when RelationGetBufferForTuple\n> > > returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\n> >\n> > There is a chance that RelationGetBufferForTuple() returns a valid\n> > vmbuffer but the page is not empty, since RelationGetBufferForTuple()\n> > checks without a lock if the page is empty. But when it comes to\n> > HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\n> > since only one process inserts tuples into the relation. Will fix.\n>\n> Yes. It seems to me that it is cleaner that RelationGetBufferForTuple\n> returns vmbuffer only when the caller needs to change vm state.\n> Thanks.\n>\n> > > And, the patch changes the value of all_frozen_set to false when the\n> > > page was already all-frozen (thus not empty). It would be fine since\n> > > we don't need to change the visibility of the page in that case but\n> > > the variable name is no longer correct. set_all_visible or such?\n> >\n> > It seems to me that the variable name all_frozen_set corresponds to\n> > XLH_INSERT_ALL_FROZEN_SET but I see your point. How about\n> > set_all_frozen instead since we set all-frozen bits (also implying\n> > setting all-visible)?\n>\n> Right. However, \"if (set_all_frozen) then \"set all_visible\" looks like\n> a bug^^;. all_frozen_set looks better in that context than\n> set_all_frozen. 
So I withdraw the comment.\n>\n> > BTW I found the following description of XLH_INSERT_ALL_FROZEN_SET but\n> > there is no all_visible_set anywhere:\n> >\n> > /* all_frozen_set always implies all_visible_set */\n> > #define XLH_INSERT_ALL_FROZEN_SET (1<<5)\n> >\n> > I'll update those comments as well.\n>\n> FWIW, it seems like a shorthand of \"ALL_FROZEN_SET implies ALL_VISIBLE\n> to be set together\". The current comment is working to me.\n>\n\nOkay, I've updated the patch accordingly. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 19 Apr 2021 17:27:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 1:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 5:04 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 19 Apr 2021 13:32:31 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > On Fri, Apr 16, 2021 at 12:16 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > AFAICS the page is always empty when RelationGetBufferForTuple\n> > > > returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\n> > >\n> > > There is a chance that RelationGetBufferForTuple() returns a valid\n> > > vmbuffer but the page is not empty, since RelationGetBufferForTuple()\n> > > checks without a lock if the page is empty. But when it comes to\n> > > HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\n> > > since only one process inserts tuples into the relation. Will fix.\n> >\n> > Yes. It seems to me that it is cleaner that RelationGetBufferForTuple\n> > returns vmbuffer only when the caller needs to change vm state.\n> > Thanks.\n> >\n> > > > And, the patch changes the value of all_frozen_set to false when the\n> > > > page was already all-frozen (thus not empty). It would be fine since\n> > > > we don't need to change the visibility of the page in that case but\n> > > > the variable name is no longer correct. set_all_visible or such?\n> > >\n> > > It seems to me that the variable name all_frozen_set corresponds to\n> > > XLH_INSERT_ALL_FROZEN_SET but I see your point. How about\n> > > set_all_frozen instead since we set all-frozen bits (also implying\n> > > setting all-visible)?\n> >\n> > Right. However, \"if (set_all_frozen) then \"set all_visible\" looks like\n> > a bug^^;. all_frozen_set looks better in that context than\n> > set_all_frozen. 
So I withdraw the comment.\n> >\n> > > BTW I found the following description of XLH_INSERT_ALL_FROZEN_SET but\n> > > there is no all_visible_set anywhere:\n> > >\n> > > /* all_frozen_set always implies all_visible_set */\n> > > #define XLH_INSERT_ALL_FROZEN_SET (1<<5)\n> > >\n> > > I'll update those comments as well.\n> >\n> > FWIW, it seems like a shorthand of \"ALL_FROZEN_SET implies ALL_VISIBLE\n> > to be set together\". The current comment is working to me.\n> >\n>\n> Okay, I've updated the patch accordingly. Please review it.\n\nI was reading the patch, just found some typos: it should be \"a frozen\ntuple\" not \"an frozen tuple\".\n\n+ * Also pin visibility map page if we're inserting an frozen tuple into\n+ * If we're inserting an frozen entry into empty page, pin the\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Apr 2021 16:33:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 8:04 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 1:57 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 5:04 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Mon, 19 Apr 2021 13:32:31 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > > On Fri, Apr 16, 2021 at 12:16 PM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > > AFAICS the page is always empty when RelationGetBufferForTuple\n> > > > > returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\n> > > >\n> > > > There is a chance that RelationGetBufferForTuple() returns a valid\n> > > > vmbuffer but the page is not empty, since RelationGetBufferForTuple()\n> > > > checks without a lock if the page is empty. But when it comes to\n> > > > HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\n> > > > since only one process inserts tuples into the relation. Will fix.\n> > >\n> > > Yes. It seems to me that it is cleaner that RelationGetBufferForTuple\n> > > returns vmbuffer only when the caller needs to change vm state.\n> > > Thanks.\n> > >\n> > > > > And, the patch changes the value of all_frozen_set to false when the\n> > > > > page was already all-frozen (thus not empty). It would be fine since\n> > > > > we don't need to change the visibility of the page in that case but\n> > > > > the variable name is no longer correct. set_all_visible or such?\n> > > >\n> > > > It seems to me that the variable name all_frozen_set corresponds to\n> > > > XLH_INSERT_ALL_FROZEN_SET but I see your point. How about\n> > > > set_all_frozen instead since we set all-frozen bits (also implying\n> > > > setting all-visible)?\n> > >\n> > > Right. However, \"if (set_all_frozen) then \"set all_visible\" looks like\n> > > a bug^^;. all_frozen_set looks better in that context than\n> > > set_all_frozen. 
So I withdraw the comment.\n> > >\n> > > > BTW I found the following description of XLH_INSERT_ALL_FROZEN_SET but\n> > > > there is no all_visible_set anywhere:\n> > > >\n> > > > /* all_frozen_set always implies all_visible_set */\n> > > > #define XLH_INSERT_ALL_FROZEN_SET (1<<5)\n> > > >\n> > > > I'll update those comments as well.\n> > >\n> > > FWIW, it seems like a shorthand of \"ALL_FROZEN_SET implies ALL_VISIBLE\n> > > to be set together\". The current comment is working to me.\n> > >\n> >\n> > Okay, I've updated the patch accordingly. Please review it.\n>\n> I was reading the patch, just found some typos: it should be \"a frozen\n> tuple\" not \"an frozen tuple\".\n>\n> + * Also pin visibility map page if we're inserting an frozen tuple into\n> + * If we're inserting an frozen entry into empty page, pin the\n\nThank you for the comment.\n\nI’ve updated the patch including the above comment.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 19 Apr 2021 22:51:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Apr 19, 2021 at 7:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I’ve updated the patch including the above comment.\n\nThanks for the patch.\n\nI was trying to understand below statements:\n+ * we check without a buffer lock if the page is empty but the\n+ * caller doesn't need to recheck that since we assume that in\n+ * HEAP_INSERT_FROZEN case, only one process is inserting a\n+ * frozen tuple into this relation.\n+ *\n\nAnd earlier comments from upthread:\n\n>> AFAICS the page is always empty when RelationGetBufferForTuple\n>> returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\n\n> There is a chance that RelationGetBufferForTuple() returns a valid\n> vmbuffer but the page is not empty, since RelationGetBufferForTuple()\n> checks without a lock if the page is empty. But when it comes to\n> HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\n> since only one process inserts tuples into the relation. Will fix.\"\n\nI'm not sure whether it is safe to assume \"at least for now since only\none process inserts tuples into the relation\". What if we allow\nparallel inserts for HEAP_INSERT_FROZEN cases, I don't know whether we\ncan do that or not. 
Correct me if I'm wrong.\n\nWhile we are modifying something in heap_insert:\n1) Can we adjust the comment below in heap_insert to the 80char limit?\n * If we're inserting frozen entry into an empty page,\n * set visibility map bits and PageAllVisible() hint.\n2) I'm thinking whether we can do page = BufferGetPage(buffer); after\nRelationGetBufferForTuple and use in all the places where currently\nBufferGetPage(buffer) is being used:\nif (PageIsAllVisible(BufferGetPage(buffer)),\nPageClearAllVisible(BufferGetPage(buffer)); and we could even remove\nthe local variable page of if (RelationNeedsWAL(relation)).\n3) We could as well get the block number once and use it in all the\nplaces in heap_insert, thus we can remove extra calls of\nBufferGetBlockNumber(buffer).\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Apr 2021 07:34:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 11:04 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 7:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I’ve updated the patch including the above comment.\n>\n> Thanks for the patch.\n>\n> I was trying to understand below statements:\n> + * we check without a buffer lock if the page is empty but the\n> + * caller doesn't need to recheck that since we assume that in\n> + * HEAP_INSERT_FROZEN case, only one process is inserting a\n> + * frozen tuple into this relation.\n> + *\n>\n> And earlier comments from upthread:\n>\n> >> AFAICS the page is always empty when RelationGetBufferForTuple\n> >> returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\n>\n> > There is a chance that RelationGetBufferForTuple() returns a valid\n> > vmbuffer but the page is not empty, since RelationGetBufferForTuple()\n> > checks without a lock if the page is empty. But when it comes to\n> > HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\n> > since only one process inserts tuples into the relation. Will fix.\"\n>\n> I'm not sure whether it is safe to assume \"at least for now since only\n> one process inserts tuples into the relation\". What if we allow\n> parallel inserts for HEAP_INSERT_FROZEN cases, I don't know whether we\n> can do that or not. Correct me if I'm wrong.\n\nI think if my assumption is wrong or we allow parallel insert for\nHEAP_INSERT_FROZEN cases in the future, we need to deal with the case\nwhere frozen tuples are concurrently inserted into the same page. For\nexample, we can release vmbuffer when we see the page is no longer\nempty, or we can return a valid buffer but require the caller to\nre-check if the page is still empty. The previous version patch took\nthe former approach. More concretely, heap_insert() rechecked if the\npage is still empty in HEAP_INSERT_FROZEN case and set all_frozen_set\nif so. 
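The first of those two options, rechecking under the buffer lock and releasing the vm buffer on a lost race, could look roughly like this (illustrative Python only; all names are hypothetical, the real code is C in heapam.c):

```python
# Illustrative sketch of the release-on-race approach for concurrent
# frozen inserts; not PostgreSQL code, names are hypothetical.

def after_locking_page(frozen_insert, vm_pinned, page_is_empty):
    # Re-check, now under the buffer lock, the emptiness test that was
    # made without a lock during buffer selection. Returns the action
    # to take and whether all-frozen may be set on the page.
    if frozen_insert and vm_pinned and not page_is_empty:
        return ('release_vm_buffer', False)  # lost the race; set no bits
    if frozen_insert and vm_pinned:
        return ('keep_vm_buffer', True)      # safe to set all-frozen
    return ('no_vm_buffer', False)

assert after_locking_page(True, True, True) == ('keep_vm_buffer', True)
assert after_locking_page(True, True, False) == ('release_vm_buffer', False)
```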
But AFAICS concurrently inserting frozen tuples into the same\npage doesn’t happen for now (COPY FREEZE and REFRESH MATERIALIZED VIEW\nare users of HEAP_INSERT_FROZEN), also pointed out by Horiguchi-san.\nSo I added comments and assertions rather than addressing the case\nthat never happens with the current code. If concurrently inserting\nfrozen tuples into the same page happens, we should get the assertion\nfailure that I added in RelationGetBufferForTuple().\n\n>\n> While we are modifying something in heap_insert:\n> 1) Can we adjust the comment below in heap_insert to the 80char limit?\n> * If we're inserting frozen entry into an empty page,\n> * set visibility map bits and PageAllVisible() hint.\n> 2) I'm thinking whether we can do page = BufferGetPage(buffer); after\n> RelationGetBufferForTuple and use in all the places where currently\n> BufferGetPage(buffer) is being used:\n> if (PageIsAllVisible(BufferGetPage(buffer)),\n> PageClearAllVisible(BufferGetPage(buffer)); and we could even remove\n> the local variable page of if (RelationNeedsWAL(relation)).\n> 3) We could as well get the block number once and use it in all the\n> places in heap_insert, thus we can remove extra calls of\n> BufferGetBlockNumber(buffer).\n\nAll points are reasonable to me. I'll incorporate them in the next version.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 20 Apr 2021 14:49:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Apr 20, 2021 at 11:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 11:04 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 7:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > I’ve updated the patch including the above comment.\n> >\n> > Thanks for the patch.\n> >\n> > I was trying to understand below statements:\n> > + * we check without a buffer lock if the page is empty but the\n> > + * caller doesn't need to recheck that since we assume that in\n> > + * HEAP_INSERT_FROZEN case, only one process is inserting a\n> > + * frozen tuple into this relation.\n> > + *\n> >\n> > And earlier comments from upthread:\n> >\n> > >> AFAICS the page is always empty when RelationGetBufferForTuple\n> > >> returned a valid vmbuffer. So the \"if\" should be an \"assert\" instead.\n> >\n> > > There is a chance that RelationGetBufferForTuple() returns a valid\n> > > vmbuffer but the page is not empty, since RelationGetBufferForTuple()\n> > > checks without a lock if the page is empty. But when it comes to\n> > > HEAP_INSERT_FROZEN cases it actually doesn’t happen at least for now\n> > > since only one process inserts tuples into the relation. Will fix.\"\n> >\n> > I'm not sure whether it is safe to assume \"at least for now since only\n> > one process inserts tuples into the relation\". What if we allow\n> > parallel inserts for HEAP_INSERT_FROZEN cases, I don't know whether we\n> > can do that or not. Correct me if I'm wrong.\n>\n> I think if my assumption is wrong or we allow parallel insert for\n> HEAP_INSERT_FROZEN cases in the future, we need to deal with the case\n> where frozen tuples are concurrently inserted into the same page. For\n> example, we can release vmbuffer when we see the page is no longer\n> empty, or we can return a valid buffer but require the caller to\n> re-check if the page is still empty. 
The previous version patch took\n> the former approach. More concretely, heap_insert() rechecked if the\n> page is still empty in HEAP_INSERT_FROZEN case and set all_frozen_set\n> if so. But AFAICS concurrently inserting frozen tuples into the same\n> page doesn’t happen for now (COPY FREEZE and REFRESH MATERIALIZED VIEW\n> are users of HEAP_INSERT_FROZEN), also pointed out by Horiguchi-san.\n> So I added comments and assertions rather than addressing the case\n> that never happens with the current code. If concurrently inserting\n> frozen tuples into the same page happens, we should get the assertion\n> failure that I added in RelationGetBufferForTuple().\n\nUpon thinking further, concurrent insertions into the same page are\nnot possible while we are in heap_insert in between\nRelationGetBufferForTuple and UnlockReleaseBuffer(buffer);.\nRelationGetBufferForTuple will lock the buffer in exclusive mode, see\nLockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE); and comment \" * Returns\npinned and exclusive-locked buffer of a page in given relation\". 
Even\nif parallel insertions are allowed in HEAP_INSERT_FROZEN cases, each\nworker will separately acquire pages, insert into them, and skip\ngetting a visibility map page pin if the page is set all-visible by\nanother worker.\n\nSome more comments on the v3 patch:\n1) Isn't it good to specify here what we gain by avoiding pinning the\nvisibility map page, something like: save a few seconds/avoid extra\nfunction calls/or some other better wording?\n+ * If the page already is non-empty and all-visible, we skip to\n+ * pin on a visibility map buffer since we never clear and set\n+ * all-frozen bit on visibility map during inserting a frozen\n+ * tuple.\n+ */\n\n2) Isn't it good to put PageIsAllVisible(BufferGetPage(buffer)) in\nthe if clause instead of the else if clause? Since that branch is\ngoing to be hit most of the time, we could avoid the page-empty check\nevery time.\n+ if (PageGetMaxOffsetNumber(BufferGetPage(buffer)) == 0)\n+ visibilitymap_pin(relation, targetBlock, vmbuffer);\n+ else if (PageIsAllVisible(BufferGetPage(buffer)))\n+ skip_vmbuffer = true;\n\n3) I found another typo in v3 - it is \"will set\" not \"will sets\":\n+ * In HEAP_INSERT_FROZEN cases, we handle the possibility that the\ncaller will\n+ * sets all-frozen bit on the visibility map page. We pin on the visibility\n\n4) I think a commit message can be added to the upcoming patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Apr 2021 12:25:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nI took a look at this today, as I committed 39b66a91b back in January. I \ncan reproduce the issue, with just 1M rows the before/after timings are \nroughly 480ms and 620ms on my hardware.\n\nUnfortunately, the v3 patch does not really fix the issue for me. The \ntiming with it applied is ~610ms so the improvement is only minimal.\n\nI'm not sure what to do about this :-( I don't have any ideas about how \nto eliminate this overhead, so the only option I see is reverting the \nchanges in heap_insert. Unfortunately, that'd mean inserts into TOAST \ntables won't be frozen ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Apr 2021 15:31:02 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n> I'm not sure what to do about this :-( I don't have any ideas about how to\n> eliminate this overhead, so the only option I see is reverting the changes\n> in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n> be frozen ...\n\nISTM that the fundamental issue here is not that we acquire pins that we\nshouldn't, but that we do so at a much higher frequency than needed.\n\nIt's probably too invasive for 14, but I think it might be worth exploring\npassing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\nthe input will be more than one row.\n\nAnd then add the vm buffer of the target page to BulkInsertState, so that\nhio.c can avoid re-pinning the buffer.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Apr 2021 12:27:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 4/26/21 9:27 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n>> I'm not sure what to do about this :-( I don't have any ideas about how to\n>> eliminate this overhead, so the only option I see is reverting the changes\n>> in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n>> be frozen ...\n> \n> ISTM that the fundamental issue here is not that we acquire pins that we\n> shouldn't, but that we do so at a much higher frequency than needed.\n> \n> It's probably too invasive for 14, but I think it might be worth exploring\n> passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n> the input will be more than one row.\n> \n> And then add the vm buffer of the target page to BulkInsertState, so that\n> hio.c can avoid re-pinning the buffer.\n> \n\nYeah. The question still is what to do about 14, though. Shall we leave \nthe code as it is now, or should we change it somehow? It seem a bit \nunfortunate that a COPY FREEZE optimization should negatively influence \nother (more) common use cases, so I guess we can't just keep the current \ncode ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Apr 2021 23:59:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n> On 4/26/21 9:27 PM, Andres Freund wrote:\n> > On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n> > > I'm not sure what to do about this :-( I don't have any ideas about how to\n> > > eliminate this overhead, so the only option I see is reverting the changes\n> > > in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n> > > be frozen ...\n> > \n> > ISTM that the fundamental issue here is not that we acquire pins that we\n> > shouldn't, but that we do so at a much higher frequency than needed.\n> > \n> > It's probably too invasive for 14, but I think it might be worth exploring\n> > passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n> > the input will be more than one row.\n> > \n> > And then add the vm buffer of the target page to BulkInsertState, so that\n> > hio.c can avoid re-pinning the buffer.\n> > \n> \n> Yeah. The question still is what to do about 14, though. Shall we leave the\n> code as it is now, or should we change it somehow? It seem a bit unfortunate\n> that a COPY FREEZE optimization should negatively influence other (more)\n> common use cases, so I guess we can't just keep the current code ...\n\nI'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\nand see whether that fixes the regression. If it does, then we can\nanalyze whether that's possibly the best way forward. Or whether we\nrevert, live with the regression or find yet another path.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Apr 2021 16:07:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Mon, Apr 26, 2021 at 10:31 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I took a look at this today, as I committed 39b66a91b back in January. I\n> can reproduce the issue, with just 1M rows the before/after timings are\n> roughly 480ms and 620ms on my hardware.\n>\n> Unfortunately, the v3 patch does not really fix the issue for me. The\n> timing with it applied is ~610ms so the improvement is only minimal.\n\nSince the reading vmbuffer is likely to hit on the shared buffer\nduring inserting frozen tuples, I think the improvement would not be\nvisible with a few million tuples depending on hardware. But it might\nnot be as fast as before commit 39b66a91b since we read vmbuffer at\nleast per insertion.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 27 Apr 2021 10:47:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n> > On 4/26/21 9:27 PM, Andres Freund wrote:\n> > > On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n> > > > I'm not sure what to do about this :-( I don't have any ideas about how to\n> > > > eliminate this overhead, so the only option I see is reverting the changes\n> > > > in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n> > > > be frozen ...\n> > >\n> > > ISTM that the fundamental issue here is not that we acquire pins that we\n> > > shouldn't, but that we do so at a much higher frequency than needed.\n> > >\n> > > It's probably too invasive for 14, but I think it might be worth exploring\n> > > passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n> > > the input will be more than one row.\n> > >\n> > > And then add the vm buffer of the target page to BulkInsertState, so that\n> > > hio.c can avoid re-pinning the buffer.\n> > >\n> >\n> > Yeah. The question still is what to do about 14, though. Shall we leave the\n> > code as it is now, or should we change it somehow? It seem a bit unfortunate\n> > that a COPY FREEZE optimization should negatively influence other (more)\n> > common use cases, so I guess we can't just keep the current code ...\n>\n> I'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\n> and see whether that fixes the regression.\n\nIs this idea to have RelationGetBufferForTuple() skip re-pinning\nvmbuffer? If so, is this essentially the same as the one in the v3\npatch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 27 Apr 2021 14:34:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 4/27/21 7:34 AM, Masahiko Sawada wrote:\n> On Tue, Apr 27, 2021 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> Hi,\n>>\n>> On 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n>>> On 4/26/21 9:27 PM, Andres Freund wrote:\n>>>> On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n>>>>> I'm not sure what to do about this :-( I don't have any ideas about how to\n>>>>> eliminate this overhead, so the only option I see is reverting the changes\n>>>>> in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n>>>>> be frozen ...\n>>>>\n>>>> ISTM that the fundamental issue here is not that we acquire pins that we\n>>>> shouldn't, but that we do so at a much higher frequency than needed.\n>>>>\n>>>> It's probably too invasive for 14, but I think it might be worth exploring\n>>>> passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n>>>> the input will be more than one row.\n>>>>\n>>>> And then add the vm buffer of the target page to BulkInsertState, so that\n>>>> hio.c can avoid re-pinning the buffer.\n>>>>\n>>>\n>>> Yeah. The question still is what to do about 14, though. Shall we leave the\n>>> code as it is now, or should we change it somehow? It seem a bit unfortunate\n>>> that a COPY FREEZE optimization should negatively influence other (more)\n>>> common use cases, so I guess we can't just keep the current code ...\n>>\n>> I'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\n>> and see whether that fixes the regression.\n> \n> Is this idea to have RelationGetBufferForTuple() skip re-pinning\n> vmbuffer? 
If so, is this essentially the same as the one in the v3\n> patch?\n> \n\nI don't think it is the same approach - it's a bit hard to follow what \nexactly happens in RelationGetBufferForTuple, but AFAICS it always \nstarts with vmbuffer = InvalidBuffer, so it may pin the vmbuffer quite \noften, no?\n\nWhat Andres is suggesting (I think) is to modify ExecInsert() to pass a \nvalid bistate to table_tuple_insert, instead of just NULL, and store the \nvmbuffer in it. Not sure how to identify when inserting more than just a \nsingle row, though ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 27 Apr 2021 15:43:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 7:13 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n> valid bistate to table_tuple_insert, instead of just NULL, and store the\n> vmbuffer in it. Not sure how to identify when inserting more than just a\n> single row, though ...\n\nI think the thread \"should INSERT SELECT use a BulkInsertState?\" [1],\nhas a simple dynamic mechanism [with a GUC defining the threshold\ntuples] to switch over to using BulkInsertState. Maybe it's worth\nhaving a look at the patch -\n0001-INSERT-SELECT-to-use-BulkInsertState-and-multi_i.patch?\n\n+ /* Use bulk insert after a threshold number of tuples */\n+ // XXX: maybe this should only be done if it's not a partitioned table or\n+ // if the partitions don't support miinfo, which uses its own bistates\n+ mtstate->ntuples++;\n+ if (mtstate->bistate == NULL &&\n+ mtstate->operation == CMD_INSERT &&\n+ mtstate->ntuples > bulk_insert_ntuples &&\n+ bulk_insert_ntuples >= 0)\n+ {\n+ elog(DEBUG1, \"enabling bulk insert\");\n+ mtstate->bistate = GetBulkInsertState();\n+ }\n\n[1] https://www.postgresql.org/message-id/20210222030158.GS14772%40telsasoft.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Apr 2021 19:23:32 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 03:43:07PM +0200, Tomas Vondra wrote:\n> On 4/27/21 7:34 AM, Masahiko Sawada wrote:\n> > On Tue, Apr 27, 2021 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n> > > > On 4/26/21 9:27 PM, Andres Freund wrote:\n> > > > > On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n> > > > > > I'm not sure what to do about this :-( I don't have any ideas about how to\n> > > > > > eliminate this overhead, so the only option I see is reverting the changes\n> > > > > > in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n> > > > > > be frozen ...\n> > > > > \n> > > > > ISTM that the fundamental issue here is not that we acquire pins that we\n> > > > > shouldn't, but that we do so at a much higher frequency than needed.\n> > > > > \n> > > > > It's probably too invasive for 14, but I think it might be worth exploring\n> > > > > passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n> > > > > the input will be more than one row.\n> > > > > \n> > > > > And then add the vm buffer of the target page to BulkInsertState, so that\n> > > > > hio.c can avoid re-pinning the buffer.\n> > > > > \n> > > > \n> > > > Yeah. The question still is what to do about 14, though. Shall we leave the\n> > > > code as it is now, or should we change it somehow? It seem a bit unfortunate\n> > > > that a COPY FREEZE optimization should negatively influence other (more)\n> > > > common use cases, so I guess we can't just keep the current code ...\n> > > \n> > > I'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\n> > > and see whether that fixes the regression.\n> \n> What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n> valid bistate to table_tuple_insert, instead of just NULL, and store the\n> vmbuffer in it. 
Not sure how to identify when inserting more than just a\n> single row, though ...\n\nMaybe this is relevant.\nhttps://commitfest.postgresql.org/33/2553/\n| INSERT SELECT: BulkInsertState and table_multi_insert\n\nThe bistate part is small - Simon requested to also use table_multi_insert,\nwhich makes the patch much bigger, and there are probably lots of restrictions I\nhaven't even thought to check.\n\nThis uses a GUC threshold for bulk insert, but I'm still not sure it's really\nproblematic to use a bistate for a single row.\n\n /* Use bulk insert after a threshold number of tuples */\n // XXX: maybe this should only be done if it's not a partitioned table or\n // if the partitions don't support miinfo, which uses its own bistates\n mtstate->ntuples++;\n if (mtstate->bistate == NULL &&\n mtstate->ntuples > bulk_insert_ntuples &&\n bulk_insert_ntuples >= 0)\n {\n elog(DEBUG1, \"enabling bulk insert\");\n mtstate->bistate = GetBulkInsertState();\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 27 Apr 2021 09:03:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, Apr 27, 2021 at 10:43 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 4/27/21 7:34 AM, Masahiko Sawada wrote:\n> > On Tue, Apr 27, 2021 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n> >>> On 4/26/21 9:27 PM, Andres Freund wrote:\n> >>>> On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n> >>>>> I'm not sure what to do about this :-( I don't have any ideas about how to\n> >>>>> eliminate this overhead, so the only option I see is reverting the changes\n> >>>>> in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n> >>>>> be frozen ...\n> >>>>\n> >>>> ISTM that the fundamental issue here is not that we acquire pins that we\n> >>>> shouldn't, but that we do so at a much higher frequency than needed.\n> >>>>\n> >>>> It's probably too invasive for 14, but I think it might be worth exploring\n> >>>> passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n> >>>> the input will be more than one row.\n> >>>>\n> >>>> And then add the vm buffer of the target page to BulkInsertState, so that\n> >>>> hio.c can avoid re-pinning the buffer.\n> >>>>\n> >>>\n> >>> Yeah. The question still is what to do about 14, though. Shall we leave the\n> >>> code as it is now, or should we change it somehow? It seem a bit unfortunate\n> >>> that a COPY FREEZE optimization should negatively influence other (more)\n> >>> common use cases, so I guess we can't just keep the current code ...\n> >>\n> >> I'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\n> >> and see whether that fixes the regression.\n> >\n> > Is this idea to have RelationGetBufferForTuple() skip re-pinning\n> > vmbuffer? 
If so, is this essentially the same as the one in the v3\n> > patch?\n> >\n>\n> I don't think it is the same approach - it's a bit hard to follow what\n> exactly happens in RelationGetBufferForTuple, but AFAICS it always\n> starts with vmbuffer = InvalidBuffer, so it may pin the vmbuffer quite\n> often, no?\n\nWith that patch, we pin the vmbuffer only when inserting a frozen\ntuple into an empty page. That is, when inserting a frozen tuple into\nan empty page, we pin the vmbuffer and heap_insert() will mark the\npage all-visible and set all-frozen bit on vm. And from the next\ninsertion (into the same page) until the page gets full, since the\npage is already all-visible, we skip pinning the vmbuffer. IOW, if the\ntarget page is not empty but all-visible, we skip pinning the\nvmbuffer. We pin the vmbuffer only once per heap page used during\ninsertion.\n\n>\n> What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n> valid bistate to table_tuple_insert, instead of just NULL, and store the\n> vmbuffer in it.\n\nUnderstood. This approach keeps using the same vmbuffer until we need\nanother vm page corresponding to the target heap page, which seems\nbetter.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 28 Apr 2021 00:26:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, Apr 28, 2021 at 12:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 10:43 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > On 4/27/21 7:34 AM, Masahiko Sawada wrote:\n> > > On Tue, Apr 27, 2021 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:\n> > >>\n> > >> Hi,\n> > >>\n> > >> On 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n> > >>> On 4/26/21 9:27 PM, Andres Freund wrote:\n> > >>>> On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n> > >>>>> I'm not sure what to do about this :-( I don't have any ideas about how to\n> > >>>>> eliminate this overhead, so the only option I see is reverting the changes\n> > >>>>> in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n> > >>>>> be frozen ...\n> > >>>>\n> > >>>> ISTM that the fundamental issue here is not that we acquire pins that we\n> > >>>> shouldn't, but that we do so at a much higher frequency than needed.\n> > >>>>\n> > >>>> It's probably too invasive for 14, but I think it might be worth exploring\n> > >>>> passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n> > >>>> the input will be more than one row.\n> > >>>>\n> > >>>> And then add the vm buffer of the target page to BulkInsertState, so that\n> > >>>> hio.c can avoid re-pinning the buffer.\n> > >>>>\n> > >>>\n> > >>> Yeah. The question still is what to do about 14, though. Shall we leave the\n> > >>> code as it is now, or should we change it somehow? It seem a bit unfortunate\n> > >>> that a COPY FREEZE optimization should negatively influence other (more)\n> > >>> common use cases, so I guess we can't just keep the current code ...\n> > >>\n> > >> I'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\n> > >> and see whether that fixes the regression.\n> > >\n> > > Is this idea to have RelationGetBufferForTuple() skip re-pinning\n> > > vmbuffer? 
If so, is this essentially the same as the one in the v3\n> > > patch?\n> > >\n> >\n> > I don't think it is the same approach - it's a bit hard to follow what\n> > exactly happens in RelationGetBufferForTuple, but AFAICS it always\n> > starts with vmbuffer = InvalidBuffer, so it may pin the vmbuffer quite\n> > often, no?\n>\n> With that patch, we pin the vmbuffer only when inserting a frozen\n> tuple into an empty page. That is, when inserting a frozen tuple into\n> an empty page, we pin the vmbuffer and heap_insert() will mark the\n> page all-visible and set all-frozen bit on vm. And from the next\n> insertion (into the same page) until the page gets full, since the\n> page is already all-visible, we skip pinning the vmbuffer. IOW, if the\n> target page is not empty but all-visible, we skip pinning the\n> vmbuffer. We pin the vmbuffer only once per heap page used during\n> insertion.\n>\n> >\n> > What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n> > valid bistate to table_tuple_insert, instead of just NULL, and store the\n> > vmbuffer in it.\n>\n> Understood. This approach keeps using the same vmbuffer until we need\n> another vm page corresponding to the target heap page, which seems\n> better.\n\nBut how is ExecInsert() related to REFRESH MATERIALIZED VIEW?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 28 Apr 2021 00:44:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 4/27/21 5:44 PM, Masahiko Sawada wrote:\n> On Wed, Apr 28, 2021 at 12:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Tue, Apr 27, 2021 at 10:43 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>>\n>>>\n>>> On 4/27/21 7:34 AM, Masahiko Sawada wrote:\n>>>> On Tue, Apr 27, 2021 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:\n>>>>>\n>>>>> Hi,\n>>>>>\n>>>>> On 2021-04-26 23:59:17 +0200, Tomas Vondra wrote:\n>>>>>> On 4/26/21 9:27 PM, Andres Freund wrote:\n>>>>>>> On 2021-04-26 15:31:02 +0200, Tomas Vondra wrote:\n>>>>>>>> I'm not sure what to do about this :-( I don't have any ideas about how to\n>>>>>>>> eliminate this overhead, so the only option I see is reverting the changes\n>>>>>>>> in heap_insert. Unfortunately, that'd mean inserts into TOAST tables won't\n>>>>>>>> be frozen ...\n>>>>>>>\n>>>>>>> ISTM that the fundamental issue here is not that we acquire pins that we\n>>>>>>> shouldn't, but that we do so at a much higher frequency than needed.\n>>>>>>>\n>>>>>>> It's probably too invasive for 14, but I think it might be worth exploring\n>>>>>>> passing down a BulkInsertState in nodeModifyTable.c's table_tuple_insert() iff\n>>>>>>> the input will be more than one row.\n>>>>>>>\n>>>>>>> And then add the vm buffer of the target page to BulkInsertState, so that\n>>>>>>> hio.c can avoid re-pinning the buffer.\n>>>>>>>\n>>>>>>\n>>>>>> Yeah. The question still is what to do about 14, though. Shall we leave the\n>>>>>> code as it is now, or should we change it somehow? It seem a bit unfortunate\n>>>>>> that a COPY FREEZE optimization should negatively influence other (more)\n>>>>>> common use cases, so I guess we can't just keep the current code ...\n>>>>>\n>>>>> I'd suggest prototyping the use of BulkInsertState in nodeModifyTable.c\n>>>>> and see whether that fixes the regression.\n>>>>\n>>>> Is this idea to have RelationGetBufferForTuple() skip re-pinning\n>>>> vmbuffer? 
If so, is this essentially the same as the one in the v3\n>>>> patch?\n>>>>\n>>>\n>>> I don't think it is the same approach - it's a bit hard to follow what\n>>> exactly happens in RelationGetBufferForTuple, but AFAICS it always\n>>> starts with vmbuffer = InvalidBuffer, so it may pin the vmbuffer quite\n>>> often, no?\n>>\n>> With that patch, we pin the vmbuffer only when inserting a frozen\n>> tuple into an empty page. That is, when inserting a frozen tuple into\n>> an empty page, we pin the vmbuffer and heap_insert() will mark the\n>> page all-visible and set all-frozen bit on vm. And from the next\n>> insertion (into the same page) until the page gets full, since the\n>> page is already all-visible, we skip pinning the vmbuffer. IOW, if the\n>> target page is not empty but all-visible, we skip pinning the\n>> vmbuffer. We pin the vmbuffer only once per heap page used during\n>> insertion.\n>>\n>>>\n>>> What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n>>> valid bistate to table_tuple_insert, instead of just NULL, and store the\n>>> vmbuffer in it.\n>>\n>> Understood. This approach keeps using the same vmbuffer until we need\n>> another vm page corresponding to the target heap page, which seems\n>> better.\n> \n> But how is ExecInsert() related to REFRESH MATERIALIZED VIEW?\n> \n\nTBH I haven't looked into the details, but Andres talked about \nnodeModifyTable and table_tuple_insert, and ExecInsert is the only place \ncalling it. But maybe I'm just confused and Andres meant something else?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 27 Apr 2021 18:24:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-28 00:44:47 +0900, Masahiko Sawada wrote:\n> On Wed, Apr 28, 2021 at 12:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n> > > valid bistate to table_tuple_insert, instead of just NULL, and store the\n> > > vmbuffer in it.\n> >\n> > Understood. This approach keeps using the same vmbuffer until we need\n> > another vm page corresponding to the target heap page, which seems\n> > better.\n> \n> But how is ExecInsert() related to REFRESH MATERIALIZED VIEW?\n\nI was thinking of the CONCURRENTLY path for REFRESH MATERIALIZED VIEW I\nthink. Or something.\n\nThat actually makes it easier - we already pass in a bistate in the relevant\npaths. So if we add a current_vmbuf to BulkInsertStateData, we can avoid\nneeding to pin so often. It seems that'd end up with a good bit cleaner and\nless risky code than the skip_vmbuffer_for_frozen_tuple_insertion_v3.patch\napproach.\n\nThe current RelationGetBufferForTuple() interface / how it's used in heapam.c\ndoesn't make this quite as trivial as it could be... Attached is a quick hack\nimplementing this. For me it reduces the overhead noticably:\n\nREFRESH MATERIALIZED VIEW mv;\nbefore:\nTime: 26542.333 ms (00:26.542)\nafter:\nTime: 23105.047 ms (00:23.105)\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 27 Apr 2021 11:22:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 4/27/21 8:22 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-04-28 00:44:47 +0900, Masahiko Sawada wrote:\n>> On Wed, Apr 28, 2021 at 12:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>> What Andres is suggesting (I think) is to modify ExecInsert() to pass a\n>>>> valid bistate to table_tuple_insert, instead of just NULL, and store the\n>>>> vmbuffer in it.\n>>>\n>>> Understood. This approach keeps using the same vmbuffer until we need\n>>> another vm page corresponding to the target heap page, which seems\n>>> better.\n>>\n>> But how is ExecInsert() related to REFRESH MATERIALIZED VIEW?\n> \n> I was thinking of the CONCURRENTLY path for REFRESH MATERIALIZED VIEW I\n> think. Or something.\n> \n> That actually makes it easier - we already pass in a bistate in the relevant\n> paths. So if we add a current_vmbuf to BulkInsertStateData, we can avoid\n> needing to pin so often. It seems that'd end up with a good bit cleaner and\n> less risky code than the skip_vmbuffer_for_frozen_tuple_insertion_v3.patch\n> approach.\n> \n> The current RelationGetBufferForTuple() interface / how it's used in heapam.c\n> doesn't make this quite as trivial as it could be... Attached is a quick hack\n> implementing this. For me it reduces the overhead noticably:\n> \n> REFRESH MATERIALIZED VIEW mv;\n> before:\n> Time: 26542.333 ms (00:26.542)\n> after:\n> Time: 23105.047 ms (00:23.105)\n> \n\nThanks, that looks promising. I repeated the tests I did on 26/4, and \nthe results look like this:\n\nold (0c7d3bb99): 497ms\nmaster: 621ms\npatched: 531ms\n\nSo yeah, that's a bit improvement - it does not remove the regression \nentirely, but +5% is much better than +25%.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 May 2021 15:04:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, May 05, 2021 at 03:04:53PM +0200, Tomas Vondra wrote:\n> Thanks, that looks promising. I repeated the tests I did on 26/4, and the\n> results look like this:\n> \n> old (0c7d3bb99): 497ms\n> master: 621ms\n> patched: 531ms\n> \n> So yeah, that's a bit improvement - it does not remove the regression\n> entirely, but +5% is much better than +25%.\n\nHmm. Is that really something we should do after feature freeze? A\n25% degradation for matview refresh may be a problem for a lot of\nusers and could be an upgrade stopper. Another thing we could do is\nalso to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\nproper solution for this performance problem for matviews for 15~.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 11 May 2021 16:37:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, May 11, 2021 at 4:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 05, 2021 at 03:04:53PM +0200, Tomas Vondra wrote:\n> > Thanks, that looks promising. I repeated the tests I did on 26/4, and the\n> > results look like this:\n> >\n> > old (0c7d3bb99): 497ms\n> > master: 621ms\n> > patched: 531ms\n> >\n> > So yeah, that's a bit improvement - it does not remove the regression\n> > entirely, but +5% is much better than +25%.\n>\n> Hmm. Is that really something we should do after feature freeze? A\n> 25% degradation for matview refresh may be a problem for a lot of\n> users and could be an upgrade stopper. Another thing we could do is\n> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n> proper solution for this performance problem for matviews for 15~.\n\nI think the approach proposed by Andres eliminates the extra vmbuffer\nreads as much as possible. But even with the patch, there still is 5%\ndegradation (and there is no way to disable inserting frozen tuples at\nmatview refresh). Which could be a problem for some users. I think\nit’s hard to completely eliminate the overhead so we might need to\nconsider another approach like having matview refresh use\nheap_multi_insert() instead of heap_insert().\n\nI think the changes for heap_multi_insert() are fine so we can revert\nonly heap_insert() part if we revert something from the v14 tree,\nalthough we will end up not inserting frozen tuples into toast tables.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 11 May 2021 18:04:06 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, May 11, 2021 at 2:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I think the approach proposed by Andres eliminates the extra vmbuffer\n> reads as much as possible. But even with the patch, there still is 5%\n> degradation (and there is no way to disable inserting frozen tuples at\n> matview refresh). Which could be a problem for some users. I think\n> it’s hard to completely eliminate the overhead so we might need to\n> consider another approach like having matview refresh use\n> heap_multi_insert() instead of heap_insert().\n\nI may not have understood what's being discussed here completely, but\nif you want to use multi inserts for refresh matview code, maybe the\n\"New Table Access Methods for Multi and Single Inserts\" patches at\n[1], can help.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACXdrOmB6Na9amHWZHKvRT3Z0nwTRsCwoMT-npOBtmXLXg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 May 2021 16:28:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 5/11/21 12:58 PM, Bharath Rupireddy wrote:\n> On Tue, May 11, 2021 at 2:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> I think the approach proposed by Andres eliminates the extra vmbuffer\n>> reads as much as possible. But even with the patch, there still is 5%\n>> degradation (and there is no way to disable inserting frozen tuples at\n>> matview refresh). Which could be a problem for some users. I think\n>> it’s hard to completely eliminate the overhead so we might need to\n>> consider another approach like having matview refresh use\n>> heap_multi_insert() instead of heap_insert().\n> \n> I may not have understood what's being discussed here completely, but\n> if you want to use multi inserts for refresh matview code, maybe the\n> \"New Table Access Methods for Multi and Single Inserts\" patches at\n> [1], can help.\n> \n\nMaybe, but I think the main question is what to do for v14, so the \nuncommitted patch is kinda irrelevant.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 May 2021 15:58:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n> On Tue, May 11, 2021 at 4:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Wed, May 05, 2021 at 03:04:53PM +0200, Tomas Vondra wrote:\n>>> Thanks, that looks promising. I repeated the tests I did on 26/4, and the\n>>> results look like this:\n>>>\n>>> old (0c7d3bb99): 497ms\n>>> master: 621ms\n>>> patched: 531ms\n>>>\n>>> So yeah, that's a bit improvement - it does not remove the regression\n>>> entirely, but +5% is much better than +25%.\n>>\n>> Hmm. Is that really something we should do after feature freeze? A\n>> 25% degradation for matview refresh may be a problem for a lot of\n>> users and could be an upgrade stopper. Another thing we could do is\n>> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n>> proper solution for this performance problem for matviews for 15~.\n> \n> I think the approach proposed by Andres eliminates the extra vmbuffer\n> reads as much as possible. But even with the patch, there still is 5%\n> degradation (and there is no way to disable inserting frozen tuples at\n> matview refresh). Which could be a problem for some users. I think\n> it’s hard to completely eliminate the overhead so we might need to\n> consider another approach like having matview refresh use\n> heap_multi_insert() instead of heap_insert().\n> \n\nI think it's way too late to make such significant change (switching to \nheap_multi_insert) for v14 :-( Moreover, I doubt it affects just matview \nrefresh - why wouldn't it affect other similar use cases? 
More likely \nit's just the case that was discovered.\n\n> I think the changes for heap_multi_insert() are fine so we can revert\n> only heap_insert() part if we revert something from the v14 tree,\n> although we will end up not inserting frozen tuples into toast tables.\n> \n\nI'd be somewhat unhappy about reverting just this bit, because it'd mean \nthat we freeze rows in the main table but not rows in the TOAST tables \n(that was kinda why we concluded we need the heap_insert part too).\n\nI'm still a bit puzzled where does the extra overhead (in cases when \nfreeze is not requested) come from, TBH. Intuitively, I'd hope there's a \nway to eliminate that entirely, and only pay the cost when requested \n(with the expectation that it's cheaper than freezing it that later).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 May 2021 16:07:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, May 11, 2021 at 11:07 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n> > On Tue, May 11, 2021 at 4:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Wed, May 05, 2021 at 03:04:53PM +0200, Tomas Vondra wrote:\n> >>> Thanks, that looks promising. I repeated the tests I did on 26/4, and the\n> >>> results look like this:\n> >>>\n> >>> old (0c7d3bb99): 497ms\n> >>> master: 621ms\n> >>> patched: 531ms\n> >>>\n> >>> So yeah, that's a bit improvement - it does not remove the regression\n> >>> entirely, but +5% is much better than +25%.\n> >>\n> >> Hmm. Is that really something we should do after feature freeze? A\n> >> 25% degradation for matview refresh may be a problem for a lot of\n> >> users and could be an upgrade stopper. Another thing we could do is\n> >> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n> >> proper solution for this performance problem for matviews for 15~.\n> >\n> > I think the approach proposed by Andres eliminates the extra vmbuffer\n> > reads as much as possible. But even with the patch, there still is 5%\n> > degradation (and there is no way to disable inserting frozen tuples at\n> > matview refresh). Which could be a problem for some users. I think\n> > it’s hard to completely eliminate the overhead so we might need to\n> > consider another approach like having matview refresh use\n> > heap_multi_insert() instead of heap_insert().\n> >\n>\n> I think it's way too late to make such significant change (switching to\n> heap_multi_insert) for v14 :-(\n\nRight.\n\n> Moreover, I doubt it affects just matview\n> refresh - why wouldn't it affect other similar use cases? 
More likely\n> it's just the case that was discovered.\n\nI've not tested yet but I guess COPY FROM … FREEZE using heap_insert\nwould similarly be affected since it also uses heap_insert() with\nTABLE_INSERT_FROZEN.\n\n>\n> > I think the changes for heap_multi_insert() are fine so we can revert\n> > only heap_insert() part if we revert something from the v14 tree,\n> > although we will end up not inserting frozen tuples into toast tables.\n> >\n>\n> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n> that we freeze rows in the main table but not rows in the TOAST tables\n> (that was kinda why we concluded we need the heap_insert part too).\n>\n> I'm still a bit puzzled where does the extra overhead (in cases when\n> freeze is not requested) come from, TBH.\n\nWhich cases do you mean? Doesn't matview refresh always request to\nfreeze tuples even after applying the patch proposed on this thread?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 12 May 2021 00:56:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-11 16:07:44 +0200, Tomas Vondra wrote:\n> On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n> > I think the changes for heap_multi_insert() are fine so we can revert\n> > only heap_insert() part if we revert something from the v14 tree,\n> > although we will end up not inserting frozen tuples into toast tables.\n> > \n> \n> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n> that we freeze rows in the main table but not rows in the TOAST tables (that\n> was kinda why we concluded we need the heap_insert part too).\n\nIs there a reason not to apply a polished version of my proposal? And\nthen to look at the remaining difference?\n\n\n> I'm still a bit puzzled where does the extra overhead (in cases when freeze\n> is not requested) come from, TBH. Intuitively, I'd hope there's a way to\n> eliminate that entirely, and only pay the cost when requested (with the\n> expectation that it's cheaper than freezing it that later).\n\nI'd like to see a profile comparison between those two cases. Best with\nboth profiles done in master, just once with the freeze path disabled...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 May 2021 10:25:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 5/11/21 5:56 PM, Masahiko Sawada wrote:\n> On Tue, May 11, 2021 at 11:07 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n>>> On Tue, May 11, 2021 at 4:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>>>\n>>>> On Wed, May 05, 2021 at 03:04:53PM +0200, Tomas Vondra wrote:\n>>>>> Thanks, that looks promising. I repeated the tests I did on 26/4, and the\n>>>>> results look like this:\n>>>>>\n>>>>> old (0c7d3bb99): 497ms\n>>>>> master: 621ms\n>>>>> patched: 531ms\n>>>>>\n>>>>> So yeah, that's a bit improvement - it does not remove the regression\n>>>>> entirely, but +5% is much better than +25%.\n>>>>\n>>>> Hmm. Is that really something we should do after feature freeze? A\n>>>> 25% degradation for matview refresh may be a problem for a lot of\n>>>> users and could be an upgrade stopper. Another thing we could do is\n>>>> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n>>>> proper solution for this performance problem for matviews for 15~.\n>>>\n>>> I think the approach proposed by Andres eliminates the extra vmbuffer\n>>> reads as much as possible. But even with the patch, there still is 5%\n>>> degradation (and there is no way to disable inserting frozen tuples at\n>>> matview refresh). Which could be a problem for some users. I think\n>>> it’s hard to completely eliminate the overhead so we might need to\n>>> consider another approach like having matview refresh use\n>>> heap_multi_insert() instead of heap_insert().\n>>>\n>>\n>> I think it's way too late to make such significant change (switching to\n>> heap_multi_insert) for v14 :-(\n> \n> Right.\n> \n>> Moreover, I doubt it affects just matview\n>> refresh - why wouldn't it affect other similar use cases? 
More likely\n>> it's just the case that was discovered.\n> \n> I've not tested yet but I guess COPY FROM … FREEZE using heap_insert\n> would similarly be affected since it also uses heap_insert() with\n> TABLE_INSERT_FROZEN.\n> \n\nI'd say that's somewhat acceptable, as it's a trade-off between paying a \nbit of time during COPY vs. paying much more later (when freezing the \nrows eventually).\n\n From my POV the problem here is we've not asked to freeze the rows \n(unless I'm missing something and REFRESH freezes them?), but it's still \na bit slower. However, 5% might also be just noise due to changes in \nlayout of the binary.\n\n>>\n>>> I think the changes for heap_multi_insert() are fine so we can revert\n>>> only heap_insert() part if we revert something from the v14 tree,\n>>> although we will end up not inserting frozen tuples into toast tables.\n>>>\n>>\n>> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n>> that we freeze rows in the main table but not rows in the TOAST tables\n>> (that was kinda why we concluded we need the heap_insert part too).\n>>\n>> I'm still a bit puzzled where does the extra overhead (in cases when\n>> freeze is not requested) come from, TBH.\n> \n> Which cases do you mean? Doesn't matview refresh always request to\n> freeze tuples even after applying the patch proposed on this thread?\n> \n\nOh, I didn't realize that! That'd make this much less of an issue, I'd \nsay, because if we're intentionally freezing the rows it's reasonable to \npay a bit of time (in exchange for not having to do it later). The \noriginal +25% was a bit too much, of course, but +5% seems reasonable.\n\nFWIW I'm on vacation until the end of this week, I can't do much testing \nat the moment. Sorry.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 May 2021 19:32:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 5/11/21 7:25 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-05-11 16:07:44 +0200, Tomas Vondra wrote:\n>> On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n>>> I think the changes for heap_multi_insert() are fine so we can revert\n>>> only heap_insert() part if we revert something from the v14 tree,\n>>> although we will end up not inserting frozen tuples into toast tables.\n>>>\n>>\n>> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n>> that we freeze rows in the main table but not rows in the TOAST tables (that\n>> was kinda why we concluded we need the heap_insert part too).\n> \n> Is there a reason not to apply a polished version of my proposal? And\n> then to look at the remaining difference?\n> \n\nProbably not, I was just a little bit confused what exactly is going on, \nunsure what to do about it. But if RMV freezes the rows, that probably \nexplains it and your patch is the way to go.\n\n> \n>> I'm still a bit puzzled where does the extra overhead (in cases when freeze\n>> is not requested) come from, TBH. Intuitively, I'd hope there's a way to\n>> eliminate that entirely, and only pay the cost when requested (with the\n>> expectation that it's cheaper than freezing it that later).\n> \n> I'd like to see a profile comparison between those two cases. Best with\n> both profiles done in master, just once with the freeze path disabled...\n> \n\nOK. I'm mostly afk at the moment, I'll do that once I get back home, \nsometime over the weekend / maybe early next week.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 11 May 2021 19:35:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 2021-May-11, Michael Paquier wrote:\n\n> Hmm. Is that really something we should do after feature freeze? A\n> 25% degradation for matview refresh may be a problem for a lot of\n> users and could be an upgrade stopper. Another thing we could do is\n> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n> proper solution for this performance problem for matviews for 15~.\n> \n> Thoughts?\n\nMy main thought while reading this thread is about the rules of feature\nfreeze. I mean, we are indeed in feature freeze, so no new features\nshould be added. But that doesn't mean we are in code freeze. For the\nperiod starting now and until RC (which is a couple of months away\nstill) we should focus on ensuring that the features we do have are in\nas good a shape as possible. If that means adding more code to fix\nproblems/bugs/performance problems in the existing code, so be it.\nI mean, reverting is not the only tool we have.\n\nYes, reverting has its place. Moreover, threats of reversion have their\nplace. People should definitely be working towards finding solutions to\nthe problems in their commits lest they be reverted. However, freezing\n*people* by saying that no fixes are acceptable other than reverts ...\nis not good.\n\nSo I agree with what Andres is saying downthread: let's apply the fix he\nproposed (it's not even that invasive anyway), and investigate the\nremaining 5% and see if we can find a solution. If by the end of the\nbeta process we can definitely find no solution to the problem, we can\nrevert the whole lot then.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Tue, 11 May 2021 14:23:08 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn 5/11/21 2:23 PM, Alvaro Herrera wrote:\n> On 2021-May-11, Michael Paquier wrote:\n>\n>> Hmm. Is that really something we should do after feature freeze? A\n>> 25% degradation for matview refresh may be a problem for a lot of\n>> users and could be an upgrade stopper. Another thing we could do is\n>> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n>> proper solution for this performance problem for matviews for 15~.\n>>\n>> Thoughts?\n> My main thought while reading this thread is about the rules of feature\n> freeze. I mean, we are indeed in feature freeze, so no new features\n> should be added. But that doesn't mean we are in code freeze. For the\n> period starting now and until RC (which is a couple of months away\n> still) we should focus on ensuring that the features we do have are in\n> as good a shape as possible. If that means adding more code to fix\n> problems/bugs/performance problems in the existing code, so be it.\n> I mean, reverting is not the only tool we have.\n>\n> Yes, reverting has its place. Moreover, threats of reversion have their\n> place. People should definitely be working towards finding solutions to\n> the problems in their commits lest they be reverted. However, freezing\n> *people* by saying that no fixes are acceptable other than reverts ...\n> is not good.\n>\n> So I agree with what Andres is saying downthread: let's apply the fix he\n> proposed (it's not even that invasive anyway), and investigate the\n> remaining 5% and see if we can find a solution. If by the end of the\n> beta process we can definitely find no solution to the problem, we can\n> revert the whole lot then.\n>\n\n\nI agree with all of this. Right now I'm only concerned if there isn't\nwork apparently being done on some issue.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 11 May 2021 14:46:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, May 11, 2021 at 02:46:35PM -0400, Andrew Dunstan wrote:\n> On 5/11/21 2:23 PM, Alvaro Herrera wrote:\n>> Yes, reverting has its place. Moreover, threats of reversion have their\n>> place. People should definitely be working towards finding solutions to\n>> the problems in their commits lest they be reverted. However, freezing\n>> *people* by saying that no fixes are acceptable other than reverts ...\n>> is not good.\n\nWell, that's an option on the table and a possibility, so I am listing\nit as a possible exit path as a potential solution, as much as a\ndifferent optimization is another exit path to take care of this item\n:)\n\n>> So I agree with what Andres is saying downthread: let's apply the fix he\n>> proposed (it's not even that invasive anyway), and investigate the\n>> remaining 5% and see if we can find a solution. If by the end of the\n>> beta process we can definitely find no solution to the problem, we can\n>> revert the whole lot then.\n> \n> I agree with all of this. Right now I'm only concerned if there isn't\n> work apparently being done on some issue.\n\nIf that's the consensus reached, that's fine by me as long as we don't\nkeep a 25% performance regression. Now, looking at the patch\nproposed, I have to admit that this looks like some redesign of an\nexisting feature, so that stresses me a bit in a period when we are\naiming at making things stable, because this has a risk of making a\npart of the code more unstable. And I've had my share of calls over\nthe last years in such situations, not only with Postgres, FWIW, so I\nmay just sound like a conservative guy with a conservative hat.\n--\nMichael",
"msg_date": "Thu, 13 May 2021 11:12:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-13 11:12:43 +0900, Michael Paquier wrote:\n> If that's the consensus reached, that's fine by me as long as we don't\n> keep a 25% performance regression. Now, looking at the patch\n> proposed, I have to admit that this looks like some redesign of an\n> existing feature, so that stresses me a bit in a period when we are\n> aiming at making things stable, because this has a risk of making a\n> part of the code more unstable.\n\nYou're referencing tracking the vm page in the bulk insert state? I\ndon't see how you get a less invasive fix that's not architecturally\nworse than this. If that's over your level of comfort, I don't see an\nalternative but to revert. But I also don't think it's particularly\ninvasive?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 May 2021 19:16:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Tue, May 11, 2021 at 11:46 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Yes, reverting has its place. Moreover, threats of reversion have their\n> > place. People should definitely be working towards finding solutions to\n> > the problems in their commits lest they be reverted. However, freezing\n> > *people* by saying that no fixes are acceptable other than reverts ...\n> > is not good.\n> >\n> > So I agree with what Andres is saying downthread: let's apply the fix he\n> > proposed (it's not even that invasive anyway), and investigate the\n> > remaining 5% and see if we can find a solution. If by the end of the\n> > beta process we can definitely find no solution to the problem, we can\n> > revert the whole lot then.\n> >\n>\n>\n> I agree with all of this. Right now I'm only concerned if there isn't\n> work apparently being done on some issue.\n\n+1. While reverting a patch is always on the table, it must be the\noption of last resort. I don't have any specific reason to believe\nthat that's the point we're at just yet.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 12 May 2021 19:47:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, May 12, 2021 at 2:32 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 5/11/21 5:56 PM, Masahiko Sawada wrote:\n> > On Tue, May 11, 2021 at 11:07 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n> >>> On Tue, May 11, 2021 at 4:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >>>>\n> >>>> On Wed, May 05, 2021 at 03:04:53PM +0200, Tomas Vondra wrote:\n> >>>>> Thanks, that looks promising. I repeated the tests I did on 26/4, and the\n> >>>>> results look like this:\n> >>>>>\n> >>>>> old (0c7d3bb99): 497ms\n> >>>>> master: 621ms\n> >>>>> patched: 531ms\n> >>>>>\n> >>>>> So yeah, that's a bit improvement - it does not remove the regression\n> >>>>> entirely, but +5% is much better than +25%.\n> >>>>\n> >>>> Hmm. Is that really something we should do after feature freeze? A\n> >>>> 25% degradation for matview refresh may be a problem for a lot of\n> >>>> users and could be an upgrade stopper. Another thing we could do is\n> >>>> also to revert 7db0cd2 and 39b66a9 from the v14 tree, and work on a\n> >>>> proper solution for this performance problem for matviews for 15~.\n> >>>\n> >>> I think the approach proposed by Andres eliminates the extra vmbuffer\n> >>> reads as much as possible. But even with the patch, there still is 5%\n> >>> degradation (and there is no way to disable inserting frozen tuples at\n> >>> matview refresh). Which could be a problem for some users. I think\n> >>> it’s hard to completely eliminate the overhead so we might need to\n> >>> consider another approach like having matview refresh use\n> >>> heap_multi_insert() instead of heap_insert().\n> >>>\n> >>\n> >> I think it's way too late to make such significant change (switching to\n> >> heap_multi_insert) for v14 :-(\n> >\n> > Right.\n> >\n> >> Moreover, I doubt it affects just matview\n> >> refresh - why wouldn't it affect other similar use cases? 
More likely\n> >> it's just the case that was discovered.\n> >\n> > I've not tested yet but I guess COPY FROM … FREEZE using heap_insert\n> > would similarly be affected since it also uses heap_insert() with\n> > TABLE_INSERT_FROZEN.\n> >\n>\n> I'd say that's somewhat acceptable, as it's a trade-off between paying a\n> bit of time during COPY vs. paying much more later (when freezing the\n> rows eventually).\n>\n> From my POV the problem here is we've not asked to freeze the rows\n> (unless I'm missing something and REFRESH freezes them?), but it's still\n> a bit slower. However, 5% might also be just noise due to changes in\n> layout of the binary.\n>\n> >>\n> >>> I think the changes for heap_multi_insert() are fine so we can revert\n> >>> only heap_insert() part if we revert something from the v14 tree,\n> >>> although we will end up not inserting frozen tuples into toast tables.\n> >>>\n> >>\n> >> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n> >> that we freeze rows in the main table but not rows in the TOAST tables\n> >> (that was kinda why we concluded we need the heap_insert part too).\n> >>\n> >> I'm still a bit puzzled where does the extra overhead (in cases when\n> >> freeze is not requested) come from, TBH.\n> >\n> > Which cases do you mean? Doesn't matview refresh always request to\n> > freeze tuples even after applying the patch proposed on this thread?\n> >\n>\n> Oh, I didn't realize that! That'd make this much less of an issue, I'd\n> say, because if we're intentionally freezing the rows it's reasonable to\n> pay a bit of time (in exchange for not having to do it later). The\n> original +25% was a bit too much, of course, but +5% seems reasonable.\n\nYes. It depends on how much the matview refresh gets slower but I\nthink the problem here is that users always are forced to pay the cost\nfor freezing tuple during refreshing the matview. 
There is no way to\ndisable it unlike FREEZE option of COPY command.\n\nI’ve done benchmarks for matview refresh on my machine (FreeBSD 12.1,\nAMD Ryzen 5 PRO 3400GE, 24GB RAM) with four codes: HEAD, HEAD +\nAndres’s patch, one before 39b66a91b, and HEAD without\nTABLE_INSERT_FROZEN.\n\nThe workload is to refresh the matview that simply selects 50M tuples\n(about 1.7 GB). Here are the average execution times of three trials\nfor each code:\n\n1) head: 42.263 sec\n2) head w/ Andres’s patch: 40.194 sec\n3) before 39b66a91b commit: 38.143 sec\n4) head w/o freezing tuples: 32.413 sec\n\nI also observed 5% degradation by comparing 1 and 2 but am not sure\nwhere the overhead came from. I agree with Andres’s proposal. It’s a\nstraightforward approach. I think it’s a reasonable degradation\ncompared to the cost of freezing tuples later. But I’m a bit concerned\nabout whether it’s reasonable to force all users to pay the cost\nduring matview refresh without any choice. So we need to find the\nremaining differences after applying a polished version of the patch.\n\nFYI I’ve attached flame graphs for each evaluation. Looking at\n1_head.svg, we can see the CPU spent much time in visibilitymap_pin() and\nit disappeared in 2_head_w_Andreas_patch.svg. There is no big\ndifference at a glance between 2_head_w_Andreas_patch.svg and\n3_before_39b66a91b.svg.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 18 May 2021 11:20:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
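A minimal arithmetic sketch putting numbers on the "5% degradation" discussed in the message above, using the four averaged timings quoted there:

```python
# Quantifying the relative slowdowns from the averaged timings in the message
# above (values copied verbatim from Sawada-san's benchmark).
timings = {
    "head": 42.263,
    "head + Andres's patch": 40.194,
    "before 39b66a91b": 38.143,
    "head w/o freezing": 32.413,
}

base = timings["head + Andres's patch"]
# head vs. patched: the residual regression the patch does not remove
print(f"head vs. patched: {timings['head'] / base - 1:.1%}")
# patched vs. pre-39b66a91b: the remaining gap discussed in the thread
print(f"patched vs. before 39b66a91b: {base / timings['before 39b66a91b'] - 1:.1%}")
```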
{
"msg_contents": "Hi,\n\nOn 2021-05-18 11:20:07 +0900, Masahiko Sawada wrote:\n> Yes. It depends on how much the matview refresh gets slower but I\n> think the problem here is that users always are forced to pay the cost\n> for freezing tuple during refreshing the matview. There is no way to\n> disable it unlike FREEZE option of COPY command.\n> \n> I’ve done benchmarks for matview refresh on my machine (FreeBSD 12.1,\n> AMD Ryzen 5 PRO 3400GE, 24GB RAM) with four codes: HEAD, HEAD +\n> Andres’s patch, one before 39b66a91b, and HEAD without\n> TABLE_INSERT_FROZEN.\n> \n> The workload is to refresh the matview that simply selects 50M tuples\n> (about 1.7 GB). Here are the average execution times of three trials\n> for each code:\n> \n> 1) head: 42.263 sec\n> 2) head w/ Andres’s patch: 40.194 sec\n> 3) before 39b66a91b commit: 38.143 sec\n> 4) head w/o freezing tuples: 32.413 sec\n\nI don't see such a big difference between andres-freeze/non-freeze. Is\nthere any chance there's some noise in there? I found that I need to\ndisable autovacuum and ensure that there's a checkpoint just before the\nREFRESH to get halfway meaningful numbers, as well as a min/max_wal_size\nensuring that only recycled WAL is used.\n\n\n> I also observed 5% degradation by comparing 1 and 2 but am not sure\n> where the overhead came from. I agree with Andres’s proposal. 
It’s a\n> straightforward approach.\n\nWhat degradation are you referencing here?\n\n\nI compared your case 2 with 4 - as far as I can see the remaining\nperformance difference is from the the difference in WAL records\nemitted:\n\nfreeze-andres:\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nXLOG/CHECKPOINT_ONLINE 1 ( 0.00) 114 ( 0.00) 0 ( 0.00) 114 ( 0.00)\nTransaction/COMMIT 1 ( 0.00) 949 ( 0.00) 0 ( 0.00) 949 ( 0.00)\nStorage/CREATE 1 ( 0.00) 42 ( 0.00) 0 ( 0.00) 42 ( 0.00)\nStandby/LOCK 3 ( 0.00) 138 ( 0.00) 0 ( 0.00) 138 ( 0.00)\nStandby/RUNNING_XACTS 2 ( 0.00) 104 ( 0.00) 0 ( 0.00) 104 ( 0.00)\nHeap2/VISIBLE 44248 ( 0.44) 2610642 ( 0.44) 16384 ( 14.44) 2627026 ( 0.44)\nHeap2/MULTI_INSERT 5 ( 0.00) 1125 ( 0.00) 6696 ( 5.90) 7821 ( 0.00)\nHeap/INSERT 9955755 ( 99.12) 587389836 ( 99.12) 5128 ( 4.52) 587394964 ( 99.10)\nHeap/DELETE 13 ( 0.00) 702 ( 0.00) 0 ( 0.00) 702 ( 0.00)\nHeap/UPDATE 2 ( 0.00) 202 ( 0.00) 0 ( 0.00) 202 ( 0.00)\nHeap/HOT_UPDATE 1 ( 0.00) 65 ( 0.00) 4372 ( 3.85) 4437 ( 0.00)\nHeap/INSERT+INIT 44248 ( 0.44) 2610632 ( 0.44) 0 ( 0.00) 2610632 ( 0.44)\nBtree/INSERT_LEAF 33 ( 0.00) 2030 ( 0.00) 80864 ( 71.28) 82894 ( 0.01)\n -------- -------- -------- --------\nTotal 10044313 592616581 [99.98%] 113444 [0.02%] 592730025 [100%]\n\nnofreeze:\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nXLOG/NEXTOID 1 ( 0.00) 30 ( 0.00) 0 ( 0.00) 30 ( 0.00)\nTransaction/COMMIT 1 ( 0.00) 949 ( 0.00) 0 ( 0.00) 949 ( 0.00)\nStorage/CREATE 1 ( 0.00) 42 ( 0.00) 0 ( 0.00) 42 ( 0.00)\nStandby/LOCK 3 ( 0.00) 138 ( 0.00) 0 ( 0.00) 138 ( 0.00)\nStandby/RUNNING_XACTS 1 ( 0.00) 54 ( 0.00) 0 ( 0.00) 54 ( 0.00)\nHeap2/MULTI_INSERT 5 ( 0.00) 1125 ( 0.00) 7968 ( 7.32) 9093 ( 0.00)\nHeap/INSERT 9955755 ( 99.56) 587389836 ( 99.56) 5504 ( 5.06) 587395340 ( 99.54)\nHeap/DELETE 13 ( 0.00) 702 ( 0.00) 0 ( 0.00) 702 ( 0.00)\nHeap/UPDATE 2 ( 
0.00) 202 ( 0.00) 0 ( 0.00) 202 ( 0.00)\nHeap/HOT_UPDATE 1 ( 0.00) 65 ( 0.00) 5076 ( 4.67) 5141 ( 0.00)\nHeap/INSERT+INIT 44248 ( 0.44) 2610632 ( 0.44) 0 ( 0.00) 2610632 ( 0.44)\nBtree/INSERT_LEAF 32 ( 0.00) 1985 ( 0.00) 73476 ( 67.54) 75461 ( 0.01)\nBtree/INSERT_UPPER 1 ( 0.00) 61 ( 0.00) 1172 ( 1.08) 1233 ( 0.00)\nBtree/SPLIT_L 1 ( 0.00) 1549 ( 0.00) 7480 ( 6.88) 9029 ( 0.00)\nBtree/DELETE 1 ( 0.00) 59 ( 0.00) 8108 ( 7.45) 8167 ( 0.00)\nBtree/REUSE_PAGE 1 ( 0.00) 50 ( 0.00) 0 ( 0.00) 50 ( 0.00)\n -------- -------- -------- --------\nTotal 10000067 590007479 [99.98%] 108784 [0.02%] 590116263 [100%]\n\nI.e. the additional Heap2/VISIBLE records show up.\n\nIt's not particularly surprising that emitting an additional WAL record\nfor every page isn't free. It's particularly grating / unnecessary\nbecause this is the REGBUF_WILL_INIT path - it's completely unnecessary\nto emit a separate record.\n\nI dimly remember that we explicitly discussed that we do *not* want to\nemit WAL records here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 May 2021 11:08:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
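A quick sanity check on the pg_waldump `--stats` figures quoted above: the extra Heap2/VISIBLE records in the freeze case account for roughly 0.44% of the WAL volume, which matches both the per-type percentage in the table and the growth of the total WAL stream between the two runs (values copied from the message):

```python
# Cross-checking the pg_waldump --stats tables quoted in the message above.
visible_combined = 2_627_026    # Heap2/VISIBLE combined size, freeze case
total_freeze     = 592_730_025  # total WAL volume, freeze case
total_nofreeze   = 590_116_263  # total WAL volume, nofreeze case

# Share of WAL taken by the extra Heap2/VISIBLE records
print(f"{visible_combined / total_freeze:.2%}")
# Net growth of the WAL stream between the two runs
print(f"{(total_freeze - total_nofreeze) / total_nofreeze:.2%}")
```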
{
"msg_contents": "On 5/18/21 4:20 AM, Masahiko Sawada wrote:\n > ...\n>>>>\n>>>>> I think the changes for heap_multi_insert() are fine so we can revert\n>>>>> only heap_insert() part if we revert something from the v14 tree,\n>>>>> although we will end up not inserting frozen tuples into toast tables.\n>>>>>\n>>>>\n>>>> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n>>>> that we freeze rows in the main table but not rows in the TOAST tables\n>>>> (that was kinda why we concluded we need the heap_insert part too).\n>>>>\n>>>> I'm still a bit puzzled where does the extra overhead (in cases when\n>>>> freeze is not requested) come from, TBH.\n>>>\n>>> Which cases do you mean? Doesn't matview refresh always request to\n>>> freeze tuples even after applying the patch proposed on this thread?\n>>>\n>>\n>> Oh, I didn't realize that! That'd make this much less of an issue, I'd\n>> say, because if we're intentionally freezing the rows it's reasonable to\n>> pay a bit of time (in exchange for not having to do it later). The\n>> original +25% was a bit too much, of course, but +5% seems reasonable.\n> \n> Yes. It depends on how much the matview refresh gets slower but I\n> think the problem here is that users always are forced to pay the cost\n> for freezing tuple during refreshing the matview. There is no way to\n> disable it unlike FREEZE option of COPY command.\n> \n\nYeah, I see your point. I agree it's unfortunate there's no way to \ndisable freezing during REFRESH MV. For most users that trade-off is \nprobably fine, but for some cases (matviews refreshed often, or cases \nwhere it's fine to pay more but later) it may be an issue.\n\n From this POV, however, it may not be enough to optimize the current \nfreezing code - it's always going to be a bit slower than before. 
So the \nonly *real* solution may be adding a FREEZE option to the REFRESH \nMATERIALIZED VIEW command.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 18 May 2021 20:34:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\n\nOn 5/18/21 8:08 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-05-18 11:20:07 +0900, Masahiko Sawada wrote:\n>> Yes. It depends on how much the matview refresh gets slower but I\n>> think the problem here is that users always are forced to pay the cost\n>> for freezing tuple during refreshing the matview. There is no way to\n>> disable it unlike FREEZE option of COPY command.\n>>\n>> I’ve done benchmarks for matview refresh on my machine (FreeBSD 12.1,\n>> AMD Ryzen 5 PRO 3400GE, 24GB RAM) with four codes: HEAD, HEAD +\n>> Andres’s patch, one before 39b66a91b, and HEAD without\n>> TABLE_INSERT_FROZEN.\n>>\n>> The workload is to refresh the matview that simply selects 50M tuples\n>> (about 1.7 GB). Here are the average execution times of three trials\n>> for each code:\n>>\n>> 1) head: 42.263 sec\n>> 2) head w/ Andres’s patch: 40.194 sec\n>> 3) before 39b66a91b commit: 38.143 sec\n>> 4) head w/o freezing tuples: 32.413 sec\n> \n> I don't see such a big difference between andres-freeze/non-freeze. Is\n> there any chance there's some noise in there? I found that I need to\n> disable autovacuum and ensure that there's a checkpoint just before the\n> REFRESH to get halfway meaningful numbers, as well as a min/max_wal_size\n> ensuring that only recycled WAL is used.\n> \n> \n>> I also observed 5% degradation by comparing 1 and 2 but am not sure\n>> where the overhead came from. I agree with Andres’s proposal. 
It’s a\n>> straightforward approach.\n> \n> What degradation are you referencing here?\n> \n> \n> I compared your case 2 with 4 - as far as I can see the remaining\n> performance difference is from the the difference in WAL records\n> emitted:\n> \n> freeze-andres:\n> \n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> XLOG/CHECKPOINT_ONLINE 1 ( 0.00) 114 ( 0.00) 0 ( 0.00) 114 ( 0.00)\n> Transaction/COMMIT 1 ( 0.00) 949 ( 0.00) 0 ( 0.00) 949 ( 0.00)\n> Storage/CREATE 1 ( 0.00) 42 ( 0.00) 0 ( 0.00) 42 ( 0.00)\n> Standby/LOCK 3 ( 0.00) 138 ( 0.00) 0 ( 0.00) 138 ( 0.00)\n> Standby/RUNNING_XACTS 2 ( 0.00) 104 ( 0.00) 0 ( 0.00) 104 ( 0.00)\n> Heap2/VISIBLE 44248 ( 0.44) 2610642 ( 0.44) 16384 ( 14.44) 2627026 ( 0.44)\n> Heap2/MULTI_INSERT 5 ( 0.00) 1125 ( 0.00) 6696 ( 5.90) 7821 ( 0.00)\n> Heap/INSERT 9955755 ( 99.12) 587389836 ( 99.12) 5128 ( 4.52) 587394964 ( 99.10)\n> Heap/DELETE 13 ( 0.00) 702 ( 0.00) 0 ( 0.00) 702 ( 0.00)\n> Heap/UPDATE 2 ( 0.00) 202 ( 0.00) 0 ( 0.00) 202 ( 0.00)\n> Heap/HOT_UPDATE 1 ( 0.00) 65 ( 0.00) 4372 ( 3.85) 4437 ( 0.00)\n> Heap/INSERT+INIT 44248 ( 0.44) 2610632 ( 0.44) 0 ( 0.00) 2610632 ( 0.44)\n> Btree/INSERT_LEAF 33 ( 0.00) 2030 ( 0.00) 80864 ( 71.28) 82894 ( 0.01)\n> -------- -------- -------- --------\n> Total 10044313 592616581 [99.98%] 113444 [0.02%] 592730025 [100%]\n> \n> nofreeze:\n> \n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> XLOG/NEXTOID 1 ( 0.00) 30 ( 0.00) 0 ( 0.00) 30 ( 0.00)\n> Transaction/COMMIT 1 ( 0.00) 949 ( 0.00) 0 ( 0.00) 949 ( 0.00)\n> Storage/CREATE 1 ( 0.00) 42 ( 0.00) 0 ( 0.00) 42 ( 0.00)\n> Standby/LOCK 3 ( 0.00) 138 ( 0.00) 0 ( 0.00) 138 ( 0.00)\n> Standby/RUNNING_XACTS 1 ( 0.00) 54 ( 0.00) 0 ( 0.00) 54 ( 0.00)\n> Heap2/MULTI_INSERT 5 ( 0.00) 1125 ( 0.00) 7968 ( 7.32) 9093 ( 0.00)\n> Heap/INSERT 9955755 ( 99.56) 587389836 ( 99.56) 5504 ( 5.06) 587395340 ( 
99.54)\n> Heap/DELETE 13 ( 0.00) 702 ( 0.00) 0 ( 0.00) 702 ( 0.00)\n> Heap/UPDATE 2 ( 0.00) 202 ( 0.00) 0 ( 0.00) 202 ( 0.00)\n> Heap/HOT_UPDATE 1 ( 0.00) 65 ( 0.00) 5076 ( 4.67) 5141 ( 0.00)\n> Heap/INSERT+INIT 44248 ( 0.44) 2610632 ( 0.44) 0 ( 0.00) 2610632 ( 0.44)\n> Btree/INSERT_LEAF 32 ( 0.00) 1985 ( 0.00) 73476 ( 67.54) 75461 ( 0.01)\n> Btree/INSERT_UPPER 1 ( 0.00) 61 ( 0.00) 1172 ( 1.08) 1233 ( 0.00)\n> Btree/SPLIT_L 1 ( 0.00) 1549 ( 0.00) 7480 ( 6.88) 9029 ( 0.00)\n> Btree/DELETE 1 ( 0.00) 59 ( 0.00) 8108 ( 7.45) 8167 ( 0.00)\n> Btree/REUSE_PAGE 1 ( 0.00) 50 ( 0.00) 0 ( 0.00) 50 ( 0.00)\n> -------- -------- -------- --------\n> Total 10000067 590007479 [99.98%] 108784 [0.02%] 590116263 [100%]\n> \n> I.e. the additional Heap2/VISIBLE records show up.\n> \n> It's not particularly surprising that emitting an additional WAL record\n> for every page isn't free. It's particularly grating / unnecessary\n> because this is the REGBUF_WILL_INIT path - it's completely unnecessary\n> to emit a separate record.\n> \n\nYeah, emitting WAL is not exactly cheap, although it's just a little bit \nmore (0.44%). I haven't looked into the details, but I wonder why it has \nsuch disproportionate impact (although, the 32 vs. 40 sec may be off).\n\n> I dimly remember that we explicitly discussed that we do *not* want to\n> emit WAL records here?\n> \n\nUmmm, in which thread?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 18 May 2021 20:43:41 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-18 20:34:08 +0200, Tomas Vondra wrote:\n> Yeah, I see your point. I agree it's unfortunate there's no way to disable\n> freezing during REFRESH MV. For most users that trade-off is probably fine,\n> but for some cases (matviews refreshed often, or cases where it's fine to\n> pay more but later) it may be an issue.\n>\n> From this POV, however, it may not be enough to optimize the current\n> freezing code - it's always going to be a bit slower than before.\n\nBut the intrinsic overhead is *tiny*. Setting a few bits, with the other\ncosts amortized over a lot of pages. As far as I can tell the measurable\noverhead is the increased WAL logging - which is not necessary.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 May 2021 11:44:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-18 20:43:41 +0200, Tomas Vondra wrote:\n> Yeah, emitting WAL is not exactly cheap, although it's just a little bit\n> more (0.44%). I haven't looked into the details, but I wonder why it has\n> such disproportionate impact (although, the 32 vs. 40 sec may be off).\n\nI couldn't reproduce this large a performance difference - I saw more\nlike 10% instead of 25%.\n\n\n> > I dimly remember that we explicitly discussed that we do *not* want to\n> > emit WAL records here?\n\n> Ummm, in which thread?\n\nhttps://postgr.es/m/20190408010427.4l63qr7h2fjcyp77%40alap3.anarazel.de\n\nOn 2019-04-07 18:04:27 -0700, Andres Freund wrote:\n> This avoids an extra WAL record for setting empty pages to all visible,\n> by adding XLH_INSERT_ALL_VISIBLE_SET & XLH_INSERT_ALL_FROZEN_SET, and\n> setting those when appropriate in heap_multi_insert. Unfortunately\n> currently visibilitymap_set() doesn't really properly allow to do this,\n> as it has embedded WAL logging for heap.\n>\n> I think we should remove the WAL logging from visibilitymap_set(), and\n> move it to a separate, heap specific, function.\n\nIt'd probably be sufficient for the current purpose to change\nvisibilitymap_set()'s documentation to say that recptr can also be\npassed in if the action is already covered by a WAL record, and that\nit's the caller's responsibility to think through the correctness\nissues. Here it's easy, because any error will just throw the relation\naway.\n\nWe do need to include all-visible / FSM change in the WAL, so\ncrash-recovery / standbys end up with the same result as a primary\nrunning normally. We already have the information, via\nXLH_INSERT_ALL_FROZEN_SET. I think all we need to do is to add a\nvisibilitymap_set() in the redo routines if XLH_INSERT_ALL_FROZEN_SET.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 May 2021 11:57:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Wed, May 19, 2021 at 3:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-05-18 11:20:07 +0900, Masahiko Sawada wrote:\n> > Yes. It depends on how much the matview refresh gets slower but I\n> > think the problem here is that users always are forced to pay the cost\n> > for freezing tuple during refreshing the matview. There is no way to\n> > disable it unlike FREEZE option of COPY command.\n> >\n> > I’ve done benchmarks for matview refresh on my machine (FreeBSD 12.1,\n> > AMD Ryzen 5 PRO 3400GE, 24GB RAM) with four codes: HEAD, HEAD +\n> > Andres’s patch, one before 39b66a91b, and HEAD without\n> > TABLE_INSERT_FROZEN.\n> >\n> > The workload is to refresh the matview that simply selects 50M tuples\n> > (about 1.7 GB). Here are the average execution times of three trials\n> > for each code:\n> >\n> > 1) head: 42.263 sec\n> > 2) head w/ Andres’s patch: 40.194 sec\n> > 3) before 39b66a91b commit: 38.143 sec\n> > 4) head w/o freezing tuples: 32.413 sec\n>\n> I don't see such a big difference between andres-freeze/non-freeze. Is\n> there any chance there's some noise in there? I found that I need to\n> disable autovacuum and ensure that there's a checkpoint just before the\n> REFRESH to get halfway meaningful numbers, as well as a min/max_wal_size\n> ensuring that only recycled WAL is used.\n\nI've run the same benchmarks with the following parameters:\n\nshared_buffers = 10GB\nmax_wal_size = 50GB\nmin_wal_size = 50GB\ncheckpoint_timeout = 1h\nmaintenance_work_mem = 1GB\nwork_mem = 512MB\nautovacuum = off\n\n1) head: 42.397 sec\n2) head w/ Andres’s patch: 34.857 sec\n3) before 39b66a91b commit: 32.556 sec\n4) head w/o freezing tuples: 32.752 sec\n\nThere is 6% degradation between 2 and 4 but 2 is much better than the\nprevious tests.\n\n>\n>\n> > I also observed 5% degradation by comparing 1 and 2 but am not sure\n> > where the overhead came from. I agree with Andres’s proposal. 
It’s a\n> > straightforward approach.\n>\n> What degradation are you referencing here?\n\nSorry, I meant comparing 2 to 3 and 4.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 19 May 2021 11:56:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 5/11/21 7:35 PM, Tomas Vondra wrote:\n> \n> \n> On 5/11/21 7:25 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-05-11 16:07:44 +0200, Tomas Vondra wrote:\n>>> On 5/11/21 11:04 AM, Masahiko Sawada wrote:\n>>>> I think the changes for heap_multi_insert() are fine so we can revert\n>>>> only heap_insert() part if we revert something from the v14 tree,\n>>>> although we will end up not inserting frozen tuples into toast tables.\n>>>>\n>>>\n>>> I'd be somewhat unhappy about reverting just this bit, because it'd mean\n>>> that we freeze rows in the main table but not rows in the TOAST \n>>> tables (that\n>>> was kinda why we concluded we need the heap_insert part too).\n>>\n>> Is there a reason not to apply a polished version of my proposal? And\n>> then to look at the remaining difference?\n>>\n> \n> Probably not, I was just a little bit confused what exactly is going on, \n> unsure what to do about it. But if RMV freezes the rows, that probably \n> explains it and your patch is the way to go.\n> \n>>\n>>> I'm still a bit puzzled where does the extra overhead (in cases when \n>>> freeze\n>>> is not requested) come from, TBH. Intuitively, I'd hope there's a way to\n>>> eliminate that entirely, and only pay the cost when requested (with the\n>>> expectation that it's cheaper than freezing it that later).\n>>\n>> I'd like to see a profile comparison between those two cases. Best with\n>> both profiles done in master, just once with the freeze path disabled...\n>>\n> \n> OK. I'm mostly afk at the moment, I'll do that once I get back home, \n> sometime over the weekend / maybe early next week.\n> \n\nOK, so here are the flamegraphs, for all three cases - current master, \n0c7d3bb99 (i.e. before heap_insert changes) and with the pinning patch \napplied. I did this using the same test case as before (50M table), but \nwith -fno-omit-frame-pointer to get better profiles. 
It may add some \noverhead, but hopefully that applies to all cases equally.\n\nThe first 10 runs for each case look like this:\n\n old master patched\n ----------------------\n 55045 74284 58246\n 53927 74283 57273\n 54090 74114 57336\n 54194 74059 57223\n 54189 74186 57287\n 54090 74113 57278\n 54095 74036 57176\n 53896 74215 57303\n 54101 74060 57524\n 54062 74021 57278\n ----------------------\n 54168 74137 57392\n 1.36x 1.05x\n\nwhich is mostly in line with previous findings (the master overhead is a \nbit worse, possibly due to the frame pointers).\n\nAttached are the flame graphs for all three cases. The change in master \nis pretty clearly visible, but I don't see any clear difference between \nold and patched code :-(\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 21 May 2021 18:17:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
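The per-branch means and slowdown ratios in the table above can be recomputed directly from the ten per-run timings quoted in the message (a small sketch; the figures in the table are truncated, not rounded):

```python
# Recomputing the means and slowdown ratios from the ten runs listed in the
# message above (numbers copied verbatim).
old     = [55045, 53927, 54090, 54194, 54189, 54090, 54095, 53896, 54101, 54062]
master  = [74284, 74283, 74114, 74059, 74186, 74113, 74036, 74215, 74060, 74021]
patched = [58246, 57273, 57336, 57223, 57287, 57278, 57176, 57303, 57524, 57278]

mean = lambda xs: sum(xs) / len(xs)

# Means, truncated as in the table (54168 / 74137 / 57392)
print(int(mean(old)), int(mean(master)), int(mean(patched)))
# Slowdown relative to the old code, truncated to two decimals (1.36x / 1.05x)
print(int(100 * mean(master) / mean(old)) / 100,
      int(100 * mean(patched) / mean(old)) / 100)
```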
{
"msg_contents": "Hi,\n\nOn 2021-05-21 18:17:01 +0200, Tomas Vondra wrote:\n> OK, so here are the flamegraphs, for all three cases - current master,\n> 0c7d3bb99 (i.e. before heap_insert changes) and with the pinning patch\n> applied. I did this using the same test case as before (50M table), but with\n> -fno-omit-frame-pointer to get better profiles. It may add some overhead,\n> but hopefully that applies to all cases equally.\n> \n> The first 10 runs for each case look like this:\n> \n> old master patched\n> ----------------------\n> 55045 74284 58246\n> 53927 74283 57273\n> 54090 74114 57336\n> 54194 74059 57223\n> 54189 74186 57287\n> 54090 74113 57278\n> 54095 74036 57176\n> 53896 74215 57303\n> 54101 74060 57524\n> 54062 74021 57278\n> ----------------------\n> 54168 74137 57392\n> 1.36x 1.05x\n> \n> which is mostly in line with previous findings (the master overhead is a bit\n> worse, possibly due to the frame pointers).\n> \n> Attached are the flame graphs for all three cases. The change in master is\n> pretty clearly visible, but I don't see any clear difference between old and\n> patched code :-(\n\nI'm pretty sure it's the additional WAL records?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 May 2021 09:43:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 5/21/21 6:43 PM, Andres Freund wrote:\n> Hi,\n>\n > ...\n >\n>> Attached are the flame graphs for all three cases. The change in master is\n>> pretty clearly visible, but I don't see any clear difference between old and\n>> patched code :-(\n> \n> I'm pretty sure it's the additional WAL records?\n> \n\nNot sure. If I understand what you suggested elsewhere in the thread, it \nshould be fine to modify heap_insert to pass the page recptr to \nvisibilitymap_set, roughly per the attached patch.\n\nI'm not sure it's correct, but it does eliminate the Heap2/VISIBILITY \nrecords for me (when applied on top of your patch). Funnily enough it \ndoes make it a wee bit slower:\n\npatch #1: 56941.505\npatch #2: 58099.788\n\nI wonder if this might be due to -fno-omit-frame-pointer, though, as \nwithout it I get these timings:\n\n0c7d3bb99: 25540.417\nmaster: 31868.236\npatch #1: 26566.199\npatch #2: 26487.943\n\nSo without the frame pointers there's no slowdown, but there's no clear \nimprovement after removal of the WAL records either :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 21 May 2021 20:10:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Sat, May 22, 2021 at 3:10 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/21/21 6:43 PM, Andres Freund wrote:\n> > Hi,\n> >\n> > ...\n> >\n> >> Attached are the flame graphs for all three cases. The change in master is\n> >> pretty clearly visible, but I don't see any clear difference between old and\n> >> patched code :-(\n> >\n> > I'm pretty sure it's the additional WAL records?\n> >\n>\n> Not sure. If I understand what you suggested elsewhere in the thread, it\n> should be fine to modify heap_insert to pass the page recptr to\n> visibilitymap_set, roughly per the attached patch.\n>\n> I'm not sure it's correct, but it does eliminate the Heap2/VISIBILITY\n> records for me (when applied on top of your patch). Funnily enough it\n> does make it a wee bit slower:\n>\n> patch #1: 56941.505\n> patch #2: 58099.788\n>\n> I wonder if this might be due to -fno-omit-frame-pointer, though, as\n> without it I get these timings:\n>\n> 0c7d3bb99: 25540.417\n> master: 31868.236\n> patch #1: 26566.199\n> patch #2: 26487.943\n>\n> So without the frame pointers there's no slowdown, but there's no clear\n> improvement after removal of the WAL records either :-(\n\nCan we verify that the additional WAL records are the cause of this\ndifference by making the matview unlogged by manually updating\nrelpersistence = 'u'?\n\nHere are the results of benchmarks with unlogged matviews on my environment:\n\n1) head: 22.927 sec\n2) head w/ Andres’s patch: 16.629 sec\n3) before 39b66a91b commit: 15.377 sec\n4) head w/o freezing tuples: 14.551 sec\n\nAnd here are the results of logged matviews ICYMI:\n\n1) head: 42.397 sec\n2) head w/ Andres’s patch: 34.857 sec\n3) before 39b66a91b commit: 32.556 sec\n4) head w/o freezing tuples: 32.752 sec\n\nThere seems no difference in the tendency. Which means the additional\nWAL is not the culprit?\n\nInterestingly, my previously proposed patch[1] was a better\nperformance. 
With the patch, we skip all VM-related work on all\ninsertions except for when inserting a tuple into a page for the first\ntime.\n\nlogged matviews: 31.591 sec\nunlogged matviews: 15.317 sec\n\n[1] https://www.postgresql.org/message-id/CAD21AoAaiPcgGRyJ7vpg05%3DNWqr6Vhaay_SEXyZBboQcZC8sFA%40mail.gmail.com\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 May 2021 16:53:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
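The "WAL is not the culprit" observation above can be made quantitative: comparing case 2 (patched) against case 4 (no freezing) for both the logged and unlogged runs, the relative overhead is actually larger without WAL, which points away from WAL volume as the cause. A small sketch using the figures from the message:

```python
# Relative overhead of freezing (case 2 vs. case 4) for the logged and
# unlogged benchmark runs quoted in the message above.
logged   = {"patched": 34.857, "nofreeze": 32.752}
unlogged = {"patched": 16.629, "nofreeze": 14.551}

for name, t in (("logged", logged), ("unlogged", unlogged)):
    overhead = t["patched"] / t["nofreeze"] - 1
    print(f"{name}: {overhead:.1%}")
```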
{
"msg_contents": "\n\nOn 5/24/21 9:53 AM, Masahiko Sawada wrote:\n> On Sat, May 22, 2021 at 3:10 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 5/21/21 6:43 PM, Andres Freund wrote:\n>>> Hi,\n>>>\n>> > ...\n>> >\n>>>> Attached are the flame graphs for all three cases. The change in master is\n>>>> pretty clearly visible, but I don't see any clear difference between old and\n>>>> patched code :-(\n>>>\n>>> I'm pretty sure it's the additional WAL records?\n>>>\n>>\n>> Not sure. If I understand what you suggested elsewhere in the thread, it\n>> should be fine to modify heap_insert to pass the page recptr to\n>> visibilitymap_set, roughly per the attached patch.\n>>\n>> I'm not sure it's correct, but it does eliminate the Heap2/VISIBILITY\n>> records for me (when applied on top of your patch). Funnily enough it\n>> does make it a wee bit slower:\n>>\n>> patch #1: 56941.505\n>> patch #2: 58099.788\n>>\n>> I wonder if this might be due to -fno-omit-frame-pointer, though, as\n>> without it I get these timings:\n>>\n>> 0c7d3bb99: 25540.417\n>> master: 31868.236\n>> patch #1: 26566.199\n>> patch #2: 26487.943\n>>\n>> So without the frame pointers there's no slowdown, but there's no clear\n>> improvement after removal of the WAL records either :-(\n> \n> Can we verify that the additional WAL records are the cause of this\n> difference by making the matview unlogged by manually updating\n> relpersistence = 'u'?\n> \n> Here are the results of benchmarks with unlogged matviews on my environment:\n> \n> 1) head: 22.927 sec\n> 2) head w/ Andres’s patch: 16.629 sec\n> 3) before 39b66a91b commit: 15.377 sec\n> 4) head w/o freezing tuples: 14.551 sec\n> \n> And here are the results of logged matviews ICYMI:\n> \n> 1) head: 42.397 sec\n> 2) head w/ Andres’s patch: 34.857 sec\n> 3) before 39b66a91b commit: 32.556 sec\n> 4) head w/o freezing tuples: 32.752 sec\n> \n> There seems no difference in the tendency. 
Which means the additional\n> WAL is not the culprit?\n> \n\nYeah, I agree the WAL does not seem to be the culprit here.\n\nThe patch I posted skips the WAL logging entirely (verified by \npg_waldump, although I have not mentioned that), and there's no clear \nimprovement. (FWIW I'm not sure the patch is 100% correct, but it does \neliminate the extra WAL.)\n\nThe patch however does not skip the whole visibilitymap_set, it still \ndoes the initial error checks. I wonder if that might play a role ...\n\nAnother option might be changes in the binary layout - 5% change is well \nwithin the range that could be attributed to this, but it feels very \nhand-wavy and more like an excuse than real analysis.\n\n> Interestingly, my previously proposed patch[1] was a better\n> performance. With the patch, we skip all VM-related work on all\n> insertions except for when inserting a tuple into a page for the first\n> time.\n> \n> logged matviews: 31.591 sec\n> unlogged matviews: 15.317 sec\n> \n\nHmmm, thanks for reminding us of that patch. Why did we reject that \napproach in favor of the current one?\n\nI think at this point we have these two options:\n\n1) Revert the freeze patches, either completely or just the heap_insert \npart, which is what seems to be causing issues. And try again in PG15, \nperhaps using a different approach, allow disabling freezing in refresh, \nor something like that.\n\n2) Polish and commit the pinning patch from Andres, which does reduce \nthe slowdown quite a bit. And either call it a day, or continue with the \ninvestigation / analysis regarding the remaining ~5% (but I personally \nhave no idea what might be the problem ...).\n\n\nI'd like to keep the improvement, but I find the 5% regression rather \nannoying and hard to defend, considering how much we fight for every \nlittle improvement.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 May 2021 12:37:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-24 12:37:18 +0200, Tomas Vondra wrote:\n> Another option might be changes in the binary layout - 5% change is well\n> within the range that could be attributed to this, but it feels very\n> hand-wavy and more like an excuse than real analysis.\n\nI don't think 5% is likely to be explained by binary layout unless you\nlook for an explicitly adverse layout.\n\n\n> Hmmm, thanks for reminding us that patch. Why did we reject that approach in\n> favor of the current one?\n\nDon't know about others, but I think it's way too fragile.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 May 2021 11:21:19 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On 5/24/21 8:21 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-05-24 12:37:18 +0200, Tomas Vondra wrote:\n>> Another option might be changes in the binary layout - 5% change is well\n>> within the range that could be attributed to this, but it feels very\n>> hand-wavy and more like an excuse than real analysis.\n> \n> I don't think 5% is likely to be explained by binary layout unless you\n> look for an explicitly adverse layout.\n> \n\nYeah, true. But I'm out of ideas what might be causing the regression\nand how to fix it :-(\n\n> \n>> Hmmm, thanks for reminding us that patch. Why did we reject that approach in\n>> favor of the current one?\n> \n> Don't know about others, but I think it's way too fragile.\n> \n\nIs it really that fragile? Any particular risks you have in mind? Maybe\nwe could protect against that somehow ... Anyway, that change would\ncertainly be for PG15.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 25 May 2021 00:30:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Hi,\n\nBased on the investigation and (lack of) progress so far, I'll revert\npart of the COPY FREEZE improvements shortly. I'll keep the initial\n7db0cd2145 changes, tweaking heap_multi_insert, and remove most of\n39b66a91bd (except for the heap_xlog_multi_insert bit).\n\nThis should address the small 5% regression in refresh matview. I have\nno other ideas how to fix that, short of adding a user-level option to\nREFRESH MATERIALIZED VIEW command so that the users can opt out/in.\n\nAttached is the revert patch - I'll get it committed in the next day or\ntwo, once the tests complete (running with CCA so it takes time).\n\nregards\n\nOn 5/25/21 12:30 AM, Tomas Vondra wrote:\n> On 5/24/21 8:21 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-05-24 12:37:18 +0200, Tomas Vondra wrote:\n>>> Another option might be changes in the binary layout - 5% change is well\n>>> within the range that could be attributed to this, but it feels very\n>>> hand-wavy and more like an excuse than real analysis.\n>>\n>> I don't think 5% is likely to be explained by binary layout unless you\n>> look for an explicitly adverse layout.\n>>\n> \n> Yeah, true. But I'm out of ideas what might be causing the regression\n> and how to fix it :-(\n> \n>>\n>>> Hmmm, thanks for reminding us that patch. Why did we reject that approach in\n>>> favor of the current one?\n>>\n>> Don't know about others, but I think it's way too fragile.\n>>\n> \n> Is it really that fragile? Any particular risks you have in mind? Maybe\n> we could protect against that somehow ... Anyway, that change would\n> certainly be for PG15.\n> \n> \n> regards\n> \n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 31 May 2021 00:15:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "OK,\n\nAs mentioned in the previous message, I've reverted most of 39b66a91bd.\nIt took a bit longer to test, because the revert patch I shared a couple\ndays ago was actually incorrect/buggy in one place.\n\nI'm not entirely happy about the end result (as it does not really help\nwith TOAST tables), so hopefully we'll be able to do something about\nthat soon. I'm not sure what, though - we've spent quite a bit of time\ntrying to address the regression, and I don't envision some major\nbreakthrough.\n\nAs for the regression example, I think in practice the impact would be\nmuch lower, because the queries are likely much more complex (not just a\nseqscan from a table), so the query execution will be a much bigger part\nof execution time.\n\nI do think the optimization would be a win in most cases where freezing\nis desirable. From this POV the problem is rather that REFRESH MV does\nnot allow not freezing the result, so it has to pay the price always. So\nperhaps the way forward is to add \"NO FREEZE\" option to REFRESH MV, or\nsomething like that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 3 Jun 2021 01:02:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 8:02 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> OK,\n>\n> As mentioned in the previous message, I've reverted most of 39b66a91bd.\n> It took a bit longer to test, because the revert patch I shared a couple\n> days ago was actually incorrect/buggy in one place.\n>\n> I'm not entirely happy about the end result (as it does not really help\n> with TOAST tables), so hopefully we'll be able to do something about\n> that soon.\n\nMe too and +1 for addressing the problem soon for PG15.\n\n> I'm not sure what, though - we've spent quite a bit of time\n> trying to address the regression, and I don't envision some major\n> breakthrough.\n>\n> As for the regression example, I think in practice the impact would be\n> much lower, because the queries are likely much more complex (not just a\n> seqscan from a table), so the query execution will be a much bigger part\n> of execution time.\n>\n> I do think the optimization would be a win in most cases where freezing\n> is desirable. From this POV the problem is rather that REFRESH MV does\n> not allow not freezing the result, so it has to pay the price always. So\n> perhaps the way forward is to add \"NO FREEZE\" option to REFRESH MV, or\n> something like that.\n\nThat could be an option. Is it worth analyzing the cause of overhead\nand why my patch seemed to avoid it? If we can resolve the performance\nproblem by fixing heap_insert() and related codes, we can use\nHEAP_INSERT_FROZEN for CREATE TABLE AS as well.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 3 Jun 2021 11:56:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> As mentioned in the previous message, I've reverted most of 39b66a91bd.\n\nShould this topic be removed from the open-items list now?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 19:30:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
},
{
"msg_contents": "\nOn 6/3/21 7:30 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> As mentioned in the previous message, I've reverted most of 39b66a91bd.\n> Should this topic be removed from the open-items list now?\n>\n> \t\t\t\n\n\n\nYep.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 20:48:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of REFRESH MATERIALIZED VIEW"
}
] |
[
{
    "msg_contents": "Hi All,\nI'm working on a C plugin for Postgres (ver. 10). One of the things I \nneed is to automatically add some SQL functions from the plugin during \nits initialization (inside the _PG_init method). It means that during \nloading of libmyplugin.so, _PG_init calls SPI_execute( \"CREATE FUNCTION \nfun() RETURNS void AS '$libdir/libmyplugin.so', 'fun' LANGUAGE C\" \n), which leads to a recursive call of _PG_init. It seems that the problem \nis that the plugin is added to the list after the execution of _PG_init, not \nbefore: \nhttps://github.com/postgres/postgres/blob/2c0cefcd18161549e9e8b103f46c0f65fca84d99/src/backend/utils/fmgr/dfmgr.c#L287 \n. What do you think about changing this to add a newly loaded plugin to \nthe list first and then execute its _PG_init? This change would help keep \nconsistency between the plugin's expectations about database structure \n(tables, functions, etc.) without delivering additional SQL scripts - the \nplugin itself will be able to prepare all the required things.\n\nBest Regards,\nMarcin\n\n\n",
"msg_date": "Thu, 11 Mar 2021 12:29:49 +0100",
"msg_from": "mickiewicz@syncad.com",
"msg_from_op": true,
"msg_subject": "first add newly loaded plugin to the list then invoke _PG_init"
},
{
    "msg_contents": "On Thu, Mar 11, 2021 at 12:29:49PM +0100, mickiewicz@syncad.com wrote:\n> Hi All,\n> I'm working on C plugin for Postgres (ver. 10). One of the thing which I\n> need is to automatically add some SQL functions from the plugin during its\n> initialization ( inside _PG_init method ). It means that during loading\n> libmyplugin.so _PG_init calls SPI_execute( \"CREATE FUNCTION fun() RETURNING\n> void() AS '$libdir/libmyplugin.so', 'fun' LANGUAGE C\" ), what leads to\n> recursive call of _PG_init. \n\nYou can't do that. This might appear to work in the common case, but you have\nno guarantee that you'll have an active transaction when your module is loaded,\nor even that you'll be able to perform writes. For instance modules are also\nloaded in parallel workers, or on hot standby servers.\n\n> It seems that the problem is with adding a\n> plugin to the list after the execution of _PG_init, not before: https://github.com/postgres/postgres/blob/2c0cefcd18161549e9e8b103f46c0f65fca84d99/src/backend/utils/fmgr/dfmgr.c#L287\n> . What do You think about changing this and add a newly loaded plugin to the\n> list and then execute its _PG_init ? This change will help to keep\n> consistency between plugin expectation about database structure (about\n> tables, functions, etc.) without delivery additional SQL scripts - plugin\n> itself will be able to prepare all the required things.\n\nI think you're doing things backwards. You can look at pg_stat_statements on\nhow to achieve ABI compatibility for lib vs SQL definition mismatch.\n\n\n",
"msg_date": "Thu, 11 Mar 2021 19:42:56 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: first add newly loaded plugin to the list then invoke _PG_init"
}
] |
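The pg_stat_statements-style approach Julien recommends is to ship the SQL definitions as an extension script that CREATE EXTENSION runs, rather than issuing DDL from _PG_init(). A minimal sketch of that layout follows; the extension name, file names, and function name are illustrative, not taken from the thread:

```sql
-- myplugin.control (assumed file name), read by CREATE EXTENSION:
--   comment = 'example plugin'
--   default_version = '1.0'
--   module_pathname = '$libdir/libmyplugin'

-- myplugin--1.0.sql, executed transactionally by: CREATE EXTENSION myplugin;
-- Note the correct syntax is RETURNS void, not "RETURNING void()" as in
-- the original message.
CREATE FUNCTION fun() RETURNS void
AS 'MODULE_PATHNAME', 'fun'
LANGUAGE C STRICT;
```

With this layout, _PG_init() only registers hooks and GUCs, and never needs SPI; the SQL-visible objects are created by CREATE EXTENSION, and mismatches between the .so and the installed SQL definitions can be handled with upgrade scripts (e.g. myplugin--1.0--1.1.sql), as pg_stat_statements does.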
[
{
"msg_contents": "Hi,\n\nProcSendSignal(pid) searches the ProcArray for the given pid and then\nsets that backend's procLatch. It has only two users: UnpinBuffer()\nand ReleasePredicateLocks(). In both cases, we could just as easily\nhave recorded the pgprocno instead, avoiding the locking and the\nsearching. We'd also be able to drop some special book-keeping for\nthe startup process, whose pid can't be found via the ProcArray.\n\nA related idea, saving space in BufferDesc but having to do slightly\nmore expensive work, would be for UnpinBuffer() to reuse the new\ncondition variable instead of ProcSendSignal().",
"msg_date": "Fri, 12 Mar 2021 00:31:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "A micro-optimisation for ProcSendSignal()"
},
{
"msg_contents": "On Fri, Mar 12, 2021 at 12:31 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ProcSendSignal(pid) searches the ProcArray for the given pid and then\n> sets that backend's procLatch. It has only two users: UnpinBuffer()\n> and ReleasePredicateLocks(). In both cases, we could just as easily\n> have recorded the pgprocno instead, avoiding the locking and the\n> searching. We'd also be able to drop some special book-keeping for\n> the startup process, whose pid can't be found via the ProcArray.\n\nRebased.",
"msg_date": "Thu, 3 Jun 2021 14:38:07 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
"msg_contents": "Hi Thomas,\n\nYou might have missed a spot to initialize SERIALIZABLE_XACT->pgprocno in\nInitPredicateLocks(), so:\n\n+ PredXact->OldCommittedSxact->pgprocno = INVALID_PGPROCNO;\n\nSlightly tangential: we should add a comment to PGPROC.pgprocno, for more\nimmediate understandability:\n\n+ int pgprocno; /* index of this PGPROC in ProcGlobal->allProcs */\n\nAlso, why don't we take the opportunity to get rid of SERIALIZABLEXACT->pid? We\ntook a stab. Attached is v2 of your patch with these changes.\n\nRegards,\nAshwin and Deep",
"msg_date": "Sat, 17 Jul 2021 11:57:39 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
"msg_contents": "Hi Soumyadeep and Ashwin,\n\nThanks for looking!\n\nOn Sun, Jul 18, 2021 at 6:58 AM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> You might have missed a spot to initialize SERIALIZABLE_XACT->pgprocno in\n> InitPredicateLocks(), so:\n>\n> + PredXact->OldCommittedSxact->pgprocno = INVALID_PGPROCNO;\n\nThe magic OldCommittedSxact shouldn't be the target of a \"signal\", but\nthis is definitely tidier. Thanks.\n\n> Slightly tangential: we should add a comment to PGPROC.pgprocno, for more\n> immediate understandability:\n>\n> + int pgprocno; /* index of this PGPROC in ProcGlobal->allProcs */\n\nI wonder why we need this member anyway, when you can compute it from\nthe address... #define GetPGProcNumber(p) ((p) - ProcGlobal->allProcs)\nor something like that? Kinda wonder why we don't use\nGetPGProcByNumber() in more places instead of open-coding access to\nProcGlobal->allProcs, too...\n\n> Also, why don't we take the opportunity to get rid of SERIALIZABLEXACT->pid? We\n> took a stab. Attached is v2 of your patch with these changes.\n\nSERIALIZABLEXACT objects can live longer than the backends that\ncreated them. They hang around to sabotage other transactions' plans,\ndepending on what else they overlapped with before they committed.\nWith that change, the pg_locks view might show the pid of some\nunrelated session that moves into the same PGPROC.\n\nIt's only an \"informational\" pid, and pids are imperfect information\nanyway because (1) they are themselves recycled, and (2) they won't be\ninteresting in a hypothetical multi-threaded future. One solution\nwould be to hide the pids from the view after the backend disconnects\n(somehow -- add a generation number?), but they're also still kinda\nuseful, despite the weaknesses. I wonder what the ideal way would be\nto refer to sessions, anyway, including those that are no longer\nactive; perhaps we could invent a new \"session ID\" concept.",
"msg_date": "Wed, 21 Jul 2021 17:39:38 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
    "msg_contents": "Hi Thomas,\n\nOn Tue, Jul 20, 2021 at 10:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> > Slightly tangential: we should add a comment to PGPROC.pgprocno, for more\n> > immediate understandability:\n> >\n> > + int pgprocno; /* index of this PGPROC in ProcGlobal->allProcs */\n>\n> I wonder why we need this member anyway, when you can compute it from\n> the address... #define GetPGProcNumber(p) ((p) - ProcGlobal->allProcs)\n> or something like that? Kinda wonder why we don't use\n> GetPGProcByNumber() in more places instead of open-coding access to\n> ProcGlobal->allProcs, too...\n\nI tried this out. See attached v4 of your patch with these changes.\n\n> > Also, why don't we take the opportunity to get rid of SERIALIZABLEXACT->pid? We\n> > took a stab. Attached is v2 of your patch with these changes.\n>\n> SERIALIZABLEXACT objects can live longer than the backends that\n> created them. They hang around to sabotage other transactions' plans,\n> depending on what else they overlapped with before they committed.\n> With that change, the pg_locks view might show the pid of some\n> unrelated session that moves into the same PGPROC.\n\nI see.\n\n>\n> It's only an \"informational\" pid, and pids are imperfect information\n> anyway because (1) they are themselves recycled, and (2) they won't be\n> interesting in a hypothetical multi-threaded future. One solution\n> would be to hide the pids from the view after the backend disconnects\n> (somehow -- add a generation number?), but they're also still kinda\n> useful, despite the weaknesses. I wonder what the ideal way would be\n> to refer to sessions, anyway, including those that are no longer\n> active; perhaps we could invent a new \"session ID\" concept.\n\nUpdating the pg_locks view:\n\nYes, the pids may be valuable for future debugging/audit purposes. Also,\nsystems where pid_max is high enough to not see wraparound, will have\npids that are not recycled. 
I would lean towards showing the pid even\nafter the backend has exited.\n\nPerhaps we could overload the stored pid to be negated (i.e. a backend\nwith pid 20000 will become -20000) to indicate that the pid belongs to\na backend that has exited. Additionally, we could introduce a boolean\nfield in pg_locks \"backendisalive\", so that the end user doesn't have\nto reason about negative pids.\n\nSession ID:\n\nInteresting, Greenplum uses the concept of session ID pervasively. Being\na distributed database, the session ID helps to tie individual backends\non each PG instance to the same session. The session ID of course comes\nwith its share of bookkeeping:\n\n* These IDs are incrementally dished out with a counter\n (with pg_atomic_add_fetch_u32), in increments of 1, on the Greenplum\n coordinator PG instance in InitProcess.\n\n* The counter is a part of ProcGlobal and itself is initialized to 0 in\n InitProcGlobal, which means that session IDs are reset on cluster\n restart.\n\n* The sessionID tied to each proceess is maintained in PGPROC.\n\n* The sessionID finds its way into PgBackendStatus, which is further\n reported with pg_stat_get_activity.\n\nA session ID seems a bit heavy just to indicate whether a backend has\nexited.\n\nRegards,\nSoumyadeep",
"msg_date": "Fri, 23 Jul 2021 22:26:17 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
    "msg_contents": "Hi Soumyadeep,\n\nOn Sat, Jul 24, 2021 at 5:26 PM Soumyadeep Chakraborty\n<soumyadeep2007@gmail.com> wrote:\n> On Tue, Jul 20, 2021 at 10:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I wonder why we need this member anyway, when you can compute it from\n> > the address... #define GetPGProcNumber(p) ((p) - ProcGlobal->allProcs)\n> > or something like that? Kinda wonder why we don't use\n> > GetPGProcByNumber() in more places instead of open-coding access to\n> > ProcGlobal->allProcs, too...\n>\n> I tried this out. See attached v4 of your patch with these changes.\n\nI like it. I've moved these changes to a separate patch, 0002, for\nseparate commit. I tweaked a couple of comments (it's not a position\nin the \"procarray\", well it's a position stored in the procarray, but\nthat's confusing; I also found a stray comment about leader->pgprocno\nthat is obsoleted by this change). Does anyone have objections to\nthis?\n\nI was going to commit the earlier change this morning, but then I read [1].\n\nNew idea. Instead of adding pgprocno to SERIALIZABLEXACT (which we\nshould really be trying to shrink, not grow), let's look it up by\nvxid->backendId. I didn't consider that before, because I was trying\nnot to get tangled up with BackendIds for various reasons, not least\nthat that's yet another lock + O(n) search.\n\nIt seems likely that getting from vxid to latch will be less clumsy in\nthe near future. That refactoring and harmonising of backend\nidentifiers seems like a great idea to me. Here's a version that\nanticipates that, using vxid->backendId to wake a sleeping\nSERIALIZABLE READ ONLY DEFERRABLE backend, without having to add a new\nmember to the struct.\n\n> A session ID seems a bit heavy just to indicate whether a backend has\n> exited.\n\nYeah. A Greenplum-like session ID might eventually be necessary in a\nworld where sessions are divorced from processes and handled by a pool\nof worker threads, though. 
/me gazes towards the horizon\n\n[1] https://www.postgresql.org/message-id/flat/20210802164124.ufo5buo4apl6yuvs%40alap3.anarazel.de",
"msg_date": "Tue, 3 Aug 2021 13:44:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
    "msg_contents": "Hey Thomas,\n\nOn Mon, Aug 2, 2021 at 6:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi Soumyadeep,\n>\n> On Sat, Jul 24, 2021 at 5:26 PM Soumyadeep Chakraborty\n> <soumyadeep2007@gmail.com> wrote:\n> > On Tue, Jul 20, 2021 at 10:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > I wonder why we need this member anyway, when you can compute it from\n> > > the address... #define GetPGProcNumber(p) ((p) - ProcGlobal->allProcs)\n> > > or something like that? Kinda wonder why we don't use\n> > > GetPGProcByNumber() in more places instead of open-coding access to\n> > > ProcGlobal->allProcs, too...\n> >\n> > I tried this out. See attached v4 of your patch with these changes.\n>\n> I like it. I've moved these changes to a separate patch, 0002, for\n> separate commit. I tweaked a couple of comments (it's not a position\n> in the \"procarray\", well it's a position stored in the procarray, but\n> that's confusing; I also found a stray comment about leader->pgprocno\n> that is obsoleted by this change). Does anyone have objections to\n> this?\n\nAwesome, thanks! Looks good.\n\n> I was going to commit the earlier change this morning, but then I read [1].\n>\n> New idea. Instead of adding pgprocno to SERIALIZABLEXACT (which we\n> should really be trying to shrink, not grow), let's look it up by\n> vxid->backendId. I didn't consider that before, because I was trying\n> not to get tangled up with BackendIds for various reasons, not least\n> that that's yet another lock + O(n) search.\n>\n> It seems likely that getting from vxid to latch will be less clumsy in\n> the near future. That refactoring and harmonising of backend\n> identifiers seems like a great idea to me. Here's a version that\n> anticipates that, using vxid->backendId to wake a sleeping\n> SERIALIZABLE READ ONLY DEFERRABLE backend, without having to add a new\n> member to the struct.\n>\n\nNeat! 
A Vxid -> PGPROC lookup eventually becomes an O(1) operation with the\nchanges proposed at the ending paragraph of [1].\n\n[1] https://www.postgresql.org/message-id/20210802164124.ufo5buo4apl6yuvs%40alap3.anarazel.de\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Mon, 2 Aug 2021 19:40:52 -0700",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
    "msg_contents": "Hi,\n\n\nOn 2021-08-03 13:44:58 +1200, Thomas Munro wrote:\n> New idea. Instead of adding pgprocno to SERIALIZABLEXACT (which we\n> should really be trying to shrink, not grow), let's look it up by\n> vxid->backendId. I didn't consider that before, because I was trying\n> not to get tangled up with BackendIds for various reasons, not least\n> that that's yet another lock + O(n) search.\n>\n> It seems likely that getting from vxid to latch will be less clumsy in\n> the near future.\n\nSo this change only makes sense if vxids would start to use pgprocno instead\nof backendid, basically?\n\n\n> From b284d8f29efc1c16c3aa75b64d9a940bcb74872c Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <tmunro@postgresql.org>\n> Date: Tue, 3 Aug 2021 10:02:15 +1200\n> Subject: [PATCH v5 1/2] Optimize ProcSendSignal().\n>\n> Instead of referring to target backends by pid, use pgprocno. This\n> means that we don't have to scan the ProcArray, and we can drop some\n> special case code for dealing with the startup process.\n>\n> In the case of buffer pin waits, we switch to storing the pgprocno of\n> the waiter. In the case of SERIALIZABLE READ ONLY DEFERRABLE waits, we\n> derive the pgprocno from the vxid (though that's not yet as efficient as\n> it could be).\n\nThat's kind of an understatement :)\n\n\n\n\n> -ProcSendSignal(int pid)\n> +ProcSendSignal(int pgprocno)\n> {\n> -\tPGPROC\t *proc = NULL;\n> -\n> -\tif (RecoveryInProgress())\n> -\t{\n> -\t\tSpinLockAcquire(ProcStructLock);\n> -\n> -\t\t/*\n> -\t\t * Check to see whether it is the Startup process we wish to signal.\n> -\t\t * This call is made by the buffer manager when it wishes to wake up a\n> -\t\t * process that has been waiting for a pin in so it can obtain a\n> -\t\t * cleanup lock using LockBufferForCleanup(). Startup is not a normal\n> -\t\t * backend, so BackendPidGetProc() will not return any pid at all. 
So\n> -\t\t * we remember the information for this special case.\n> -\t\t */\n> -\t\tif (pid == ProcGlobal->startupProcPid)\n> -\t\t\tproc = ProcGlobal->startupProc;\n> -\n> -\t\tSpinLockRelease(ProcStructLock);\n> -\t}\n> -\n> -\tif (proc == NULL)\n> -\t\tproc = BackendPidGetProc(pid);\n> -\n> -\tif (proc != NULL)\n> -\t{\n> -\t\tSetLatch(&proc->procLatch);\n> -\t}\n> +\tSetLatch(&ProcGlobal->allProcs[pgprocno].procLatch);\n> }\n\nI think some validation of the pgprocno here would be a good idea. I'm worried\nthat something could cause e.g. INVALID_PGPROCNO to be passed in, and suddenly\nwe're modifying random memory. That could end up being a pretty hard bug to\ncatch, because we might not even notice that the right latch isn't set...\n\n\n> From 562657ea3f7be124a6c6b6d1e7450da2431a54a0 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Thu, 11 Mar 2021 23:09:11 +1300\n> Subject: [PATCH v5 2/2] Remove PGPROC's redundant pgprocno member.\n>\n> It's derivable with pointer arithmetic.\n>\n> Author: Soumyadeep Chakraborty <soumyadeep2007@gmail.com>\n> Discussion:\n> https://postgr.es/m/CA%2BhUKGLYRyDaneEwz5Uya_OgFLMx5BgJfkQSD%3Dq9HmwsfRRb-w%40mail.gmail.com\n\n\n> /* Accessor for PGPROC given a pgprocno. */\n> #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])\n> +/* Accessor for pgprocno given a pointer to PGPROC. */\n> +#define GetPGProcNumber(proc) ((proc) - ProcGlobal->allProcs)\n\nI'm not sure this is a good idea. There's no really cheap way for the compiler\nto compute this, because sizeof(PGPROC) isn't a power of two. Given that\nPGPROC is ~880 bytes, I don't see all that much gain in getting rid of\n->pgprocno.\n\nYes, compilers can optimize away the super expensive division, but it'll end\nup as something like subtraction, shift, multiplication - not that cheap\neither. And I suspect it'll often have to first load the ProcGlobal via the\nGOT as well...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Aug 2021 19:56:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 2:56 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-08-03 13:44:58 +1200, Thomas Munro wrote:\n> > In the case of buffer pin waits, we switch to storing the pgprocno of\n> > the waiter. In the case of SERIALIZABLE READ ONLY DEFERRABLE waits, we\n> > derive the pgprocno from the vxid (though that's not yet as efficient as\n> > it could be).\n>\n> That's kind of an understatement :)\n\nI abandoned the vxid part for now and went back to v3. If/when\nBackendId is replaced with or becomes synonymous with pgprocno, we can\nmake this change and drop the pgprocno member from SERIALIZABLEXACT.\n\n> > + SetLatch(&ProcGlobal->allProcs[pgprocno].procLatch);\n\n> I think some validation of the pgprocno here would be a good idea. I'm worried\n> that something could cause e.g. INVALID_PGPROCNO to be passed in, and suddenly\n> we're modifying random memory. That could end up being a pretty hard bug to\n> catch, because we might not even notice that the right latch isn't set...\n\nAdded.\n\n> > /* Accessor for PGPROC given a pgprocno. */\n> > #define GetPGProcByNumber(n) (&ProcGlobal->allProcs[(n)])\n> > +/* Accessor for pgprocno given a pointer to PGPROC. */\n> > +#define GetPGProcNumber(proc) ((proc) - ProcGlobal->allProcs)\n>\n> I'm not sure this is a good idea. There's no really cheap way for the compiler\n> to compute this, because sizeof(PGPROC) isn't a power of two. Given that\n> PGPROC is ~880 bytes, I don't see all that much gain in getting rid of\n> ->pgprocno.\n\nYeah, that would need some examination; 0002 patch abandoned for now.\n\nPushed.\n\n\n",
"msg_date": "Thu, 16 Dec 2021 16:00:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A micro-optimisation for ProcSendSignal()"
}
] |